We sit down with Mohammed Billoo, founder of Mab Labs and author of the Embedded Linux Essentials Handbook, to explore the world of embedded Linux profiling and optimization. Mohammed shares hard-won lessons from the field, including debugging a scientific instrument that mysteriously crashed after 60-minute runs and optimizing a sophisticated MANET platform that took a 20% throughput hit.
The conversation reveals a fundamental truth: in embedded Linux, the CPU is rarely the bottleneck. Mohammed walks us through his systematic approach to performance problems, starting with simple tools like htop before diving into specialized instrumentation. We discuss the critical difference between VmSize and VmRSS for memory analysis, why dumping output to the console can kill boot times, and how to leverage kernel configurations for maximum diagnostic bang-for-buck. Mohammed emphasizes the importance of building instrumentation into systems from day one—not for premature optimization, but to give your future self the data needed when problems inevitably surface. The discussion also touches on how LLMs can accelerate the learning curve for complex tools like Valgrind and perf, while stressing that physical reality remains the ultimate arbiter of system performance.
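The VmSize/VmRSS distinction mentioned above can be checked directly on any Linux target: `/proc/<pid>/status` reports VmSize (total virtual address space, including mappings never touched) and VmRSS (pages actually resident in RAM). A minimal sketch, assuming a Linux `/proc` filesystem; the helper name `read_vm_stats` is our own, not something from the episode:

```python
def read_vm_stats(pid="self"):
    """Return (VmSize, VmRSS) in kB from /proc/<pid>/status on Linux.

    VmSize counts all mapped virtual memory; VmRSS counts only pages
    currently backed by physical RAM, which is usually the number that
    matters on a memory-constrained embedded board.
    """
    stats = {}
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            key, _, value = line.partition(":")
            if key in ("VmSize", "VmRSS"):
                # Lines look like "VmRSS:      1234 kB"
                stats[key] = int(value.split()[0])
    return stats.get("VmSize"), stats.get("VmRSS")

vm_size, vm_rss = read_vm_stats()
print(f"VmSize: {vm_size} kB (virtual address space)")
print(f"VmRSS:  {vm_rss} kB (resident in physical RAM)")
```

A large gap between the two figures is normal (shared libraries and lazy allocations inflate VmSize); a steadily growing VmRSS over a long run is the kind of signal that points at the slow leak behind a crash after an hour of operation.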
"When you first get started, you have generally this arrogance that like, oh, it works fine. I've tested it. It's good to go. But then as you get more experience, as you become a more senior-level engineer, that arrogance, you start to kind of strip away a lot of that arrogance. You get humbled pretty quickly." — Mohammed Billoo
"The CPU is very rarely the bottleneck because it's meant to, and the drivers are implemented in Linux in such a way that they're intelligent enough that they can hand off a lot of the things of the CPU to coprocessors so that the CPU is really idle." — Mohammed Billoo
"I don't convince myself of a claim that I'm making until I have data to back it up. So I don't say, oh, you know, this is working fine. Like, well, again, what does fine mean? Or, you know, what does well mean? And what is the data to prove that?" — Mohammed Billoo