Applying 'Accelerate' Principles to Embedded Systems | Agile Embedded Podcast
In this episode of the Agile Embedded Podcast, hosts Jeff Gable and Luca Ingianni discuss how the principles from the book 'Accelerate: The Science of Lean Software and DevOps' by Nicole Forsgren, Jez Humble, and Gene Kim apply to embedded systems development. A listener prompted the discussion by asking about the book's relevance to embedded systems. Jeff recently read the book at Luca's suggestion, and the two provide an overview of the book and examine what differentiates embedded systems development.
Jeff and Luca delve into how the principles from this book, which focuses on Lean software and DevOps, can be applied to embedded systems development. They discuss the nuances of embedded systems, the relevance of the DORA metrics, and share insights on how the capabilities and processes from the book translate to the unique challenges of embedded systems. Tune in to understand how you can adapt and implement these best practices in your projects.
00:00 Introduction to the Agile Embedded Podcast
00:06 Overview of the Book 'Accelerate'
00:50 Research Methodology and Key Findings
02:56 DORA Metrics Explained
05:30 Key Capabilities for Effective Organizations
18:41 Applying 'Accelerate' Principles to Embedded Systems
20:19 Challenges and Considerations in Embedded Systems
34:10 The Importance of Logging and Feedback Loops
37:43 Empowering Teams and Encouraging Experimentation
41:58 Final Thoughts and Recommendations
Hello and welcome to the Agile Embedded Podcast. I'm Jeff Gable. And I'm Luca Ingianni. And today we are tackling a listener question asking about the book Accelerate, written by Nicole Forsgren, Jez Humble, and Gene Kim. Its full title is Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations.
And a listener wrote in and asked, essentially, how we thought the principles in this book applied to embedded systems development. Luca's been bugging me to read this book for a while, and I finally sat down this week and did it. So today we're going to talk about it: an overview of the book, but also what makes embedded systems development special, or not, in a way.
Actually, Jeff, be honest, did you read the entire thing?
I did not read the entire thing, and I'll get into why. The point of this book is that these three people spent a lot of time doing actual research on a wide swath of organizations, and doing it scientifically. They had these ideas of DevOps practices that made organizations more capable, and they spent a lot of time building a research methodology that would actually let them answer that question with rigor, as opposed to just being hand-wavy and saying, yeah, Agile's good, but trying to be more specific about it.
I will say I skimmed very lightly all of the justification of their research methodology, but I did read their conclusions and their intros, and the part on how to actually apply these principles in a transformation. So no, I did not read the entire thing carefully, but I read the important parts. Put it that way.
As you should. This is exactly my experience as well. It's a fairly fat book, what, three or four hundred pages from memory, but you really only need like 120 of them, where the meat is, where the practical stuff is. The rest of it is, I'm sure, very elaborate and very impressive and scientifically very well made, but frankly, I didn't read it either.
And I suspect few people have, because, bottom line, I think we can trust the authors.
So there's two parts to that. I want to give an overview of their conclusions and the key capabilities they identify that lead to effective organizations, and many of them are things that we've talked about a lot.
But I also want to capture a discussion about the metrics they use to actually measure these things. And Luca, you had some strong, positive feelings about those, the DORA metrics. What does DORA stand for?
DORA is DevOps Research and Assessment.
Yeah, exactly. Which is somehow in cahoots with Google; I think they are somehow independent but paid for by Google, or something like that, I don't know the details. Long story short, maybe to give a bit of background: this book Accelerate came from research that the authors did, originally from 2014 to 2017.
Those were the famous State of DevOps reports. At that time, they were still run by Puppet Labs. They surveyed thousands of organizations across the globe, big and small, modern and old-fashioned, and tried to...
And different industries.
Oh, yes, exactly. And tried to figure out what practices they applied and how those were working for them, and tried to find commonalities. Long story short, they found, broadly speaking, three groups of organizations: low performers, medium performers, and high performers. Plus, I think, unicorns. And those groups each shared similar characteristics in terms of their capabilities and also in terms of the behaviors they exhibited, for instance, how often they were able to deploy to production.
The low performers deployed between once a month and once every six months; the high performers, by contrast, a couple of times a day. So they did this whole bunch of research and then distilled it into a book, which is the book Accelerate. And this maybe also explains why they have a list of capabilities, which is just the set that they identified as meaningful, and a set of metrics, which are now commonly known as the DORA metrics, which they used during their research and which have proven to be quite powerful for figuring out how well your organization is doing.
Sure.
And let's say what those DORA metrics are. There are four of them, grouped into two subgroups.
One subgroup is performance. The first metric there is delivery lead time: essentially, from the time you identify a change you want to make, how long until that change is actually deployed in front of a customer? The other is deployment frequency: how many deployments do you make per unit of time, per day, per week, per month?
The other two metrics, the other side of the coin, are about stability. You could have excellent deployment frequency, but that could be a symptom of thrashing in your organization, where you're constantly layering fixes on and breaking things. So the third metric is time to restore service. This maybe doesn't apply to embedded systems as much, maybe to IoT devices, but essentially: if you have an outage, how long until you've restored service? And the fourth is the change fail rate: what fraction of your changes induce a failure?
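To make these concrete, here is a rough sketch of how a team might compute the four DORA metrics from its own deployment records. This is our illustration, not something from the book; the record format and the sample numbers are entirely hypothetical:

```c
/* Hypothetical sketch: computing the four DORA metrics from a
 * team's own deployment log. Record format and numbers are made up. */
#include <stdio.h>

typedef struct {
    double commit_hour;    /* when the change was committed (hours)   */
    double deploy_hour;    /* when it reached the customer            */
    int    failed;         /* did this deployment cause an incident?  */
    double restore_hours;  /* time to restore service, if it failed   */
} deploy_record_t;

int main(void) {
    /* one (made-up) week of deployments */
    deploy_record_t week[] = {
        {  0.0,  4.0, 0, 0.0 },
        { 10.0, 12.0, 1, 0.5 },   /* this one briefly broke something */
        { 30.0, 33.0, 0, 0.0 },
        { 50.0, 52.0, 0, 0.0 },
    };
    int n = (int)(sizeof week / sizeof week[0]);

    double lead_sum = 0.0, restore_sum = 0.0;
    int failures = 0;
    for (int i = 0; i < n; i++) {
        lead_sum += week[i].deploy_hour - week[i].commit_hour;
        if (week[i].failed) {
            failures++;
            restore_sum += week[i].restore_hours;
        }
    }

    printf("delivery lead time (mean): %.1f h\n", lead_sum / n);
    printf("deployment frequency:      %d per week\n", n);
    printf("change fail rate:          %.0f %%\n", 100.0 * failures / n);
    printf("time to restore (mean):    %.1f h\n",
           failures ? restore_sum / failures : 0.0);
    return 0;
}
```

The point of the sketch is simply that all four numbers come from the same log, which is why, as discussed next, they only make sense when read together.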
Exactly. And that was one of the things that I found quite insightful: you really shouldn't look at any of those metrics in isolation.
It would be easy to stare only at deployment frequency and say, this is something we need to work on, we should get our deployment frequency up. That's easy: I'm just going to be sloppy and kick half-done stuff out the door. Done, there's your higher deployment frequency. But of course that is not in anyone's interest.
And so if you couple it with the change fail rate, then you're onto something, because you can ask: how far can I push my deployment frequency without a meaningful or problematic change in the change fail rate?
And one of the big theses of the DevOps movement, and also of this podcast, is that the more frequently you deploy, with automated protections in place, the smaller and lower-risk those individual changes are, and therefore a lower percentage of them will actually cause problems. But you're right: if you did that naively, if you essentially put no filters between a commit and actually deploying the code in front of customers, then most of them would probably fail, because developers make mistakes, because they're human.
Yes. And you could do something unwise, such as coupling somebody's bonus to the deployment frequency, and just be amazed at how quickly somebody can deploy if their salary depends on it. The point is, those four metrics really need to be looked at together, and then they will be a helpful guide to improve your practice.
But only then.
And then, maybe speak to some of the key capabilities. I don't know that we're going to go through all of them, because there are 24, grouped into five categories. But maybe give a quick overview, and then we can start to talk about how embedded systems are special, or not, as the case may be.
Yeah, so they are interesting enough that we should have spoken about them, and probably in fact have. They go from, let's say, the technical to the organizational, to the process, to the cultural. As far as technical capabilities, there's stuff in there like: you should have test automation. That is a capability you need to strengthen if you want to improve the four metrics. You should have version control. And again, you would think that this is a given in the year 2024, but apparently it isn't. And actually, it's a bit of a challenge, because I think by now everybody has some kind of version control, but in the book they are stricter: they say version control for all artifacts.
Right.
Which includes things like, I don't know, firmware binary blobs, graphics, et cetera. So that's where it gets more challenging.
I would say the instantiation of that for the embedded systems industry is to version control your toolchains. The easiest way to do that is to have Docker images of your compiler and every other aspect of your toolchain that you actually use to build, test, and release software. Debuggers, eh, I think debugging is a purely development activity, so that can be left up to individual engineers' preferences: you can have instructions saying this is our default debugger, but if you have other preferences, fine. I have less strong feelings on that. But absolutely everything that's part of your build pipeline really only counts if it's actually captured in version control. And that's often not the case: people stand up a dedicated Jenkins build server, and that's the problem with Jenkins, its configuration is often not captured in version control.
So I would say your build and test pipeline, and all of the tools associated with it, all need to be captured in version control for you to really have achieved that capability. Agreed or no?
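One small, concrete practice in that spirit (our illustration, not something the book prescribes) is to have the version-controlled pipeline stamp every binary with its exact source and toolchain provenance. A minimal C sketch, assuming the build system injects a GIT_HASH define and a GCC or Clang compiler:

```c
/* build_info.c: a sketch of stamping a firmware image with its exact
 * source and toolchain provenance. GIT_HASH is assumed to be injected
 * by the build pipeline, e.g.:
 *   gcc -DGIT_HASH=\"$(git rev-parse --short HEAD)\" build_info.c
 * __VERSION__ is the compiler's own version string on GCC/Clang. */
#include <stdio.h>

#ifndef GIT_HASH
#define GIT_HASH "unknown"   /* fail visibly if the pipeline forgot it */
#endif

const char build_info[] =
    "git=" GIT_HASH
    " cc=" __VERSION__
    " built=" __DATE__ " " __TIME__;

int main(void) {
    /* on a real target this would go to a debug UART or a fixed
     * flash location instead of stdout */
    puts(build_info);
    return 0;
}
```

With something like this in place, any released binary can be traced back to the commit and compiler that produced it, which is the practical payoff of "version control for all artifacts."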
Exactly. So long story short, I think for each of the capabilities we're calling out, there's a lot of nuance and a lot of stuff we could talk about.
But yeah, just to call out a few, I'm not going to talk about all 24. You've got the very basic technical capabilities, such as version control, continuous delivery, test automation, and so forth. You've got architecture capabilities, such as loosely coupled architecture and, interestingly, empowered teams.
They list that under architecture capabilities. You have product and process capabilities, such as being able to work in small batches, to perform experiments, and so on. You need management capabilities, such as being able to limit work in progress, and having things like a lightweight change approval process.
That particular one I want to touch on before you go on. I remember this from The Phoenix Project, which was written by Gene Kim, I think with a couple of coauthors. It was a business novel that described a DevOps transformation of an organization, and it talked about the power of their change approval board: essentially being able to mediate conflicts and prevent deployment errors where teams were not communicating correctly.
And this was interesting: through the course of this research, I think, they came to the conclusion that an external change approval board was actually counterproductive. No single developer should be able to push a change to production unilaterally, but essentially, if you just have peer review by an experienced reviewer, that is enough to prevent most problems from getting through, and it provides the right balance between stability and throughput.
I thought that was an interesting conclusion that they came to as a result of their research.
Exactly. Because if you've got a central board like this, then your lead time skyrockets, your deployment frequency craters, your time to restore service is in peril. It's a bottleneck. So essentially it's bad for everything but the change fail rate. Because, let's assume that this change approval board actually performs a useful function: then they will hopefully detect defects and abort a change that would have an actual impact. So yes, it does have a positive effect, it does increase stability, but I think the argument is that it comes at too high a cost in terms of the other metrics.
This was interesting, because "lightweight change approval" itself sounds very bureaucratic, but I think the point is it shouldn't be. It should be, just like you say, something really informal, a quick peer review, just to iron out the most egregious issues. As a matter of course, test automation et cetera should catch all of the common bugs, and the review really should focus on other things.
And of course, if you need heavyweight change approval because you have so many dependencies and you need to ensure that your change doesn't break something else, then you have a different problem, one that is addressed through the architecture capability of having a loosely coupled architecture. And there we go: all of this builds on one another, forms this web of interconnected capabilities. If you have a loosely coupled architecture, then you can afford to have lightweight change approval, and then you can afford to deploy frequently without any problems, and so on.
This is the fun of DevOps: it is so powerful, but at the same time there's so much you have to keep in mind, and that's what makes it daunting. It's also what gives it its power, right? If you understand DevOps to be more than just standing up a Jenkins server, then it has a lot of intricacies and nuance to it.
And I think what we're going to do after we go through those capabilities is focus on embedded systems and try to figure something out. This book was written not with embedded systems in mind; it was written for general software, presumably software that runs in data centers or in the cloud or something. How much of what is being talked about here applies or makes sense in the context of embedded systems? Anyway, before we get sidetracked even further, let's quickly finish the capabilities. The last set is what they call cultural capabilities, where they call out things like fostering a generative culture and encouraging learning.
Explaining generative culture fully would take us too far afield; there's a gentleman called Westrum who came up with this model of different organizational cultures, and a generative culture is the best kind of culture you can have. Then there's collaboration amongst teams, and the interesting question is: if you have a loosely coupled architecture, do you still need collaboration amongst teams?
They also call out support for transformational leadership. And what I find interesting as well is job satisfaction. It is quite explicitly named a valuable capability of an organization to make the work enjoyable.
Shocking.
Yes.
Yeah. The brief bullet point under that says this particular measure of job satisfaction is about doing work that is challenging and meaningful, and being empowered to exercise your skills and judgment. It's also about being given the tools and resources needed to do your job well. Pretty straightforward.
Yeah, you would think that people would be aware of that by now, but it's not quite that easy, is it?
Sure. All right. So we've got these 24 capabilities in five categories. We've got the four metrics that they use to essentially measure the performance impact of all of the different factors they distilled into these 24 capabilities. Essentially: the more of these capabilities you build in, the better those four important metrics are going to be.
And those four metrics are shown to be very indicative of the capability and success of, at least, the IT operation in your company. So what makes embedded systems special? What are the aspects of embedded systems development where this applies perfectly, and where do we have to modify it a little bit?
I would say one axis that we need to consider is how updatable the system is. Some embedded systems, especially anything that's an IoT device, are almost similar to a web app, in that you could achieve continuous delivery. You could push it as far as any web or cloud application goes in terms of deployment frequency and lead time, in terms of getting code actually running on what a user is using.
But a lot of embedded systems are not that way. Some are in between; maybe it's a medical device. I work on these a lot: medical devices that are not Internet-connected, but that can get software updates by shipping USB keys to your customer or something like that, or by having service technicians visit on site.
And then there are some that will never be updated: toys, or anything that's pre-packaged and delivered to a customer. You're never going to see that device again; it will never get a software update. And so there is a limitation there that I think means the principles from this book apply in a different way.
What other aspects of embedded systems make them special, worth considering when we look at how these principles apply?
There's the general caveat that if you're working on an embedded system, then you've by definition got hardware involved. That just skews the metrics in a certain direction. Maybe you can't have such a short lead time for deployment or delivery, because just physically creating a new prototype board takes time, more time than a regular compiler run would. But other than that, yes, just like you say, there are some classes of embedded systems where I would say everything in the book applies straightforwardly: everything that is networked, all modern IoT-type devices, like this electric scooter I think we had on the show once.
Yes.
And then there is the opposite, where you cannot deliver more than once. Just like you said: you create the device, and then at some point a truck comes and takes it away, and then it's just gone. And the question is, in that case, what is the meaning of delivery? Is it still something that we look at? Do we need to redefine it? Or does it just evaporate and become meaningless?
I've talked in the past about running an agile organization within the product development organization, maybe within a larger organization that's not so agile. From a business standpoint, if you're selling these products that get on a truck and you never see them again, then yes, at that point you can no longer update that unit; you can only get customer feedback and fold it into a future version of the product. And that's something you should try to achieve as a company.
But up until then, up until the day the device rolls off the manufacturing line, you at least have the opportunity to implement all of these practices within your product development phase. There is a phase that ends when a certain version of your product rolls off the line and gets on a truck. During that phase, you can have as many automated tests as you like, and deploy to devices that are in QA as frequently as you can. And so I would think a lot of these practices can be applied in that microcosm. What do you think? Does that make you uncomfortable, or am I not pushing it far enough?
I think you're making an interesting point here, which is that with quote-unquote regular systems, it's easy to draw the system boundaries, right? You can say deployment always means deployment onto my web server, and my web server is under my control, and I can deploy at any time.
Whereas with more limited devices, simple embedded systems that don't have networking capabilities and over-the-air updates and all that stuff, you maybe need to think of different systems with different boundaries. Just like you said: in development, you should be able to deploy to your test bed, to your hardware-in-the-loop rig or whatever you have, with tremendous frequency and with very low lead time. If it takes you weeks to deploy onto your hardware-in-the-loop test bed, then something is wrong; that should be essentially instant. And in those cases, you would also meaningfully be able to track things like a change fail rate: how often do you deploy onto the HIL rig and it just breaks?
And then time to restore service: how long does it take you to get back up and running and close the feedback loop again? But there is also, if you cast a wider net... you can say, even in the case of a gadget that will never, ever be updated, because it's a Tamagotchi and it costs like five bucks and kids play with it and flush it down the toilet or whatever, and it will just never be updated, because, just, no. Should you still think about, for instance, deployment frequency in terms of how often you update the design that rolls off your line and then gets put on a truck? And similarly with delivery lead time, if somebody has a new idea for a differently colored Tamagotchi. I wonder if our listeners know what a Tamagotchi is.
I don't know what a Tamagotchi is.
You don't know what a Tamagotchi is? It was all the rage when I was at school. It was this electronic toy; it was like a beeper. I wonder how many listeners know what a beeper is. Those devices that you could send little numbers to,
like a pager.
Yes, exactly. A pager. So this was a toy, not unlike a pager. It was this tiny thing with a very simple LCD display, and it was a digital pet: you had to feed it and play with it, and if you didn't, eventually it would die. Anyway, they were all the rage when I was, I don't know, 13, I can't quite remember. And that is just the archetypical embedded system that gets sold and then never thought about anymore, as far as the manufacturer goes. But still, even with those things, shouldn't you track things like, well, maybe not time to restore service... ah, actually, you could: you could define your system such that, if somebody has a defective Tamagotchi, restoring service means how long it takes for you to give them a new one.
It looks very different: restoring service in that sense means shipping the customer a new device that's not broken. That sort of thing. Sure. Long story short, some metrics still work straightforwardly, but the interesting thing is you may end up thinking of different systems with different boundaries, where the metrics get looked at differently. Within the development organization, just like you said, they essentially retain their traditional meaning.
And I would say that when you deploy to an embedded system and you don't get a chance to take it back, when it rolls off the line and onto a truck, the risk at that moment is higher, because you don't have the chance to fix it.
So you want to have very high confidence that your firmware has as few bugs as possible, and no deal-breaking bugs, as it were. So the more often you have deployed internally up until that point, exercised your test suite, exercised your QA process, and reduced the size of those internal deployments, the better. Deployment meaning, here, of the final binary that's actually going to get flashed into these devices on the manufacturing line. For the very last change you make before you don't get to go back, it would sure be nice if that change was pretty small, and not: oh, we just let 14 different features that we've been working on for months drop, and now we've got to freeze for two months to exercise this, but we don't have time to do that. That's just a recipe for sweaty palms and not being confident that you haven't let a problem slip through.
Yeah, I think that's an interesting point. If you've got a system like that, where you've got only one chance to get it right, deploy working software onto it and ship it, and then it's just gone, then I think you need, during development, a system that does very well in terms of the DORA metrics: very short delivery lead times, very high deployment frequencies, very small change fail rates, et cetera, so that, just like you said, you have a chance of shaking out all the bugs before you go to final flashing, after which all hope is lost.
Yeah, you're rehearsing that final flash that you can't take back. You want that process to be very well rehearsed, and that's what a rapid deployment frequency during development is: rehearsal.
What else? That was the main thing: for some systems you have this higher risk level, where you have much less frequent updates, if any at all, once they're in the field. What about the fundamental limitations of hardware? You have to deal with mechanical engineers and electrical engineers.
Don't you just hate that? Those freaking mechanical engineers. But we're laughing because both of us were trained as mechanical engineers.
Yes, indeed. That's an interesting thing, right? Because if you remember the capabilities, the two architecture capabilities that were called out were loosely coupled architectures and empowered teams. So that was both of them, actually, and they're related, aren't they? And the thing is, if you've got an electronics team working on, say, a sensor package, and a software team working on integrating that sensor, then they are anything but loosely coupled, are they?
And we even had an early episode, gosh, this might've been episode three or four, two years ago? Oh dear, a long time ago now. Essentially on cross-functional teams and how effective they are: making sure that everyone whose skill set is necessary to create a working product is, as much as possible, within the limits of your organization's scale, together. The electrical engineer and the mechanical engineer and the firmware engineer should be side by side, working on one product line, as opposed to functional teams, where you have the EE team and the mechanical team and the firmware team all working across four different product lines. It's much less effective that way.
Exactly. And of course, not just less effective, everything also gets stiffer, right? You have much longer lead times, because the EE guys and girls need to do their stuff, and then they hand it over to software, and then software needs to do their thing, and before you know it, two months are up.
And at the same time, it's very tightly coupled. The electrical engineers can't just decide to go with a different sensor, let's say, or a different interface or something, for whatever good reason they come up with, without having to hash it out with the software people. But what if, by contrast, they were all on the same team? They were on the sensor team, and it contained electrical engineers who took care of the electrons flowing, and software engineers on that same team, side by side, physically or virtually, taking care of the bits and bytes, and, for all I know, maybe a couple of physicists taking care of the photons, or whatever it is they're measuring.
How quickly they would be able to change direction if they decided that, okay, fine, we need a different sensor, this one is not cutting it at all. Or: we have supply chain issues, so we can't actually use this sensor, even though we want to, because we can't buy it. This is how deep the rabbit hole goes, right? If you want to improve something as innocuous as your delivery lead time, maybe you need to reshuffle architecture elements. Maybe you need to reshuffle your teams. Maybe you need to rethink responsibilities in order to get there.
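Here's a rough sketch, with made-up names, of what that loose coupling can look like in firmware: the application only ever talks to the sensor through a narrow interface, so the sensor team can swap parts without touching application code.

```c
/* A hypothetical sketch of a loosely coupled sensor interface in C:
 * application code depends only on this struct of function pointers,
 * so the sensor team can swap in a different part by shipping a new
 * driver table, without touching the application. */
#include <stdint.h>

typedef struct {
    int (*init)(void);
    int (*read_millikelvin)(int32_t *out);  /* fixed units in the contract */
} temp_sensor_t;

/* one concrete driver for a (made-up) part; a replacement sensor
 * would ship its own init/read pair behind the same interface */
static int acme123_init(void)             { /* configure the part */ return 0; }
static int acme123_read(int32_t *out)     { *out = 296150; return 0; }

const temp_sensor_t acme123_sensor = { acme123_init, acme123_read };

/* application code: depends only on the interface, never on the part */
int log_temperature(const temp_sensor_t *s) {
    int32_t mk;
    if (s->read_millikelvin(&mk) != 0) return -1;
    /* ... write mk to the on-device log ... */
    return 0;
}
```

Switching to a different sensor then means providing a new driver table; log_temperature never changes, which is exactly what makes lightweight change approval and quick direction changes affordable.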
Yeah, absolutely.
And I think this sort of thing happens more frequently in embedded systems, just because you've got so many cross-cutting problems. You've got physicists and mechanical engineers and electrical engineers and software engineers all involved in a single capability, perhaps.
Another one that I wanted to touch on, and we talked a little bit about this: the scale of being able to update embedded systems ranges from IoT devices, where you could theoretically do continuous deployment, all the way down to the ones we've talked about, where they get on a truck and you never see them again.
In between is a really interesting area, and this comes up a lot in my work. I work on medical devices; they are often not networked. But say we build a small manufacturing run, and the devices are out with essentially beta users, or maybe they're in a clinical trial, or in the phase after a clinical trial where you're doing final-stage user testing before your FDA or other regulatory submission.
So these devices are out there being used in real-world scenarios, and then you will get them back, or can get them back, especially if there's a problem. And this speaks to where I'm leading with this: one of the key product and process capabilities they list is to gather and implement customer feedback.
And where I'm going with this is that technical capabilities you build in early will pay huge dividends later, in terms of logging. If your device is not networked and you don't have, say, Memfault (we had Francois from Memfault on the show; they'll deliver core dumps and crash reports and statistics and all of that live from your IoT devices), if you can't get those things remotely, what you can do is build in the capability to store them on your device. Log: especially whenever you have a fault, you should take a snapshot of the system state, and maybe several time steps leading up to it. You can log statistics of your operation over time.
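A minimal sketch of what such on-device fault logging might look like; the names and sizes here are hypothetical, and on real hardware the buffer would live in flash or noinit RAM so it survives a reset:

```c
/* Hypothetical sketch of on-device fault logging for a non-networked
 * device: on a fault, snapshot recent state into a small ring buffer
 * that a technician can later dump over UART or USB. */
#include <stdint.h>
#include <string.h>

#define FAULT_SLOTS 8

typedef struct {
    uint32_t timestamp;     /* e.g. seconds since power-on            */
    uint16_t fault_code;
    uint16_t task_id;       /* which task/ISR was running             */
    uint32_t registers[4];  /* a few key registers or state variables */
} fault_record_t;

/* in practice: place in a noinit section or reserved flash page */
static fault_record_t fault_log[FAULT_SLOTS];
static uint8_t fault_next;  /* next slot to overwrite */

/* called from the fault handler: overwrite the oldest slot */
void fault_log_capture(uint32_t now, uint16_t code, uint16_t task,
                       const uint32_t regs[4]) {
    fault_record_t *slot = &fault_log[fault_next];
    slot->timestamp  = now;
    slot->fault_code = code;
    slot->task_id    = task;
    memcpy(slot->registers, regs, sizeof slot->registers);
    fault_next = (uint8_t)((fault_next + 1) % FAULT_SLOTS);
}

/* called by the service tool over UART/USB to dump the log */
const fault_record_t *fault_log_read(uint8_t index) {
    return (index < FAULT_SLOTS) ? &fault_log[index] : 0;
}
```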
Retrieving those logs isn't fast, because you do have to physically get a device back, or have a service technician plug a USB stick into the device and download the logs, but it's a lot better than nothing. I have worked on programs both ways. On some, we had nothing on the device, and someone would say, it threw this fault code, and we're like, okay, we'll try to reproduce it in the lab.
And I've worked on devices where it's a pain because the device is not networked, but someone can download the logs and send them to you, and then you have a really good chance of figuring out what went wrong, making a change, and going out and flashing those devices. Because you have a limited number of them, they're in beta testing, and it is actually feasible to go out and physically update all of these low-to-medium-volume devices. And you can measure that. Having that logging capability is something technical that you, as an embedded systems developer, can do that then enables a more rapid loop. It's essentially the lead time: you get a fault report, and how soon can you issue a fix and have it on the device? Thoughts about that?
Yeah, that's certainly very important: being able to close those feedback loops, and close them early, close them quickly, and close them with appropriate throughput. And this may mean, as you said, different things in different situations.
But what I also found interesting was that, if you look at the product and process capabilities, they call out team experimentation. That sounds innocuous, but it actually means that in a larger system, with potentially multiple teams working on multiple aspects of that system, each team should be able to experiment safely, without asking anyone for permission or anything like that. If they want to switch something around in their part of the project, they just can. And I think that is just a lot more challenging in embedded systems, because of the longer communication lines that you just talked about.
Yeah, all the capabilities feed into each other: insofar as you have a loosely coupled architecture, where people have the freedom to experiment within their own boundaries, that enables this. But calling it "foster and enable team experimentation", that's the name of their capability, and calling that out specifically says that essentially you shouldn't get in trouble for trying new things.
There are architecture capabilities that enable that, but then, from a process and cultural standpoint, you should be allowed and empowered and encouraged to try new things. If they work, fantastic, then they actually get folded into the product. And if they fail, everyone says, that was a nice try, but nope. Rather than blaming people for wasting time trying something new. There's obviously a balance there, but I think it's an indicator of a healthy organization when people feel the freedom to try something out, and maybe it will turn out to be a powerful capability for the product.
Yeah, and it can be quite difficult to really find ways to experiment safely and meaningfully. I've got a similar problem: I create a lot of workshop products and training products and the like, and I've always been wondering, how can I have a training MVP? What would a minimum viable training look like? I can't just show up at the customer site and say, oh, here's five slides, if you like those, I'll make ten more. That's not how it works, is it? And it's similar with embedded devices: you can't just say, oh, here's half the device, if you like it, you'll get the other half later.
So my point is, you need to be creative about how you find ways to iterate quickly, so you don't have to build the entire thing before you can finally get it out there and receive feedback. And that's just a little bit harder than in regular software, where it's hard and scary enough, frankly.
But it's even more important in such situations, isn't it?
Yeah. Like we said, any time the risk is higher, finding ways to iterate more quickly reduces the risk. The content of each individual change is smaller, you get rehearsal time, and you just gain confidence.
Yeah, this was one of the interesting things about Accelerate: they could show with actual data, and not just assert, that higher velocity gives higher quality. So if you can iterate more quickly, if you can deliver more quickly, then that is not, as some people might fear, detrimental to quality.
In fact, the opposite is true. It enables you to achieve higher quality, because you can shake out issues more quickly, and because you can learn about your product and your customers' needs more quickly.
Amen. We're running a little long on time. Anything else you want to cover before we wrap up?
No, I think I'd just like to consider this broadly: okay, is this a relevant book for embedded systems development?
If so, in what way? It's still fresh in your mind. What do you say?
Yes.
That was easy.
That was easy. Done. Yeah. Listeners, again, the book is called Accelerate. I would recommend it, and I would recommend you do not read it front to back: essentially skim it and zoom in on the parts that are intriguing to you, because there is a lot of density that I think is not value-add to someone who is already on board with the principles and wants to go straight to the meat and cut to the chase. For someone who is maybe skeptical that any of these principles can apply: if you're willing to actually put in the time to read the book a little more carefully, especially the justification and the description of the research methodology, knock yourself out.
But overall, I think the recommendations they make apply very well to our industry. You just may need to be a little more creative in how you apply them, due to the limitations.
Exactly. And let me also point out that this is one of those books that are really good to read multiple times over the years. In preparing for this episode, I skimmed over it once more, and I spotted things that hadn't caught my eye before, where I thought, oh, hang on, there's actually more nuance to this than I realized when I first read it. So, as you said, Jeff, it's a very dense book, and it contains a lot of excellent advice. And, of course, it is absolutely relevant to embedded systems. I think a lot of the advice it gives can just be taken unchanged.
Yes. And the things that can't be taken unchanged, I think, do apply with just a little bit of modification. But most of them apply with no modification at all.
Great. All right, Luca, where can people go to find you online?
You can go to luca.engineer, and I promise that's a real website. I have to preface it that way because it sounds fake, but it's not: .engineer is a proper top-level domain, and I happen to have luca.engineer. That's an excellent jumping-off point to find my other podcasts (this is not the only podcast I have) and my website, which has my contact info in case you want to get in contact with me. Jeff, what about you?
You can go to jeffgable.com. Especially if you're in the medical device industry, and you're looking for help either in bringing a product to market or in making your software and firmware teams more effective, I can help you out. So please reach out. All right, that's it for this episode. I'm Jeff Gable. And I'm Luca Ingianni. And we will see you next time. Thank you.