Embedded software development has its own unique set of challenges. What best practices have you found that work, and what practices do not work so well?

For example,

I have found that:

  1. a well-layered architecture is essential for testing embedded systems: it allows code that would not otherwise be easy to unit test to be tested on a more capable target, such as a PC.
  2. automated continuous integration / testing on the embedded target itself is not likely worth the effort.
+9  A: 

I've written code for 8-bit 20 MHz PIC processors and 32-bit 200 MHz ARM processors. Techniques differ depending on just how small the environment is. But assuming you're talking about teeny processors and writing C code, here's what I've found:

  1. Unit tests can run on the desktop, but be sure you're using explicit integer types (e.g. uint32_t, int8_t) and not int and short. These are defined differently on the desktop than in the embedded environment, so an algorithm might wrap when sizeof(int) == 2 on the target yet pass a unit test on the desktop where sizeof(int) == 4 (see the first sketch after this list).
  2. Unit tests cannot be overlooked. It's hard to use debuggers and loggers in real-time applications, so getting rid of little bugs early is important.
  3. Never allocate memory. "Never" is extreme, but I've written code for 18 different embedded applications and never once had to use malloc(). No allocation means no memory leaks. Every limit should be in a #define so you can change your mind, but you'll be surprised how often you don't need strings to be "any number" of characters in length.
  4. Simple wins! Use simple algorithms, going complex only if a profiler or similar dictates that you must. Use a simple architecture -- pointers, indirection, and dynamic structures are extra slow in embedded environments.
  5. Profile. The littlest things can have a big effect. In one application we were getting only 12 k/s in a test where we transmitted a 1 MB file over the web server. Eliminating some unnecessary memcpy()s bought us 40 k/s, but then we discovered that the library implementation of memcpy() was "dumb" about things like word-aligned moves. After a little Internet research and a little assembly code, we sped up memcpy() so much that the test ran at 140 k/s! (A word-aligned copy in that spirit is the second sketch after this list.)
  6. Separation of concerns. Modules and encapsulation are always useful to some extent, but in embedded environments they work even better. You often know the complete set of ways and contexts in which a piece of code will be used, so you really can test and document it thoroughly, unlike with e.g. a Java class in a complex web app with constantly changing requirements.
  7. Test modes. These can range from artificial tight loops, where you do an operation as fast as possible, to mock inputs. Test modes allow you to run code millions of times over a weekend. This matters in embedded development because when you have to communicate with e.g. some microchip, you can't test that with unit tests. Sure, there's the spec, but there's line noise and hidden bugs and incorrect voltages and all sorts of things that can't be captured in unit tests. You just have to burn it in.
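
As a minimal sketch of point 1 (the function and values are invented for illustration, not from Jason's projects): an algorithm that relies on 16-bit wrap-around behaves the same on the desktop and the target only when the width is explicit.

    #include <assert.h>
    #include <stdint.h>

    /* Hypothetical accumulator that relies on wrapping modulo 2^16.
     * With a plain 'int' instead of uint16_t, the sum would not wrap
     * on a desktop where sizeof(int) == 4, so a unit test could pass
     * there while the 16-bit target misbehaves (or vice versa). */
    static uint16_t add_wrap16(uint16_t a, uint16_t b)
    {
        return (uint16_t)(a + b);   /* wraps identically on any host */
    }

    int main(void)
    {
        assert(add_wrap16(65000u, 1000u) == 464u);  /* 66000 mod 65536 */
        return 0;
    }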
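
And in the spirit of point 5 (a sketch only, not the actual assembly fix from that project): a copy routine that uses 32-bit moves when alignment allows and falls back to bytes otherwise.

    #include <stddef.h>
    #include <stdint.h>

    /* Illustrative word-aligned copy. When source and destination share
     * 4-byte alignment, move a word at a time; otherwise copy bytes.
     * Real library fixes also unroll the loop and handle the unaligned
     * head and tail, but this shows why a byte-at-a-time memcpy() is
     * "dumb" on a 32-bit bus. */
    void *copy32(void *dst, const void *src, size_t n)
    {
        uint8_t *d = dst;
        const uint8_t *s = src;

        if ((((uintptr_t)d | (uintptr_t)s) & 3u) == 0) {
            while (n >= 4) {                      /* bulk: 32-bit moves */
                *(uint32_t *)(void *)d = *(const uint32_t *)(const void *)s;
                d += 4; s += 4; n -= 4;
            }
        }
        while (n--)                               /* remainder: bytes */
            *d++ = *s++;
        return dst;
    }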
Jason Cohen
+2  A: 

I like Jason's answers, but I think eschewing automated testing on the target, as you propose in your answer, is a bad idea once the target environment becomes larger and more complex.

I think you have to really think about the size and complexity of individual embedded systems before coming to some conclusions about best practice.

Here's an interesting contrast: look at the system requirements for Windows 95.

Compare those requirements with a modern smartphone such as the Nokia N95, which has 128 MB of RAM (IIRC) and a ROM of about 90 MB. That is considered an embedded environment too.

These are large systems; anyone not considering a high level of automated testing in these environments is probably going to produce some pretty low-quality products.

So here's my answer: there are really very few good practices that can't be translated from non-embedded systems to the embedded environment. Admittedly, creating these test environments may take significant effort, which for 'simple' systems may not be justified.

tonylo
+1  A: 

I was thinking about small 8-bit microcontrollers, where the differences are more apparent.

In an embedded system you have code that is tied tightly to the hardware and (hopefully) the rest of the code, which is not tied to the hardware and instead uses an abstracted hardware layer.

The code in the upper layer should be architected such that it can be built on a build server with continuous integration / unit testing off target.
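
One way to draw that line (a minimal header sketch; the file name and functions are assumptions, not from this thread) is to have the upper layer include only an abstract interface and let the build select the implementation:

    /* hal.h -- hypothetical abstracted hardware layer.
     * The upper layer includes only this header; the build links either
     * hal_avr.c (real hardware) or hal_stub.c (host fakes), so the same
     * logic compiles and unit-tests on the build server. */
    #ifndef HAL_H
    #define HAL_H

    #include <stdint.h>

    void     hal_uart_write(const uint8_t *buf, uint16_t len);
    uint32_t hal_millis(void);
    void     hal_led_set(uint8_t on);

    #endif /* HAL_H */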

The hardware-dependent code cannot be tested this way. I suppose it can be built via continuous integration targeting the hardware, to verify that the build is not broken, but you would need emulation to perform unit tests.

Hopefully, confidence in the hardware layer comes from manual testing, and the code is reusable from project to project. It helps if you do not switch platforms frequently.

Does anyone use emulation for automated testing or automated testing of code via a hardware test jig?

JeffV
+1  A: 

To expand on the OP's examples, consider that an embedded system running on custom hardware often contains elements that are not present in a PC or smartphone application, such as control logic and direct hardware interfaces. Making fully automated tests might require constructing a special hardware test rig or developing a simulator to recreate the embedded environment. In many cases this is impractical or unwarranted, perhaps requiring more development resources than the actual product.

In the case of a system that combines data processing with control and hardware, it is often helpful to separate the data processing into its own layer, so it can be recompiled for a PC platform where the full range of development tools and methods are available.

Another important practice in embedded programming is to learn the mechanisms, strengths, and weaknesses of the embedded hardware. Know the size and number of registers and the instruction set's features, arithmetic operations in particular. (For example, many embedded processors have no floating-point unit or integer divide instruction.) Be aware of how best to write code so the compiler can generate efficient instructions. Learn the speed of the various types of memory access. When designing algorithms, prefer simplicity wherever possible; in embedded systems the number of elements to be processed is often guaranteed to be small, and almost everything is fast for small N. Usually the best algorithm is one that requires minimal constant space and so lets you avoid allocating memory.
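
For instance (a made-up sketch; the names and the bound are assumptions), a fixed-capacity insertion sort is often the right choice when N is guaranteed small:

    #include <stdint.h>
    #include <stdio.h>

    #define MAX_READINGS 16u   /* compile-time bound instead of malloc() */

    /* In-place insertion sort: O(n^2) in general, but for n <= 16 it is
     * fast, needs no extra memory, and is trivial to review -- often a
     * better fit on a small target than a "smarter" algorithm. */
    static void sort_readings(uint16_t v[], uint8_t n)
    {
        for (uint8_t i = 1; i < n; ++i) {
            uint16_t key = v[i];
            uint8_t j = i;
            while (j > 0 && v[j - 1] > key) {
                v[j] = v[j - 1];
                --j;
            }
            v[j] = key;
        }
    }

    int main(void)
    {
        uint16_t buf[MAX_READINGS] = { 42, 7, 19, 7, 3 };
        sort_readings(buf, 5);
        for (uint8_t i = 0; i < 5; ++i)
            printf("%u ", (unsigned)buf[i]);
        return 0;
    }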

smh
When the hardware integration gets sufficiently complex, in my experience it is worth developing an automated test environment, if the hardware has the capacity. Most of the testing I've been involved with has been regression testing of the base port / board support package, i.e. very hardware-centric.
tonylo
+3  A: 

Unit testing can't easily (or practically) be performed on the target, but integration testing can.

Vehicle ECUs require in-system testing, and the project I'm working on at the moment is building a HIL (Hardware In the Loop) tester with continuous integration and testing.

There are several USB I/O devices connected to the ECU, plus a debugger. Software on the PC runs testing scripts which toggle real I/O, vary voltages and loads, read inputs, take measurements, and so on.

I'm working on having the system pull the commits down, recompile, reprogram the target, and run all the tests. The nature of the software requires long tests (the lights must remain on for so many seconds after the doors close, etc.), so true continuous integration is unlikely, but a nightly test of all the latest commits is doable.
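
As a hedged sketch of what one such scripted test case might look like (every name and API call here is an assumption, not this project's real harness; the stubs fake a compliant ECU so the file compiles and runs without the rig):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define CH_DOOR   3     /* hypothetical USB I/O channel numbers */
    #define CH_LIGHT  7

    static uint32_t now_ms;          /* fake clock for the dry run */
    static uint32_t door_closed_at;

    /* Stubs standing in for the USB I/O device's PC-side library. */
    static void sleep_ms(uint32_t ms) { now_ms += ms; }
    static void dio_set(int ch, bool level)
    {
        if (ch == CH_DOOR && !level)
            door_closed_at = now_ms;      /* record door-close time */
    }
    static bool dio_get(int ch)
    {   /* fake ECU: dome light stays on 20 s after the door closes */
        return ch == CH_LIGHT && (now_ms - door_closed_at) < 20000u;
    }

    /* Requirement under test: light remains on 20 s after door close. */
    static bool test_dome_light_delay(void)
    {
        dio_set(CH_DOOR, false);              /* close the door */
        sleep_ms(19000);
        bool on_at_19s = dio_get(CH_LIGHT);   /* must still be on */
        sleep_ms(2500);
        bool off_at_21s = !dio_get(CH_LIGHT); /* must be off by 21.5 s */
        return on_at_19s && off_at_21s;
    }

    int main(void)
    {
        puts(test_dome_light_delay() ? "PASS" : "FAIL");
        return 0;
    }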

This would save vast amounts of time in debugging and hand-testing. Right now, when a new baseline comes out, everyone has to hand-test their code; many don't, and it may be several baselines later before an error is found, at which point it must be tracked down.

This is on a 16-bit processor, but the principles and application are the same across the gamut. The difference is that people don't want to spend a lot of time and money on this sort of solution when there's only one hardware guy and one software guy doing the majority of the work, and the project is small enough that a hand test of all the features takes minutes instead of hours.

Further, it offers great traceability to the customer. Each test case has a list of requirements it tests. At the end of a regression test the program generates reams of HTML reports with all the requirements that passed and failed, graphs of the I/O for that test over the period of the test, etc.

So... yeah, automated testing is usually not worth this level of effort for very small, limited projects, but if the setup and cost were low (i.e., most of it is setup time and cost) then everyone would want to use it on every project, just as they use unit testing on PC-targeted programs.

Adam Davis
A: 

@Jeff: I refined my original post. The answers here have been very interesting. To my mind it's not necessarily whether what you are doing is tied to the hardware or not; it really depends on the complexity of the system being developed. I've worked on complex hardware integrations involving 10-15 separate device drivers and multiple processors (ASSPs/DSPs) with complex power management schemes. On systems of this size it is definitely worth creating (or adapting, if you're lucky) a remote ROM image download mechanism and some sort of automated execution environment.

We use emulation to validate the functionality of more h/w agnostic components.

On these systems there are also some best practices you should try to get your hardware designers to adopt; specifically, provide hardware with:

  1. the fastest, simplest external debug comms possible
  2. a power-control mechanism that can be remotely/automatically controlled
  3. anything that will facilitate automated ROM download

Of course this very much depends on the type of environment you're working in.

tonylo
The best practices available and used become more interesting and challenging as we get down to the smallest devices. It really changes the way we work.
JeffV
+5  A: 

We do some work with Atmel 8-bit AVR micros. We currently test all of the "business" logic with NUnit and Rhino Mocks, including as part of a CruiseControl-based continuous integration process.

  1. Create a C project that builds with WinAVR (and AVR Studio). This is partitioned into three logical components: the main driver, the hardware compatibility layer, and the common logic. The common or "business" logic is the bulk of the code and is designed to be tested.
  2. Create a C project in Visual Studio that includes the common component above and either a C++/CLI wrapper or a Win32 DLL. With the C++/CLI wrapper you get a managed component that can be tested with NUnit; with the Win32 DLL you get something that can be P/Invoked from the tests.
  3. The common code should expose functions that accept function pointers implementing the hardware-specific layers (essentially an IoC/DI mechanism; see the sketch after this list).
  4. The native AVR driver (i.e., where main lives) initializes the common component with the native hardware libraries (this includes things like ports and other hardware-specific operations such as "sleep").
  5. The NUnit tests initialize the managed wrapper (C++/CLI, or P/Invokes into the DLL) with delegates that point to a managed implementation. This includes being able to use a mocking tool like Rhino Mocks.
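
Here's a minimal sketch of the function-pointer mechanism in step 3 (all names are invented; the real project's interfaces will differ). The AVR main would pass real implementations where the host test below passes fakes:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical hardware-specific operations injected into the
     * common logic; on the managed side these become delegates. */
    typedef struct {
        void     (*led_set)(uint8_t on);
        uint16_t (*adc_read)(uint8_t channel);
        void     (*sleep_ms)(uint16_t ms);
    } hw_ops_t;

    static const hw_ops_t *hw;               /* bound once at startup */

    void common_init(const hw_ops_t *ops) { hw = ops; }

    /* Pure "business" logic: testable on any host once fakes are bound. */
    void blink_if_hot(void)
    {
        if (hw->adc_read(0) > 512u) {         /* threshold is made up */
            hw->led_set(1);
            hw->sleep_ms(100);
            hw->led_set(0);
        }
    }

    /* Host-side fakes, standing in for what the tests would inject. */
    static void     fake_led(uint8_t on)    { printf("LED %u\n", (unsigned)on); }
    static uint16_t fake_adc(uint8_t ch)    { (void)ch; return 600u; }
    static void     fake_sleep(uint16_t ms) { (void)ms; }

    int main(void)
    {
        static const hw_ops_t fakes = { fake_led, fake_adc, fake_sleep };
        common_init(&fakes);
        blink_if_hot();                       /* prints LED 1, LED 0 */
        return 0;
    }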

So we get unit tests, dependency injection, mocking, and continuous integration, all for an 8-bit micro.

Integration tests might use something like the Phidgets products to drive the hardware as a black box. Importantly, if the unit tests and partitioning are done right, this becomes a relatively small part of the development effort.

dpp