views: 1113

answers: 8
Has anyone had success automating testing directly on embedded hardware?

Specifically, I am thinking of automating a battery of unit tests for hardware-layer modules. We need to have greater confidence in our hardware-layer code. A lot of our projects use interrupt-driven timers, ADCs, serial I/O, SPI devices (flash memory), etc.

Is this even worth the effort?

We typically target:

Processor: 8- or 16-bit microcontrollers (some DSP stuff)
Language: C (sometimes C++).

+11  A: 

Sure. In the automotive industry we use $100,000 custom-built testers for each new product to verify that the hardware and software are operating correctly.

The developers, however, also build a cheaper (sub-$1,000) tester that includes a bunch of USB I/O, A/D, PWM in/out, etc., and either use scripting on the workstation or purpose-built HIL/SIL test software such as MxVDev.

Hardware in the Loop (HIL) testing is probably what you mean, and it simply involves some USB hardware I/O connected to the I/O of your device, with software on the computer running tests against it.

Whether it's worth it depends.

In high-reliability industries (aircraft, automotive, etc.) the customer specifies very extensive hardware testing, so you have to have it just to get the bid.

In the consumer industry, with non-complex projects, it's usually not worth it.

With any project where there's more than a few programmers involved, though, it's really nice to have a nightly regression test run on the hardware - it's hard to correctly simulate the hardware to the degree needed to satisfy yourself that the software testing is enough.

The testing then shows immediately when a problem has entered the build.

Generally you perform both black-box and white-box testing - you have diagnostic code running on the device that allows you to spy on signals and memory in the hardware (which might just be a debugger, or might be code you wrote that reacts to messages on a bus, for instance). This is white-box testing, where you can see what's happening internally (and even cause some things to happen, such as critical memory errors which can't be tested without introducing the error yourself).
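For example, a white-box test along these lines can force a memory error and confirm the detection path fires. This is only a sketch: config_block, check_config_integrity and fault_log_count are illustrative placeholders, not a real API.

    /* White-box fault-injection sketch: corrupt one byte of a CRC-protected
     * block, run the integrity check, and verify that exactly one fault was
     * logged. All names here are placeholders for your own code.           */
    #include <stdint.h>

    extern uint8_t  config_block[64];              /* protected region      */
    extern void     check_config_integrity(void);  /* logs CRC faults       */
    extern int      fault_log_count;               /* bumped once per fault */

    int test_crc_detects_corruption(void)
    {
        int before = fault_log_count;

        config_block[10] ^= 0x01;              /* inject a single-bit error */
        check_config_integrity();              /* should log the fault      */
        config_block[10] ^= 0x01;              /* undo the injection        */

        return fault_log_count == before + 1;  /* exactly one new fault?    */
    }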

We also run a bunch of 'black box' tests where the diagnostic path is ignored and only the I/O is stimulated/read.

For a much cheaper setup, you can get $100 microcontroller boards with USB and/or Ethernet (such as the Atmel UC3 family) which you can connect to your device and run basic testing.

It's especially useful for product maintenance - when the project is done, store a few working boards, the tester, and a complete set of software on CD. When you need to make a modification or debug a problem, it's easy to set it all back up and work on it with some knowledge (after testing) that the major functionality was not affected by your changes.

Adam Davis
+1  A: 

Unit testing embedded projects is quite difficult, as it usually requires external stimulus and external measurement.

We have been successful in developing an external serial protocol (RS-232, UDP, or TCP/IP messages) with basic commands for exercising the hardware, combined with debug logging in the low-level drivers that looks for erroneous, or even slightly abnormal, conditions (especially for limit checking).
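A minimal sketch of what such a command interpreter might look like, assuming hypothetical uart_getc/uart_puts, adc_read and gpio_set driver calls standing in for your own HAL:

    /* Tiny ASCII command interpreter on the debug serial port.
     * "ADC n" reads a channel, "SET n v" drives a GPIO pin.                */
    #include <stdio.h>

    extern int      uart_getc(void);            /* blocking byte read       */
    extern void     uart_puts(const char *s);
    extern unsigned adc_read(int channel);
    extern void     gpio_set(int pin, int level);

    void test_command_loop(void)
    {
        char line[32], reply[32];
        unsigned n = 0;

        for (;;) {
            int c = uart_getc();
            if (c != '\r' && c != '\n') {       /* accumulate until EOL     */
                if (n < sizeof line - 1)
                    line[n++] = (char)c;
                continue;
            }
            line[n] = '\0';
            n = 0;

            int a, b;
            if (sscanf(line, "ADC %d", &a) == 1) {
                sprintf(reply, "ADC%d=%u\r\n", a, adc_read(a));
                uart_puts(reply);
            } else if (sscanf(line, "SET %d %d", &a, &b) == 2) {
                gpio_set(a, b);
                uart_puts("OK\r\n");
            } else if (line[0] != '\0') {
                uart_puts("ERR\r\n");
            }
        }
    }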

But once it is developed, we can run the testing after every build if required. It will definitely allow you to deliver a better-quality product.

simon
+10  A: 

Yes. I have had success, but it is not a straightforward problem to solve. In a nutshell, here is what my team did:

  1. Defined a variety of unit tests using a home-built C unit-testing framework. Basically, just a lot of macros, most of which were named TEST_EQUAL, TEST_BITSET, TEST_BITVLR, etc. (a minimal sketch follows this list).

  2. Wrote a boot code generator that took these compiled tests and orchestrated them into an execution environment. It's just a small driver that executes our normal startup routine - but instead of going into the control loop, it executes a test suite. When done, it stores the last suite to run in flash memory, then resets the CPU, which then runs the next suite. This is to provide isolation in case a suite dies. (However, you may want to disable this to make sure your modules cooperate. But that's an integration test, not a unit test.)

  3. Individual tests would log their output using the serial port. This was OK for our design because the serial port was free. You will have to find another way to store your results if all your I/O is consumed.
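A minimal sketch of the framework and driver described above. The TEST_* names come from point 1; the logger, flash accessors, reset call and suite table are placeholders you would map onto your own platform:

    #include <stdint.h>

    extern void    log_printf(const char *fmt, ...);   /* serial-port log   */
    extern uint8_t flash_read_u8(uint32_t addr);
    extern void    flash_write_u8(uint32_t addr, uint8_t v);
    extern void    cpu_reset(void);

    #define SUITE_IDX_ADDR 0x0000F000u     /* flash byte: next suite to run */

    #define TEST_EQUAL(a, b) \
        do { log_printf("%s %s:%d\n", (a) == (b) ? "pass" : "FAIL", \
                        __FILE__, __LINE__); } while (0)
    #define TEST_BITSET(val, mask) TEST_EQUAL((val) & (mask), (mask))

    static void suite_timers(void) { /* ... timer unit tests ...  */ }
    static void suite_adc(void)    { /* ... ADC unit tests ...    */ }
    static void suite_serial(void) { /* ... serial unit tests ... */ }

    static void (*const suites[])(void) = { suite_timers, suite_adc, suite_serial };

    /* called from the normal startup code instead of the control loop */
    void run_next_suite(void)
    {
        uint8_t i = flash_read_u8(SUITE_IDX_ADDR);
        if (i >= sizeof suites / sizeof suites[0])
            return;                                   /* all suites have run */
        flash_write_u8(SUITE_IDX_ADDR, (uint8_t)(i + 1)); /* record first,   */
        suites[i]();                                  /* so a crashed suite  */
        cpu_reset();                                  /* is skipped, not     */
    }                                                 /* retried forever     */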

It worked! And it was great to have. Using our custom datalogger, you could hit the "Test" button, and a couple minutes later, you would have all the results. I highly recommend it.

Updated to clarify how the test driver works.

Frank Krueger
So the code was tested on the hardware, but the software running on the hardware during the testing was different from what is delivered? (i.e., you added code to the firmware to do the test that isn't present in the final release)
Adam Davis
A: 

If your goal is manufacturing test (ensuring that the modules are properly assembled, no inadvertent shorts/opens/etc), you should focus first on testing cables and connectors, followed by socketed and soldered connections, then the PCB itself. These items can all be tested for shorts & opens by finding access patterns that drive each individual line high while its neighbors are low and vice-versa, then reading back the lines' values.

Without knowing more details of your hardware it's difficult to be more specific, but most embedded processors can set I/O pins to a GPIO mode that simplifies this sort of testing.
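A walking-ones sketch of that pattern, assuming hypothetical gpio_write_all/gpio_read_all helpers that drive and sample all of the test lines at once:

    #include <stdint.h>

    #define NUM_LINES 16

    extern void     gpio_write_all(uint16_t levels);  /* drive test lines   */
    extern uint16_t gpio_read_all(void);              /* sample them back   */
    extern void     delay_us(unsigned us);            /* settling time      */

    int test_shorts_and_opens(void)
    {
        int failures = 0;

        for (int i = 0; i < NUM_LINES; i++) {
            uint16_t pattern = (uint16_t)(1u << i);   /* one line high      */
            gpio_write_all(pattern);
            delay_us(10);                             /* let lines settle   */
            /* a short pulls an extra bit high (or this one low);
               an open reads back 0 where a 1 was driven                    */
            if (gpio_read_all() != pattern)
                failures++;
        }
        return failures;           /* 0 means no shorts or opens detected   */
    }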

If you are not performing bed-of-nails testing on your printed circuit assemblies (PCAs), this testing should be considered a mandatory first step for newly manufactured boards.

Frank Szczerba
+2  A: 

If your goal is to test your low-level driver code you will likely need to create some sort of test fixture, using loopback cables or multiple interconnected units to allow you to exercise each driver. Pairing a board with known-good software with a board running a development build will allow you to test for regressions in communication protocols, etc.
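For instance, a serial driver with a loopback cable fitted can be smoke-tested with something as small as this (uart_putc and uart_getc_timeout are assumed stand-ins for your own driver API):

    extern void uart_putc(char c);
    extern int  uart_getc_timeout(unsigned ms);   /* byte, or -1 on timeout */

    /* every byte sent through the loopback should come straight back */
    int test_uart_loopback(void)
    {
        for (int c = 0; c < 256; c++) {
            uart_putc((char)c);
            if (uart_getc_timeout(10) != c)
                return 0;            /* FAIL: byte lost or corrupted        */
        }
        return 1;                    /* PASS                                */
    }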

Specific test strategies depend on the hardware you wish to test. For example, ADCs can be tested by presenting a known waveform and converting a series of samples, then checking for the proper range, frequency, average value, etc.
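A sketch of that ADC check, assuming a hypothetical adc_read driver call and a known DC level presented at the input:

    #include <stdint.h>

    extern uint16_t adc_read(int channel);

    /* capture a burst of samples and check min/max/average stay in band */
    int test_adc_dc_level(int channel, int expected, int tol)
    {
        uint32_t sum = 0;
        int lo = 0xFFFF, hi = 0;

        for (int i = 0; i < 256; i++) {
            int s = adc_read(channel);
            sum += (uint32_t)s;
            if (s < lo) lo = s;
            if (s > hi) hi = s;
        }
        int avg = (int)(sum / 256);

        return avg >= expected - tol && avg <= expected + tol  /* level OK  */
            && hi - lo <= 2 * tol;                             /* noise OK  */
    }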

I have found this type of testing to be very valuable in the past, allowing me to confidently modify and improve driver code without fear of breaking existing applications.

Frank Szczerba
+2  A: 

Yes, I do this, although I've always had a serial port available for test I/O.

It is frequently difficult to leave the unit totally unmodified. Some tests require a line commented out or a call added, e.g. to deal with a watchdog.

IMHO, this is better than no unit testing at all. And of course you need to be doing complete integration/system testing, too.

Steve Fallows
+4  A: 

Yes.

The difficulty depends on the type of hardware that you're trying to test. As others have said earlier, the issue is going to be the complexity of the external stimulus that you need to apply, which is probably best provided by an external test rig (as Adam Davis has described).

One thing to consider, though, is exactly what it is that you're trying to verify.

It's tempting to assume that to verify the interaction of the hardware and the firmware you've really no option but to directly apply external stimulus (i.e. applying DACs to all of your ADC inputs, etc.). In these cases, though, the corner cases that you really want to test are often going to be subject to issues of timing (e.g. interrupts arriving while you're executing function foo()), which are going to be incredibly difficult to test in a meaningful way - and even harder to get meaningful results from. (i.e. "The first 100K times we ran this test it was fine. The last time we ran it, it failed. Why?!?")

But the verification of the hardware should be done separately. Once that is done, unless the hardware is changing regularly (through downloadable FPGA images or the like), you should be able to assume that the hardware works and purely test your firmware.

So in this case you can concentrate on verifying the algorithms that are used for processing your external stimuli. For example, calling your ADC conversion routines with a fixed value as if it came from your ADC directly. These tests are repeatable and therefore of benefit. They will require special test builds, though.
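A sketch of that kind of repeatable, software-only test. The conversion routine shown (10-bit counts to millivolts at 3.3 V) is a made-up example, not anyone's real driver:

    #include <stdint.h>

    /* example routine under test */
    static uint16_t adc_counts_to_millivolts(uint16_t counts)
    {
        return (uint16_t)(((uint32_t)counts * 3300u) / 1023u);
    }

    /* feed fixed raw counts instead of a live ADC and check exact results */
    int test_adc_scaling(void)
    {
        /* known count/result pairs: zero, midpoint, full scale */
        return adc_counts_to_millivolts(0)    == 0
            && adc_counts_to_millivolts(511)  == 1648   /* 511*3300/1023    */
            && adc_counts_to_millivolts(1023) == 3300;
    }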

Testing the communications paths of your device is going to be relatively straightforward and shouldn't require special code builds.

Andrew Edgecombe
+2  A: 

We have had good results with automated testing on our embedded systems. We have tests written in high-level (easy to program and debug) languages that run on dedicated test machines. These tests generally do sanity checking or generate random inputs for the devices, then check for correct behavior. There is a lot of work involved in generating and maintaining these tests. We designed a framework and then let interns work on the tests themselves.

It's not a perfect solution, and the tests are certainly prone to errors, but the most important part is to shrink your existing coverage holes. Find the biggest hole and design something to cover it in an automated fashion, even if it isn't perfect or won't cover the entire feature. Later, when all of your stuff is covered somewhat, you can come back and address the worst coverage or the most critical features.

Some things to consider:

  • What is the penalty of a firmware bug? How easy is it to update firmware in the field?
  • What kind of coverage do my tests provide? Is it a simple sanity check? Is it configurable enough to test many different scenarios?
  • Once a test has failed, how will you reproduce the failure in order to debug it? Did you log all the device and test settings so you can eliminate as many variables as possible? Device configuration, firmware version, test software version, all external inputs, all observed behavior? (A sketch of such a log record follows this list.)
  • What are you testing against? Is the spec clear enough about the expected behavior of the device under test, or are you validating against what you think the code should do?
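A sketch of that record-keeping: one header logged per test run so a failure can be reproduced later. All field names and functions here are illustrative:

    #include <stdio.h>
    #include <stdint.h>

    struct test_run_header {
        char     device_serial[16];
        char     firmware_version[16];
        char     test_sw_version[16];
        uint32_t random_seed;       /* lets random-input runs be replayed   */
        char     config_name[32];   /* which device configuration was used  */
    };

    /* written at the top of every test log on the test machine */
    static void log_run_header(FILE *log, const struct test_run_header *h)
    {
        fprintf(log, "device=%s fw=%s testsw=%s seed=%lu config=%s\n",
                h->device_serial, h->firmware_version, h->test_sw_version,
                (unsigned long)h->random_seed, h->config_name);
    }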
Ben Gartner