I have recently started working on a very large C++ project where, with 90% of the implementation complete, the team has determined that it needs to demonstrate 100% branch coverage during testing. The project is hosted on an embedded platform (Green Hills Integrity). I'm looking for suggestions and experiences from others on StackOverflow who have used code coverage products in similar environments. I'm interested in both positive and negative comments regarding these types of tools.

+4  A: 

100% branch coverage? That's quite the requirement, especially since some branches (defaults in case statements for state machines, for instance) should be impossible to exercise. I expect the requirement allows some exceptions, and if it doesn't you'll need to understand what coverage testing can and cannot accomplish before you start - otherwise you'll end up pulling your hair out, or worse, reporting incorrect data.
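
For example (a hypothetical sketch), a defensive default in a state-machine switch is exactly the kind of branch that no legitimate test can reach:

enum State { IDLE, RUNNING, STOPPED };

State next_state(State s)
{
    switch (s) {
    case IDLE:    return RUNNING;
    case RUNNING: return STOPPED;
    case STOPPED: return IDLE;
    default:
        /* Defensive branch: unreachable while s holds a valid State,
           so 100% branch coverage cannot be demonstrated honestly here */
        return IDLE;
    }
}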

Most coverage testing for embedded systems is actually performed on PCs. The code is ported, certain aspects of the microcontroller are emulated in software, and Bullseye or another similar PC code coverage utility is run. The reason this is done is that there are too many microcontrollers and compilers/debuggers/test environments to develop code coverage tools for each one.

When code coverage tools do exist for a specific embedded platform, they aren't as powerful, configurable, easy to use, and bug-free as those developed for the PC platform. The processors often lack the trace capability (absent high-end emulation hardware) needed to measure coverage without inserting additional debug code into your firmware, and that instrumentation has consequences and side effects that are difficult to control, especially where timing matters in real-time systems.

Porting code over is not terribly difficult as long as you can abstract the hardware-specific code (and since you're using C++ properly, that should be easy, right? ;-D ). The biggest issue you'll run into is types: although they are better specified in C++ than they were in C, they still pose problems. Make sure you're using a types.h or similar setup to tell the compiler exactly how wide each type you use is and how it should be interpreted - see the sketch below.
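
As a minimal sketch (names are illustrative; on a C99/C++11 toolchain you would just use <stdint.h>/<cstdint> instead), such a header pins down each width and fails the build if an assumption is wrong on either platform:

/* types.h - fixed-width typedefs shared by the target and PC builds */
typedef signed char    int8;
typedef unsigned char  uint8;
typedef signed short   int16;
typedef unsigned short uint16;
typedef signed int     int32;   /* verify per compiler: 'int' must be
                                   32 bits on both target and PC host */
typedef unsigned int   uint32;

/* Pre-C++11 compile-time checks: a negative array size stops the build */
typedef char check_int16[(sizeof(int16) == 2) ? 1 : -1];
typedef char check_int32[(sizeof(int32) == 4) ? 1 : -1];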

After that, you can go to town testing the core logic on the PC. You can even test the low level hardware drivers if you are interested in developing the software emulation required for that, although timing issues can be somewhat troublesome.

Software testing tools such as MxVDev perform a lot of the microcontroller emulation for you and help with timing issues as well, but you'll still have a bit of work even with such help.

If you must do this on the system itself, you'll need to purchase an emulator for the processor with coverage capability - not an inexpensive proposition (many emulators cost upwards of $30k for the full set of tools and emulation hardware), but it's one of the many tools used in high reliability environments such as the automotive and aerospace industries.

Disclaimer: I work for the company that produces MxVDev.

Adam Davis
Just a note for anyone considering its use: Bullseye doesn't seem to support Green Hills Integrity.
Gary
+1  A: 

As with Adam, we port our embedded code onto a PC-based harness and do most of our coverage and profiling there. I have used AutomatedQA AQTime and Compuware's DevPartner, both of which are good products.

If you had to do coverage on-board, you would need to use a coverage profiler that creates an instrumented version of the source. There are both commercial and open source tools available to do this, but IMO, it adds a lot of work for not much gain.
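
Conceptually (an illustrative sketch, not any particular tool's output), instrumentation rewrites each branch so that taking it leaves a record the tool can read back after the run:

void shutdown(void);

/* Before instrumentation */
void check(int temp, int limit)
{
    if (temp > limit)
        shutdown();
}

/* After instrumentation (conceptual): each branch arm bumps a counter
   in a table the tool dumps from the target after the test run */
unsigned __cov_hit[2];

void check_instrumented(int temp, int limit)
{
    if (temp > limit) { __cov_hit[0]++; shutdown(); }
    else              { __cov_hit[1]++; }
}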

100% coverage is ambitious, as you will need a lot of fault injection to get into all your error handlers and exception handlers. IMO, this too is easier to do in a harness than on-board.
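
For instance (names are illustrative), covering the error branch of an allocation check usually means stubbing the allocator in the harness so it can be made to fail on demand:

#include <cstdlib>

/* Harness-only hook: force the next allocation to fail */
static bool fail_next_alloc = false;

void* alloc_buffer(std::size_t n)
{
    if (fail_next_alloc) { fail_next_alloc = false; return 0; }
    return std::malloc(n);
}

int process(std::size_t n)
{
    void* buf = alloc_buffer(n);
    if (buf == 0)
        return -1;          /* error branch: reachable only via injection */
    /* ... real work ... */
    std::free(buf);
    return 0;
}

/* Test case: set fail_next_alloc = true; expect process(64) to return -1 */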

It is also worth pointing out to whoever has asked for 100% code coverage that 100% code coverage in no way equates to 100% test coverage. Consider, for example, the following function:

int div(int a, int b)
{
    return a / b;
}

100% code coverage only requires us to call this function once; 100% test coverage would require many more calls. My own test strategy involves developing automated test cases to give me an acceptable level of test coverage, and using a code coverage tool purely as an aid to look for untested areas. To some extent it depends on your testing budget; for me, 100% code coverage is way too expensive for what it delivers.
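
To make that concrete (a hypothetical test sketch), a single call already gives div 100% code coverage, while even a modest test set probes the inputs that actually matter:

#include <cassert>

int div(int a, int b) { return a / b; }

int main()
{
    assert(div(6, 3) == 2);    /* one call: 100% code coverage achieved */

    /* ...yet it says nothing about the cases that break div: */
    assert(div(7, 2) == 3);    /* truncation */
    assert(div(-7, 2) == -3);  /* sign handling (implementation-defined pre-C++11) */
    /* div(1, 0) is undefined behaviour - no coverage metric will flag it */
    return 0;
}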

Shane MacLaughlin
+2  A: 

We have used Cantata and VectorCAST in the past for unit testing and code coverage. We use the Green Hills tools, and both products work with the Green Hills development environment. We run most of our tests on the PPC simulator, and run only the tests that rely on hardware on the target itself via a JTAG pod. Cantata and VectorCAST are very similar, with Cantata slightly easier to use and slightly richer in features; the small extras make a big difference in the user experience.

Generally, if you want to achieve a high level of branch coverage, you need to design your code for testability. The more you test, the more you learn about writing testable code.
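
As one sketch of what that means in practice (the interface and names are illustrative), keeping the logic behind a hardware interface lets the harness drive every branch with a mock:

/* The logic under test never touches registers directly... */
class IAdc {
public:
    virtual ~IAdc() {}
    virtual int read(int channel) = 0;
};

bool over_temp(IAdc& adc, int limit)
{
    return adc.read(0) > limit;   /* both branches now trivial to drive */
}

/* ...so the test harness substitutes a mock for the real driver */
class MockAdc : public IAdc {
public:
    int value;
    MockAdc() : value(0) {}
    virtual int read(int) { return value; }
};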

We also tried PC testing; compared with testing on the embedded target, it gave us problems because of endianness, but this is only a problem at the hardware layer.
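
A typical place this bites (a hypothetical sketch): reinterpreting a byte stream as a wider integer gives different results on a little-endian PC and a big-endian PPC target, so the portable form assembles the value explicitly:

unsigned int read_id(const unsigned char msg[4])
{
    /* Non-portable: 0x78563412 on little-endian x86, 0x12345678 on
       big-endian PowerPC (also assumes 32-bit int and safe alignment):
       return *(const unsigned int*)msg; */

    /* Portable: identical result on both platforms */
    return ((unsigned int)msg[0] << 24)
         | ((unsigned int)msg[1] << 16)
         | ((unsigned int)msg[2] << 8)
         |  (unsigned int)msg[3];
}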

In addition, these tools are certified to the RTCA/DO-178B standard.

Gerhard
+1 for Cantata...we have to use it as well!
espais
A: 

See SD C++ Test Coverage. This is a family of (branch) test coverage tools for a variety of dialects of C++ (ANSI, GNU, MS...) that plays nicely even on actual embedded hardware, by virtue of a very small footprint and an easy way to export collected test coverage data. There's a GUI coverage display that doesn't depend on your embedded hardware and will also produce a complete coverage report summary.

[I'm a principal in the company that provides these tools.]

Ira Baxter