I have a situation where I need to write some unit tests for some device drivers for embedded hardware. The code is quite old and big and unfortunately doesn't have many tests. Right now, the only kind of testing that's possible is to completely compile the OS, load it onto the device, use it in real life scenarios and say that 'it works'. There's no way to test individual components.

I came across a nice thread here which discusses unit testing for embedded devices, from which I got a lot of information. I'd like to be a little more specific and ask whether anyone has any 'best practices' for testing device drivers in such a scenario. I don't expect to be able to simulate any of the devices which the board in question is talking to, so I will probably have to test them on the actual hardware itself.

By doing this, I hope to be able to get unit test coverage data for the drivers and coax the developers to write tests to increase the coverage of their drivers.

One thing that occurs to me is to write embedded applications that run on the OS and exercise the driver code, then communicate the results back to the test harness. The device has a couple of interfaces which I can probably use to drive the application from my test PC so that I can exercise the code.

Any other suggestions or insights would be very much appreciated.


Update: While it may not be exact terminology, when I say unit testing, I meant being able to test/exercise code without having to compile the entire OS+drivers and load it onto the device. If I had to do that, I'd call it integration/system testing.

The problem is that the pieces of hardware we have are limited, and they're often used by the developers while fixing bugs etc. Keeping one dedicated and connected to the machine where the CI server and automated testing run might be a no-no at this stage. That's why I'm looking for ways to test the driver without having to actually build the whole thing and upload it onto the device.


Summary

Based on the excellent answers below, I think a reasonable way to approach the problem would be to expose driver functionality using IOCTLs and then write tests in the application space of the embedded device to actually exercise the driver code.

It would also make sense to have a small program residing in the application space on the device which exposes an API that can exercise the driver via serial or USB so that the meat of the unit test can be written on a PC which will communicate to the hardware and run the test.
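
Roughly, I imagine the on-device agent looking something like the sketch below, assuming a POSIX-like application environment on the device. The device node, ioctl codes and single-character command protocol are just placeholders for whatever the driver actually exposes:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    /* Placeholder test ioctls; they must match whatever the driver exposes. */
    #define MYDRV_TEST_RESET     0x1001
    #define MYDRV_TEST_LOOPBACK  0x1002

    int main(void)
    {
        int drv = open("/dev/mydrv", O_RDWR);   /* placeholder device node */
        if (drv < 0) { perror("open /dev/mydrv"); return 1; }

        /* Read one-character commands from stdin (e.g. redirected from the
         * serial console) and report PASS/FAIL back the same way, so a PC
         * can drive the whole thing over the serial/USB interface. */
        char cmd;
        while (read(STDIN_FILENO, &cmd, 1) == 1) {
            int rc;
            switch (cmd) {
            case 'r': rc = ioctl(drv, MYDRV_TEST_RESET, 0);    break;
            case 'l': rc = ioctl(drv, MYDRV_TEST_LOOPBACK, 0); break;
            case 'q': close(drv); return 0;
            default:  continue;
            }
            printf("%c:%s\n", cmd, rc == 0 ? "PASS" : "FAIL");
            fflush(stdout);
        }
        close(drv);
        return 0;
    }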

If the project was just being started, I think we'd have more control over the way in which the components are isolated so that testing can be done mostly at the PC level. Given the fact that the coding is already done and we're trying to retrofit the test harness and cases onto the system, I think the above approach is more practical.

Thanks everyone for your answers.

+3  A: 

In the old days, that was how we tested and debugged device drivers. The very best way to debug such a system was for engineers to use the embedded system as a development system and—once adequate system maturity was reached—take away the original cross-development system!

For your situation, several approaches come to mind:

  • Add ioctl handlers: each ioctl code exercises a particular unit test (see the sketch after this list).
  • With conditional compilation, add a main() to the driver which conducts functional unit tests in the driver and outputs results to stdout.
  • For initial ease in debugging, maybe this could be made multi-platform operable so you don't have to debug on the target hardware.
  • Perhaps conditional code can also emulate a loopback-style device.
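
For illustration, the ioctl route could look roughly like this. The command codes, helper functions and return conventions here are invented, and the real entry-point signature depends on your OS's driver model:

    #ifdef MYDRV_SELF_TEST   /* compiled in only for test builds */

    /* Hypothetical test command codes, kept well away from production ioctls. */
    #define MYDRV_TEST_RESET     0x1001
    #define MYDRV_TEST_LOOPBACK  0x1002

    /* Called from the driver's ioctl entry point when cmd falls in the test
     * range. Returns 0 on pass, non-zero on failure, so the application-space
     * harness can report PASS/FAIL per test code. */
    static int mydrv_run_self_test(unsigned int cmd, unsigned long arg)
    {
        switch (cmd) {
        case MYDRV_TEST_RESET:
            mydrv_reset_hw();                      /* hypothetical helper */
            return mydrv_hw_is_idle() ? 0 : 1;     /* hypothetical helper */

        case MYDRV_TEST_LOOPBACK: {
            unsigned char out = (unsigned char)arg;
            unsigned char in  = 0;
            mydrv_tx_byte(out);                    /* hypothetical helpers */
            mydrv_rx_byte(&in);
            return (in == out) ? 0 : 1;
        }

        default:
            return 1;                              /* unknown test code */
        }
    }

    #endif /* MYDRV_SELF_TEST */
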
wallyk
If your embedded system was big enough that you could install a compiler on it, you weren't working in the old days.
Windows programmer
A: 

Vocabulary

I don't expect to be able to simulate any of the devices which the board in question is talking to and so will probably have to test them on actual hardware itself.

Then, you are stepping out of unit testing. Maybe you could use one of these expressions instead?

  • Automated testing: testing happens without user input (as opposed to manual testing).
  • Integration testing: testing several components together (as opposed to unit testing).
    On a bigger scale, if you test a whole system and not just a few components together, it is called system testing.

ADDED after comments and updates in the question:

  • Component testing: like integration testing or system testing, but on an even smaller scale.
    Note: component, integration and system testing all share the same set of problems, at different scales. Unit testing, on the contrary, does not (see below).


Advantages of "real" Unit Testing

With integration (or system or component) testing, it is certainly interesting to get some feedback such as test coverage, and it is certainly worth doing.

But it is very hard (read "very costly") to make progress beyond some point, so I suggest you use complementary approaches, such as adding some real unit tests. Why?

  • It is very hard to simulate edge or error conditions. (Examples: the computer clock crosses a day or year boundary during a transaction; the network cable is unplugged; power goes down then up on some component, or on the whole system; the disk is full.) With unit testing, because you simulate these conditions rather than trying to reproduce them, it is much easier (a minimal sketch follows this list). Unit testing is your only chance to get really good code coverage.
  • Integration testing takes time (because of access to external resources). You could execute thousands of unit tests in the time it takes to run one integration test, so testing many combinations is only possible with unit tests.
  • Because it requires access to specific resources (hardware, licences, etc.), integration testing is often limited in time or scale. If the resources are shared with other projects, each project might only use them for a few hours per day. Even with exclusive access, perhaps only one machine can use them, so you can't run tests in parallel. Or your company may buy a resource (licence or hardware) for production, but not have it (or not have it early enough) for development.
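
To make the first point concrete, here is a minimal sketch of simulating one such condition with a stub. The driver function, stubbed write call and error code are invented for illustration; in a real harness the stub would replace the production implementation at link time:

    #include <assert.h>

    #define FLASH_ERR_FULL  (-28)   /* invented error code for the sketch */

    /* Stub replacing the real low-level write; the test decides what it returns. */
    static int stub_write_result;
    static int flash_write(unsigned addr, const void *buf, unsigned len)
    {
        (void)addr; (void)buf; (void)len;
        return stub_write_result;
    }

    /* Hypothetical driver function under test: must propagate write failures. */
    static int log_append(const void *record, unsigned len)
    {
        int rc = flash_write(0x1000, record, len);
        return (rc < 0) ? rc : 0;
    }

    static void test_log_append_reports_full_flash(void)
    {
        stub_write_result = FLASH_ERR_FULL;      /* simulate the rare condition */
        assert(log_append("x", 1) == FLASH_ERR_FULL);
    }

    int main(void) { test_log_append_reports_full_flash(); return 0; }
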
KLE
If he's testing a single driver in isolation, even though he has to test on the hardware, then maybe that *can be* called unit testing.
Craig McQueen
In a loose sense I suppose. I've updated the question with what I originally meant though.
Noufal Ibrahim
@Noufal Thanks for the update. I understand now. Is there something useful in the second part of my answer? Otherwise, I could just delete my answer, to remove some noise ;-)
KLE
Not particularly relevant to the exact question but since I'm in the initial stages of building my harness and convincing the development teams to change their style of working, comments like this are quite valuable.
Noufal Ibrahim
+1  A: 

I'd recommend application-based testing. Even if the scaffolding can be hard and costly to build, there is a lot to gain here:

  • a crash only takes down one process, as opposed to the whole system
  • ability to use the standard tool set (debugger, memory checker, ...)
  • overcomes the hardware availability limitation
  • faster feedback: no installation on the device, just compile and test
  • ...

As far as naming is concerned, this can be called component testing.

The application can either initialize the device driver the same way the target OS does, or use the internals of the driver directly. The former is more expensive but gives more coverage. The linker will then tell you which functions are missing; stub them, possibly using exploding stubs (see the sketch below).
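
For instance, an exploding stub is just a stand-in that satisfies the linker but fails loudly if it is ever reached; something like this, where the function name is hypothetical:

    #include <stdio.h>
    #include <stdlib.h>

    /* Satisfies the linker for a hardware routine the test is not supposed to
     * reach; if it is called anyway, the test blows up immediately instead of
     * silently returning nonsense. */
    int hw_dma_start(unsigned channel)    /* hypothetical missing function */
    {
        (void)channel;
        fprintf(stderr, "exploding stub: hw_dma_start() called unexpectedly\n");
        abort();
    }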

philippe
Correct me if I'm wrong, but these are advantages rather than a "way to do it". I see the advantages and I'm pushing for it myself, but I need a way to make it work which is not prohibitively time consuming or expensive.
Noufal Ibrahim
+1  A: 

The code that really is dependent on the hardware (the lowest level of the driver stack in a layered architecture) can't really be tested anywhere except on the hardware, or a high-quality simulation of the hardware.

If your driver has some component of higher-level functionality that doesn't rely directly on the hardware (e.g., a protocol handler for sending messages to the hardware in a particular format), and if that part is nicely self-contained in the code, then you could unit-test it separately in a PC-based unit-test framework.

Going back to the lowest level—if it's dependent on the hardware, then the test jig needs to include the hardware. You can make a test jig that includes the hardware, the driver, and some test software. The main thing, I think, is to get the normal product's application code out of the test, and put in some test code instead. The test code can systematically test all the driver's features and corner-cases (which the application code may not), and can also really hammer the driver intensively in a short amount of time (which the application probably doesn't). Thus it's a more efficient use of your limited hardware than just running the application, and gives you better results.

If you can get a PC into the loop, then the PC might help with the testing. E.g. if you're writing a serial port driver for an embedded device, then you could:

  • Write test code for the embedded device that sends various known data streams.
  • Connect it to a PC's serial port, running test code that verifies the transmitted data streams (a sketch of the PC side follows this list).
  • Same in the other direction—PC sends data; embedded device receives it and verifies it, and notifies the PC of any errors.
  • The tests can stream data at full speed, and play with a range of different byte timings (I once found a microcontroller UART silicon bug that only appeared if bytes were sent with a ~5 ms delay between bytes).
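
A rough sketch of the PC side of such a test, using POSIX termios; the port name, baud rate and echo-back protocol are assumptions, and timeout handling is omitted:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY);   /* assumed port */
        if (fd < 0) { perror("open"); return 1; }

        struct termios tio;
        tcgetattr(fd, &tio);
        cfmakeraw(&tio);                 /* raw mode: no line or flow-control translation */
        cfsetispeed(&tio, B115200);      /* assumed baud rate */
        cfsetospeed(&tio, B115200);
        tcsetattr(fd, TCSANOW, &tio);

        /* Send a known pattern; the embedded device is expected to echo it
         * back (or the roles can be reversed, as described above). */
        unsigned char tx[256], rx[256];
        for (int i = 0; i < 256; i++)
            tx[i] = (unsigned char)i;
        write(fd, tx, sizeof tx);

        size_t got = 0;
        while (got < sizeof rx) {
            ssize_t n = read(fd, rx + got, sizeof rx - got);
            if (n <= 0) break;           /* error handling omitted */
            got += (size_t)n;
        }

        int ok = (got == sizeof rx) && memcmp(tx, rx, sizeof rx) == 0;
        printf("serial loopback: %s\n", ok ? "PASS" : "FAIL");
        close(fd);
        return ok ? 0 : 1;
    }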

You could do a similar thing with an Ethernet driver or a Wi-Fi driver.

If you're testing a storage device driver, such as one for an EEPROM or flash chip, then the PC can't get involved in the same way. In that case, your test harness could exercise all sorts of write conditions (single-byte, block...) and verify data integrity under all sorts of read conditions (see the sketch below).
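
For instance, the on-target harness might run something like this, where flash_write()/flash_read() stand in for whatever entry points your driver actually exposes:

    #include <string.h>

    /* Hypothetical driver entry points; substitute your driver's real API. */
    extern int flash_write(unsigned addr, const void *buf, unsigned len);
    extern int flash_read(unsigned addr, void *buf, unsigned len);

    /* Write a deterministic pattern with varying lengths, read it back and
     * verify integrity. Returns the number of failures. */
    int flash_pattern_test(unsigned base, unsigned size)
    {
        static const unsigned lens[] = { 1, 2, 16, 256 };   /* single byte .. block */
        unsigned char wr[256], rd[256];
        int failures = 0;

        for (unsigned i = 0; i < sizeof lens / sizeof lens[0]; i++) {
            unsigned len = lens[i];
            for (unsigned off = 0; off + len <= size; off += len) {
                for (unsigned b = 0; b < len; b++)
                    wr[b] = (unsigned char)(off + b);        /* known pattern */

                if (flash_write(base + off, wr, len) != 0 ||
                    flash_read(base + off, rd, len) != 0 ||
                    memcmp(wr, rd, len) != 0)
                    failures++;
            }
        }
        return failures;
    }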

Craig McQueen
A: 

I had a similar problem two or three years ago: I had ported a device driver from VxWorks to Integrity. We had changed only the operating system dependent parts of the driver, but it was a safety-critical project, so all the unit tests and integration tests were redone. We used an automated testing tool called LDRA Testbed for our unit tests. 99% of our unit tests were done on Windows machines with Microsoft compilers. Now I'll explain how to do this.

Well, first of all, when you are doing unit testing you are testing software. When you include the real device in your tests, you are also testing the device, and sometimes there are issues with the hardware or with its documentation. When you are designing the software, if you have described the behaviour of each function clearly, unit testing is very easy. For example, think about the function:

// Calculates the message location and, if the location is valid,
// reads the time information into *time.
int readMessageTime(int messageNo, int* time)
{
    uint8_t* address = calculateMessageAddr(messageNo);
    if (address != NULL) {
        read(address + TIME_OFFSET, time);
        return SUCCESS;
    } else {
        return FAILURE;
    }
}

Well, here you are just testing whether readMessageTime does what it is supposed to do. You do not have to test whether calculateMessageAddr calculates the right result, or whether read reads the right address; that is the responsibility of some other unit tests. So what you have to do is write stubs for calculateMessageAddr and read (the OS function) and check that readMessageTime calls them with the correct parameters (a sketch of such stubs follows). This is the case if you are not accessing the memory directly from your driver. You can test any kind of driver code without any OS or device with this mentality.
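
For example, the stubs could record how they were called and the unit test could then assert on those recordings. This is only a sketch: the constants and the fake message buffer are invented, and shadowing the OS read() like this is only safe in an isolated test build:

    #include <assert.h>
    #include <stdint.h>

    #define TIME_OFFSET 4
    #define SUCCESS     0

    int readMessageTime(int messageNo, int* time);   /* function under test */

    static uint8_t fake_msg[16];           /* pretend message storage        */
    static int last_msg_no;                /* what calculateMessageAddr saw  */
    static const void* last_read_addr;     /* what read() was asked to read  */

    /* Stub: records the parameter and returns a known address. */
    uint8_t* calculateMessageAddr(int messageNo)
    {
        last_msg_no = messageNo;
        return fake_msg;
    }

    /* Stub for the OS read call: records the address, returns a canned value. */
    int read(const void* addr, int* out)
    {
        last_read_addr = addr;
        *out = 1234;
        return 0;
    }

    int main(void)
    {
        int t = 0;
        assert(readMessageTime(7, &t) == SUCCESS);            /* return value   */
        assert(last_msg_no == 7);                             /* right message  */
        assert(last_read_addr == fake_msg + TIME_OFFSET);     /* right offset   */
        assert(t == 1234);                                    /* value returned */
        return 0;
    }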

If you have mapped the device memory directly into your memory space and the device driver reads and writes device memory as if it were its own memory, it gets a little more complicated. Using automated testing tools, you now have to watch the values of pointers and define pass/fail criteria according to the values behind those pointers. If you are reading a value from memory, you have to define the expected value, which may be hard in some cases.

There is also one more issue: developers are often confused when unit testing drivers, for example:

// Calculates the message location and, if the location is valid,
// does some work to make the device ready to read, then
// reads the time information into *time.
int readMessageTime(int messageNo, int* time)
{
    uint8_t* address = calculateMessageAddr(messageNo);
    if (address != NULL) {
        do_something();        // get the device ready to read
        do_something_else();   // so the result can be read within 3us
        int status = NOT_READY;
        while (status == NOT_READY)   // must not take longer than 3us
            status = read(address + TIME_OFFSET, time);
        return SUCCESS;
    } else {
        return FAILURE;
    }
}

Here do_something and do_something_else do some work on the device to make it ready to read. Developers always ask themselves "What if the device never gets ready and my code deadlocks here?", and they tend to test this kind of thing on the device.

Well, you have to trust the device manufacturer and the technical author. If they say the device will be ready in 1-2us, you do not need to worry about it. If your code fails here, you have to report it to the device manufacturer; it is not your job to find a workaround for this problem. Do you see my point?

I hope this helps….

Caner Altinbasak
Thanks for your comments. I would have liked to go about it like this, but the hardware-dependent and hardware-independent parts are too tightly coupled now to test them off the board.
Noufal Ibrahim
+1  A: 

Hi.

I had this exact task just two months ago.

Let me guess: you probably have "snippets" of code that speak low-level details to the device. You know that these snippets work, but you can't get coverage on them because they have a dependency on the device drivers.

Likewise, it does not make sense to test every single line of them individually. They are never run in isolation, and your unit tests would end up looking like a mirror reflection of the production code. For example, if you wish to start the device, you need to create a connection, pass it a specific low-level reset command, then an initialization parameter struct, etc. And if you need to add a piece of configuration, this may require you to take the device offline, add the configuration and then bring it online again. Stuff like that.

You do NOT want to unit test that low-level stuff. Your unit tests would then only reflect how you assume the device works, without confirming anything.

The key here is to create three items: a controller, an abstraction and an adapter implementation of that abstraction. In C++, Java or C# you would create either a base class or an interface to represent the abstraction. I will assume that you created an interface. You break the snippets up into atomic operations; for example, you create methods called "start" and "add(parameter)" in the interface. You put your snippets in the device adapter. The controller acts on the adapter through the interface.

Identify pieces of logic within the snippets that you have placed in the adapter. Then you need to decide whether this logic is low level (protocol handling details etc.) or whether it is logic that should belong in the controller.

You can then test in two stages. First, have a simple test panel application that acts on the concrete adapter. This is used to confirm that the adapter actually works: that it starts when you press "start"; that, for example, if you press "go offline", "transmit(192)" and "go online" in sequence, the device responds as expected. This is your integration test.

You do not unit test the details in the adapter. You test it manually, because the only success criterion is how the device responds.

The controller, however, is completely unit tested. It only has a dependency on the abstraction, which is mocked out in your test code. Thus your code has no dependency on your device driver, because the concrete adapter is not involved.

Then you write unit tests to confirm that, for instance, the method "Add(1)" actually invokes "go offline", then "transmit(1)" and then "go online" on the mocked-out abstraction.
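
Since the rest of this thread is in C, here is how that mocked-out abstraction might be sketched there, with a struct of function pointers standing in for the interface; all the names and operations are illustrative only:

    #include <assert.h>
    #include <string.h>

    /* The abstraction: what the controller is allowed to ask of a device. */
    struct device_port {
        void (*go_offline)(void);
        void (*transmit)(int value);
        void (*go_online)(void);
    };

    /* The controller under test: contains the logic, knows nothing about hardware. */
    static const struct device_port *port;

    void controller_init(const struct device_port *p) { port = p; }

    void controller_add(int parameter)
    {
        port->go_offline();
        port->transmit(parameter);
        port->go_online();
    }

    /* ---- Mock adapter used only by the unit test: records the call sequence. */
    static char call_log[64];
    static int  transmitted;

    static void mock_go_offline(void) { strcat(call_log, "off,"); }
    static void mock_transmit(int v)  { strcat(call_log, "tx,"); transmitted = v; }
    static void mock_go_online(void)  { strcat(call_log, "on,"); }

    static const struct device_port mock_port = {
        mock_go_offline, mock_transmit, mock_go_online
    };

    int main(void)
    {
        controller_init(&mock_port);
        controller_add(1);

        /* Add(1) must invoke: go offline, transmit(1), go online - in that order. */
        assert(strcmp(call_log, "off,tx,on,") == 0);
        assert(transmitted == 1);
        return 0;
    }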

The challenge here is to draw the distinction between the adapter and the controller. What goes where? What worked for me was to create the aforementioned test panel first and then manipulate the device through it.

The adapter should hide the details you will only have to change if the device changes.

  1. If the test panel is cumbersome to operate, with lots of sequences that need to be repeated again and again, or if very device-specific knowledge is required to operate the panel, then the granularity is too high and you should group some operations together. The test panel should make sense.

  2. If changing end-user requirements have an impact on the adapter code, then the granularity is probably too low and you should split the operations up, so that the requirements change can be accommodated with test-driven development in the controller class.

Tormod