All the projects I work on interface with a piece of hardware, and this is often the main purpose of the software. Are there any effective ways I can apply TDD to the code that works with the hardware?

Update: Sorry for not being clearer with my question.

The hardware I use is a frame grabber that captures images from a camera. I then process these images, display them and save them to disk. I can simulate all the processing that takes place after the images are captured by using previously captured images that are stored on disk.

But it's the actual interaction with the hardware that I want to test. For instance, does my software cope correctly when there isn't a camera attached? Does it properly start and stop grabbing, etc.? But this is so tied to the hardware that I don't know how to test it when the hardware isn't present, or whether I should even be trying to do this.

2nd Update: I'm also looking for some concrete examples of exactly how people have dealt with this situation.

A: 

If you have a simulator, you could write tests against the simulator and then run the same tests against the real hardware.

It's hard to answer the questions with so little detail :-)

Wouter Lievens
Sorry about that, I was a bit vague. I've put some more info in the question, does that help at all?
Matt Warren
+3  A: 

Mock cleverly.

adolfojp
+1 When I was working on embedded devices this is the route we took.
Andrew Barrett
Do you have any examples of how I'd do this? For instance, how do I mock what is basically a device driver? Do I write my own simulator that produces the events/errors that could occur?
Matt Warren
+2  A: 

If you are writing software to manipulate data coming out of a specialized piece of hardware, then you could reasonably create stand-ins for the hardware to test the software.

If the hardware interface is something simple like a serial port, you could easily use a loop-back cable to have your program talk to the mock hardware. I used this approach some years ago when writing software to talk to a credit processor. My test app was led to believe that my simulator was a modem and a back-end processor.
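As a rough sketch of the same idea in software (the names here are made up for illustration, not from that project): if the link to the hardware sits behind a small port abstraction, a loopback implementation can stand in for the device during tests.

// Hypothetical port abstraction; the real implementation would wrap
// the serial port, the loopback one stands in for it during tests.
interface CommPort {
    void write(byte[] data);
    byte[] read();
}

// Behaves like a loop-back cable: reading returns what was last written.
class LoopbackPort implements CommPort {
    private byte[] buffer = new byte[0];

    public void write(byte[] data) { buffer = data.clone(); }
    public byte[] read()           { return buffer.clone(); }
}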

If you are writing PCI device drivers or equivalent level software, then you probably can't create a software stand-in.

The only good way to apply TDD to such issues is if you are able to spoof the hardware's I/O with another program. For instance, I work with credit card handling for gas stations. On my current project we have a simulator: the pump electronics hooked up to some switches, so that the operation of a pump (lift handle, squeeze trigger, fuel flow) can be simulated. It's quite conceivable that we could have a simulator built that was controllable by software.

Alternately, depending on the device, you might be able to use standard test equipment (signal generators, etc) to feed it 'known inputs'.

Note that this has the problem that you are testing both the hardware and the device drivers together. Unfortunately, that's really the only good choice you have at this stage - Any simulated hardware is likely to be different enough from the real equipment that it's going to be useless to test with.

Michael Kohne
Thanks for the info, yeah I agree with the last part. I've struggled with TDD against hardware, as mocking it seems pointless because you're not really testing the hardware.
Matt Warren
+2  A: 

It's probably not a good idea to include tests that access the hardware in your test suite. One of the problems with this approach is that the tests will only run on a machine that is connected to this special piece of hardware, which makes it difficult to run them, say, as part of a (nightly) automated build process.

One solution could be to write some software modules that behave like the missing hardware modules, at least from the interface point of view. When running your test suite, access these software modules instead of the real hardware.

I also like the idea of splitting the test suite into two parts:

  • one that accesses the real hardware, which you run manually
  • one that accesses the software modules, which runs as part of the automatic testing

In my experience, tests that involve real hardware almost always require some amount of manual interaction (e.g. plug something in and out to see if it's correctly detected), which makes it very hard to automate. The benefits are often just not worth the trouble.

geschema
Yeah, good point, I won't be able to set up the hardware on the build machine. But the part I struggle with is: if I mock the hardware too much, what am I really testing? How do you deal with this?
Matt Warren
+4  A: 

Create a thin layer for controlling the hardware, and use system tests (manual or automatic) with the full hardware to make sure that the control layer works as expected. Then create a fake/mock implementation of the control layer, that behaves externally like the interface to the real hardware, and use it when doing TDD for the rest of the program.
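For the frame grabber in the question, a minimal sketch of such a control layer might look like this (the interface and fake are hypothetical illustrations, not a real driver binding):

import java.awt.image.BufferedImage;

// Thin control layer over the frame-grabber driver. The real
// implementation is verified by (manual) system tests on the hardware.
interface FrameGrabber {
    boolean isCameraAttached();
    void startGrabbing();
    void stopGrabbing();
    BufferedImage grabFrame();
}

// Fake used for TDD of the rest of the program; the test scripts
// its behaviour, e.g. "no camera attached".
class FakeFrameGrabber implements FrameGrabber {
    private final boolean cameraAttached;
    private boolean grabbing;

    FakeFrameGrabber(boolean cameraAttached) { this.cameraAttached = cameraAttached; }

    public boolean isCameraAttached() { return cameraAttached; }

    public void startGrabbing() {
        if (!cameraAttached) throw new IllegalStateException("no camera attached");
        grabbing = true;
    }

    public void stopGrabbing() { grabbing = false; }

    public BufferedImage grabFrame() {
        if (!grabbing) throw new IllegalStateException("not grabbing");
        // Canned image standing in for a previously captured frame.
        return new BufferedImage(640, 480, BufferedImage.TYPE_INT_RGB);
    }
}

The image processing, display and saving code is then developed test-first against FakeFrameGrabber, while a small, manually run suite checks that the real implementation honours the same interface contract.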


Years ago, I was writing software for taking measurements with a SQUID magnetometer. The hardware was big, unmovable and expensive (video), so it was not possible to always have access to the hardware. We had documentation about the communication protocol with the devices (through serial ports), but the documentation was not 100% accurate.

What helped us very much was creating a software which listens to the data coming from one serial port, logs it and redirects it to another serial port. Then we were able to find out how the old program (which we were replacing) communicated with the hardware, and reverse engineer the protocol. They were chained like this: Old Program <-> Virtual Loopback Serial Port <-> Our Data Logger <-> Real Serial Port <-> Hardware.
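The data-logger part of that chain is essentially a byte pump with logging in the middle. A minimal sketch in Java, assuming the two ports are already open as plain streams (the real program would obtain these from a serial-port library):

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

// Forwards every byte from one port to the other, logging it as hex,
// so the wire protocol can be reconstructed afterwards.
class SerialTap {
    static void pump(InputStream from, OutputStream to) throws IOException {
        int b;
        while ((b = from.read()) != -1) {
            System.out.printf("%02X ", b); // log the byte
            to.write(b);
            to.flush();
        }
    }
}

One pump handles one direction; running two of them on separate threads logs both sides of the conversation.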

Back then we did not use TDD. We did consider writing an emulator for the hardware, so that we could test the program in isolation, but since we did not know exactly how the hardware was supposed to work, it was hard to write an accurate emulator so in the end we did not do it. If we had known the hardware better, we could have created an emulator for it, and it would have made developing the program much easier. Testing with the real hardware was most valuable, and in hindsight we should have spent even more time testing with the hardware.

Esko Luontola
+1, it's similar to other answers, but very well put.
mghie
+2  A: 

Split your test suite into two parts:

  1. The first part runs tests against the real hardware. This part is used to build the mockups. By writing automatic tests for this, you can run them again if you have any doubts whether your mockups work correctly.

  2. The second part runs against the mockups. This part runs automatically.

Part #1 gets run manually after you've made sure the hardware is wired up correctly, etc. A good idea is to create a suite of tests which run against something returned by a factory, and run these tests twice: once with a factory that returns the real "driver" and once with a factory that returns your mock objects. This way, you can be sure that your mocks work exactly like the real thing:

import junit.framework.TestCase;

class YourTests extends TestCase {
    // Factory method; subclasses override it to supply another driver.
    public IDriver getDriver() { return new MockDriver(); }

    // Hook that lets subclasses disable the whole suite.
    public boolean shouldRun() { return true; }

    public void testSomeMethod() throws Exception {
        if (!shouldRun()) return; // allows disabling all tests
        assertEquals("1", getDriver().someMethod());
    }
}

In my code, I usually use a system property (-Dmanual=yes) to toggle the manual tests:

class HardwareTests extends YourTests {
    // Same tests, run against the real hardware driver.
    public IDriver getDriver() { return new HardwareDriver(); }

    // Only runs when -Dmanual=yes is passed on the command line.
    public boolean shouldRun() { return "yes".equals(System.getProperty("manual")); }
}
Aaron Digulla
+1  A: 

When I was working on set-top boxes we had a tool that would generate mocks from any C API with doxygen comments.

We'd then prime the mocks with what we wanted the hardware to return in order to unit-test our components.

So in your case you'd set the result of FrameGrabber_API_IsDeviceAttached to false, and when your code calls that function it gets false back, so you can test how the software copes.
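Without that in-house generator, an off-the-shelf mocking library gives you the same kind of priming. A sketch using Mockito (my substitution, not the tool from this answer; the interface is a made-up Java stand-in for the C API):

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

// Hypothetical Java stand-in for the C driver API.
interface FrameGrabberApi {
    boolean isDeviceAttached();
}

class PrimedMockExample {
    public static void main(String[] args) {
        FrameGrabberApi api = mock(FrameGrabberApi.class);
        when(api.isDeviceAttached()).thenReturn(false); // prime the result

        // Code under test now sees "no camera attached".
        System.out.println(api.isDeviceAttached()); // prints: false
    }
}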

How easy it will be to test depends on how your code is currently structured.

The tool we used to generate the mocks was in-house, so I can't help you with that. But there are some hopeful Google hits (disclaimer: I haven't used any of these, but hopefully they can be of help to you).

Just checking: do you have something like direct ioctl calls in your code? Those are always hard to mock up. We had an OS wrapper layer that we could easily write mocks for, so it was pretty easy for us.

Andrew Barrett
+2  A: 

Refer to this Embedded TDD article.

philippe