Almost every application out there performs I/O operations, either with the disk or over the network.

My applications work fine in the development environment, but I want to be sure they still will when the Internet connection is slow or unstable, or when the user attempts to read data from a badly written CD.

What tools would you recommend to simulate:

  • slow I/O (opening files, closing files, reading and writing, enumeration of directory items)
  • occasional I/O errors
  • occasional 'access denied' responses
  • packet loss in TCP/IP
  • etc...


EDIT:

Windows:
The closest match for the job as described seems to be Holodeck, a commercial product (>$900).

Linux:
I haven't found an open-source solution so far, but the same effect can be achieved as described by smcameron and krosenvold.


The decorator pattern is a good idea. It would require wrapping my I/O classes, but would result in a testing framework. The only code left untested would be in third-party libraries.

Still, I decided not to go that way: I'll leave my code as it is and simulate I/O errors from the outside.


I now know that what I need is called 'fault injection'. I had assumed it was a standard part of the testing toolchain, with plenty of solutions I simply didn't know about. (Another good related idea is 'fuzz testing' — thanks to Lennart.)

To my mind, the problem is still not worth $900. I'm going to implement my own open-source tool based on hooks (targeting Win32). I'll update this post when I'm done with it. Come back in 3 or 4 weeks or so...

+3  A: 

What you need is a fault-injection testing system. James Whittaker's 'How to Break Software' is a good read on this subject and includes a CD with many of the tools needed.

Shane MacLaughlin
A: 

You'll want to set up a test lab for this. What kind of application are you building, anyway? Do you really expect it to be fed corrupt data?

A test technique I know the Microsoft Exchange Server people tried was sending noise to the server: basically feeding every possible input with seemingly random data. They managed to crash the server quite often this way.

Still, if you can't trust input that hasn't been signed, then the general rules apply. Track every operation that could potentially be fed untrusted (corrupt) data and you should be able to handle most problems gracefully.

Test your application's behavior on random input; that should catch most problems, but you'll never be able to fully protect yourself from corrupt data. That's just not possible, as the data could be part of some internal buffer being handed off within the application itself.

Be mindful of when and how you decode data. That is all.

John Leidegren
+1  A: 

If you're on Linux you can do tons of magic with iptables. For example, the following drops all outgoing TCP packets to port 7991:

iptables -I OUTPUT -p tcp --dport 7991 -j DROP

You can simulate connections going up and down as well. There are lots of tutorials out there.

krosenvold
+1  A: 

Check out "Fuzz testing": http://en.wikipedia.org/wiki/Fuzzing
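
A minimal sketch of the idea in Python — `parse_record` here is a made-up stand-in for your real input-handling code:

```python
import random

def parse_record(data: bytes) -> tuple:
    """Toy parser: expects a 1-byte length prefix followed by that
    many payload bytes. Raises ValueError on malformed input."""
    if len(data) < 1:
        raise ValueError("empty input")
    n = data[0]
    if len(data) - 1 < n:
        raise ValueError("truncated payload")
    return n, data[1:1 + n]

def fuzz(iterations=1000, seed=42):
    """Feed random byte strings to the parser; anything other than a
    clean result or an explicit ValueError counts as a bug."""
    rng = random.Random(seed)
    crashes = 0
    for _ in range(iterations):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(32)))
        try:
            parse_record(blob)
        except ValueError:
            pass          # expected rejection of bad input
        except Exception:
            crashes += 1  # unexpected failure: a bug worth investigating
    return crashes
```

Real fuzzers are far more sophisticated (mutation of valid inputs, coverage guidance), but even this brute-force form finds surprisingly many crashes in parsing code.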

Lennart
A: 

The first thing you'll need to do is define what "correct" means under these circumstances. You can only test against a definition of the intended behaviour.

The tactics of testing will depend on the technology. For automated unit testing in OO languages such as Java, I have found it very useful to use various flavors of "mocking" or "stubbing" to pass, e.g., misbehaving InputStreams to the parts of my code that do file I/O.
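
In Python terms, a misbehaving stream might be sketched like this (the class and function names are illustrative, not from any particular mocking framework):

```python
import io

class FlakyStream(io.RawIOBase):
    """Read-only stream that raises IOError after a set number of
    low-level reads, mimicking a failing device."""
    def __init__(self, payload: bytes, fail_after: int):
        self._buf = io.BytesIO(payload)
        self._reads_left = fail_after

    def readable(self):
        return True

    def readinto(self, b):
        if self._reads_left <= 0:
            raise IOError("simulated I/O failure")
        self._reads_left -= 1
        return self._buf.readinto(b)

def count_lines(stream) -> int:
    """Stand-in for code under test: consumes any readable stream."""
    return sum(1 for _ in stream)
```

With a generous `fail_after`, `count_lines(FlakyStream(b"a\nb\n", 100))` returns 2; with `fail_after=0` the very first read raises, letting you check how the code under test reacts.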

Morendil
+1  A: 

At a programming level, many frameworks will let you wrap the I/O stream classes and delegate calls to the wrapped instance. I'd do this and add wait calls in the key methods (writing bytes, closing the stream), throw IO exceptions, etc. You could write a few of these with different failure or issue types and use the decorator pattern to combine them as needed.

This should give you quite a lot of flexibility in tweaking which operations are slowed down, inserting "random" errors every so often, etc.

The other advantage is that you could develop it in the same codebase as your software, so maintaining it wouldn't require any new skills.
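
That wrapping idea might look like this in Python (the class name, delay, and failure modes are illustrative):

```python
import random
import time

class SlowErraticFile:
    """Decorator over any file-like object: delays each operation and
    raises IOError with the given probability."""
    def __init__(self, inner, delay=0.0, error_rate=0.0, rng=None):
        self._inner = inner
        self._delay = delay
        self._error_rate = error_rate
        self._rng = rng or random.Random()

    def _impair(self):
        time.sleep(self._delay)            # simulated slowness
        if self._rng.random() < self._error_rate:
            raise IOError("injected fault")  # simulated I/O error

    def read(self, *args):
        self._impair()
        return self._inner.read(*args)

    def write(self, data):
        self._impair()
        return self._inner.write(data)

    def close(self):
        self._impair()
        self._inner.close()

    def __getattr__(self, name):
        # Everything else passes straight through to the wrapped object.
        return getattr(self._inner, name)
```

Stacking several such wrappers (one for slowness, one for errors) gives the combinable decorators described above; code under test only ever sees a file-like object.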

BenM
A: 

Consider Holodeck for some of the fault injection. If you have access to spare hardware, you can simulate network impairment using netem, or a commercial product based on it, the Mini-Maxwell — much more expensive than free, but possibly easier to use.

+1  A: 

You don't say which OS, but if it's Linux or Unix-ish, you can wrap open(), read(), write(), or any other library or system call with an LD_PRELOAD-able library to inject faults.

Along these lines: http://scaryreasoner.wordpress.com/2007/11/17/using-ld_preload-libraries-and-glibc-backtrace-function-for-debugging/

smcameron
+1  A: 

I didn't end up writing my own file-system filter, as I initially thought I would, because there's a simpler solution.

1. Network I/O

I've found at least two ways to simulate network problems here.

a) Running a virtual machine (such as VMware) lets you configure bandwidth and the packet-loss rate. VMware supports on-machine debugging.

b) Running a proxy on the local machine and tunneling all traffic through it. For UDP/TCP communications, a proxifier (e.g. WideCap) can be used.

2. File I/O

I've managed to reduce this scenario to the previous one by mapping a drive letter to a network share that resides inside the virtual machine. The file I/O becomes slow.

A cheaper alternative exists: set up a local FTP server (e.g. FileZilla), configure its transfer speeds, and use Novell's NetDrive to access it.
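
For approach (b), a minimal impairing proxy can be sketched in Python (single connection only; parameter names are illustrative — a real proxifier does much more):

```python
import random
import socket
import threading
import time

def run_flaky_proxy(listen_port, target_host, target_port,
                    delay=0.0, drop_rate=0.0, chunk=512):
    """Accept a single client and forward its traffic to the target,
    adding a fixed delay per chunk and randomly discarding chunks."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    client, _ = srv.accept()
    upstream = socket.create_connection((target_host, target_port))

    def pump(src, dst):
        while True:
            data = src.recv(chunk)
            if not data:              # peer closed: propagate the shutdown
                dst.close()
                return
            time.sleep(delay)         # simulated latency
            if random.random() < drop_rate:
                continue              # simulated loss: silently discard
            dst.sendall(data)

    threading.Thread(target=pump, args=(client, upstream), daemon=True).start()
    pump(upstream, client)            # blocks until the target closes
```

Pointing the application at `localhost:listen_port` instead of the real server, then raising `delay` or `drop_rate`, exercises its behavior under a degraded connection.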

modosansreves