Hello Friends,

I was searching for a faster alternative to /dev/urandom when I stumbled across this interesting tidbit:

One good trick for generating very good non-random-but-nearly-random bits is to use /dev/random's entropy to seed a fast symmetric stream cipher (my favorite is blowfish), and redirect its output to the application that needs it.

That's not a beginner's technique, but it's easy to set up with a two- or three-line shell script and some creative pipes.

Further research yielded this comment from Schneier on Security:

If you are going to "inject entropy" there are a number of ways to do it, but one of the better ways is to "spread" it across a high-speed stream cipher and couple it with a non-deterministic sampling system.

Correct me if I'm wrong, but it appears that this method of generating random bits is both faster than /dev/urandom and at least as secure.

So, here is my take on the actual code:

time dd if=/dev/zero bs=1M count=400 | openssl bf-ofb -pass pass:"$(tr -dc '[:graph:]' < /dev/urandom | head -c56)" > /dev/null

This speed test takes 400 MB of zeroes and encrypts them with blowfish under a 448-bit key made of pseudo-random printable characters. (Note the '[:graph:]' class must be quoted, or the shell may glob it.) Here's the output on my netbook:

400+0 records in
400+0 records out
419430400 bytes (419 MB) copied, 14.0068 s, 29.9 MB/s

real    0m14.025s
user    0m12.909s
sys     0m2.004s

That's great! But how random is it? Let's pipe the results to ent:
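(For anyone following along, that's just the same pipeline with ent on the end instead of > /dev/null:)

dd if=/dev/zero bs=1M count=400 | openssl bf-ofb -pass pass:"$(tr -dc '[:graph:]' < /dev/urandom | head -c56)" | ent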

Entropy = 8.000000 bits per byte.

Optimum compression would reduce the size of this 419430416 byte file by 0 percent.

Chi square distribution for 419430416 samples is 250.92, and randomly would exceed this value 50.00 percent of the times.

Arithmetic mean value of data bytes is 127.5091 (127.5 = random).

Monte Carlo value for Pi is 3.141204882 (error 0.01 percent).

Serial correlation coefficient is -0.000005 (totally uncorrelated = 0.0).

It looks good. However, my code has some obvious flaws (I sketch a partial fix after the list):

  1. It uses /dev/urandom for the initial entropy source.
  2. Key strength is not equivalent to 448 bits because only printable characters are used.
  3. The cipher should be periodically re-seeded to "spread" out the entropy.
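My rough, untested stab at flaws 1 and 3: seed from /dev/random instead (which can block while entropy accrues), and re-key every 100 MB so fresh entropy keeps getting folded in. Flaw 2 seems harder to fix as long as the key is passed as a printable password:

while true; do
    # block until /dev/random yields 56 printable characters
    seed=$(tr -dc '[:graph:]' < /dev/random | head -c56)
    # emit 100 MB under this key, then loop around and re-key
    dd if=/dev/zero bs=1M count=100 2>/dev/null | openssl bf-ofb -pass pass:"$seed"
done

Redirecting the loop's stdout then gives an endless stream to point at whatever needs it.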

So, I was wondering whether I'm on the right track, and if anyone knows how to fix any of these flaws, that would be great. Also, could you please share what you use to securely wipe disks, if it's anything other than /dev/urandom, sfill, badblocks, or DBAN?

Thank you!

Edit: Updated code to use blowfish as a stream cipher.

A: 

If you're simply seeking to erase disks securely, you really don't have to worry that much about the randomness of the data you write. The important thing is to write to everything you possibly can - maybe a couple of times. Anything much more than that is overkill unless your 'opponent' is a large government organization with the resources to indulge in data recovery (and it is not clear-cut that they can read it even so - not these days, with the disk densities now in use).

I've used the GNU 'shred' program, but I'm only casually concerned about this. When I did it, I formatted a file system onto the drive, filled it with a single file containing quasi-random data, and then shredded that. I think even that was mostly overkill.
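For reference, a single-pass run over a whole device looks something like this (/dev/sdX is a placeholder for the actual drive, and this destroys its contents):

shred -v -n 1 /dev/sdX

shred -v -n 1 -z /dev/sdX

The first form does one pass of pseudo-random data with progress output; the second adds a final pass of zeros so the drive doesn't obviously look shredded.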

Maybe you should read Schneier's 'Cryptography Engineering' book?

Jonathan Leffler
I agree that using random data to securely erase disks is overkill. One could simply do one pass of `/dev/zero` and be pretty safe from attackers who aren't multimillionaires. However, if I bought a 2TB HDD and wanted to encrypt it, I'd first have to fill it with random data. Thanks for the book, I'll look into it.
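For that pre-encryption fill, the cipher pipeline above should do the job, e.g. (with /dev/sdX standing in for the new drive, which gets overwritten):

openssl bf-ofb -pass pass:"$(tr -dc '[:graph:]' < /dev/urandom | head -c56)" < /dev/zero > /dev/sdX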