Why did it take 5 minutes to generate a 1 KiB file on my low-end laptop, which was under very little load? And how could I generate a random binary file faster?

$ time dd if=/dev/random of=random-file bs=1 count=1024
1024+0 records in
1024+0 records out
1024 bytes (1.0 kB) copied, 303.266 s, 0.0 kB/s

real    5m3.282s
user    0m0.000s
sys 0m0.004s
$

Notice that dd if=/dev/random of=random-file bs=1024 count=1 doesn't work either: it produces a binary file of random length, under 50 B on most runs. Does anyone have an explanation for this too?

+5  A: 

Try /dev/urandom instead:

$ time dd if=/dev/urandom of=random-file bs=1 count=1024

From http://stupefydeveloper.blogspot.com/2007/12/random-vs-urandom.html: "The main difference between random and urandom is how they pull random data from the kernel. random always takes data from the entropy pool; if the pool is empty, random blocks until the pool has been refilled enough. urandom generates data with a hash algorithm (SHA, or sometimes MD5) when the kernel entropy pool is empty, so urandom never blocks."
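
On Linux you can see why the blocking happens: the kernel reports how many bits of entropy the pool currently holds (the path below is Linux-specific):

$ cat /proc/sys/kernel/random/entropy_avail

When that number is close to zero, a read from /dev/random stalls until the kernel harvests more entropy from the environment.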

Lance Rushing
+12  A: 

That's because on most systems /dev/random draws randomness from the environment, such as electrical noise from peripheral devices. Its pool of truly random data (entropy) is very limited, and until more entropy becomes available, reads from it block.

Retry your test with /dev/urandom (notice the u), and you'll see a significant speedup.

See Wikipedia for more info. /dev/random does not output truly random data on every system, but on yours it clearly does.

Example with /dev/urandom:

$ time dd if=/dev/urandom of=/dev/null bs=1 count=1024
1024+0 records in
1024+0 records out
1024 bytes (1.0 kB) copied, 0.00675739 s, 152 kB/s

real    0m0.011s
user    0m0.000s
sys 0m0.012s
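
Regarding the second observation in the question: dd's count= limits the number of read() calls, not the number of bytes, and a read from /dev/random returns only as many bytes as the entropy pool can supply at that moment. With bs=1024 count=1, a single short read ends the transfer, so the file ends up at whatever length that one read happened to return. Assuming GNU dd, iflag=fullblock makes dd keep reading until each block is actually full; with /dev/urandom a large block size is safe anyway and avoids the per-byte syscall overhead of bs=1:

$ dd if=/dev/random of=random-file bs=1024 count=1 iflag=fullblock
$ dd if=/dev/urandom of=random-file bs=1024 count=1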
Stephan202