I have been seeing a few performance problems with a PHP script on a Linux Fedora Core 11 box, so I was running some commands to look for a bottleneck. One thing I noticed was that writing a file is pretty quick:

[root@localhost ~]# dd if=/dev/zero of=/root/myGfile bs=1024K count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 1.0817 s, 969 MB/s

But overwriting it takes much longer:

[root@localhost ~]# dd if=/dev/zero of=/root/myGfile bs=1024K count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 23.0658 s, 45.5 MB/s

Why is that? (I can repeat those results.)

A: 

The first time you write the file, the data just lands in the page cache; the 969 MB/s figure is the speed of writing to memory, not to disk.
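One way to see what the disk can really do (a sketch, reusing the file path from the question) is to include the flush in the timing:

[root@localhost ~]# rm -f /root/myGfile
[root@localhost ~]# time sh -c 'dd if=/dev/zero of=/root/myGfile bs=1024K count=1000 && sync'

The elapsed time now includes writing the dirty pages out, so it reflects disk throughput rather than memory throughput.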

The second time you write the file, dd first truncates it to zero length (the default when opening the output file), and that truncation for some reason causes all of the dirty pages from the first run to get written out to disk. Yes, this seems stupid: why write out file data when that file just got truncated to length zero?
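If that is what's happening, flushing the dirty pages yourself between the two runs should make the overwrite fast again (a sketch, same commands as in the question):

[root@localhost ~]# dd if=/dev/zero of=/root/myGfile bs=1024K count=1000
[root@localhost ~]# sync
[root@localhost ~]# dd if=/dev/zero of=/root/myGfile bs=1024K count=1000

After the sync there are no dirty pages left for the truncate to wait on.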

You can demonstrate this by making the second dd only write, say, 4k of data. It takes just as long.
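For example (hypothetical second run, same file as above):

[root@localhost ~]# dd if=/dev/zero of=/root/myGfile bs=4k count=1

Only 4 KB are written, yet opening the file with truncation still stalls until the previous gigabyte of dirty pages has been flushed.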

You can also force dd to not truncate by using conv=notrunc.
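For example:

[root@localhost ~]# dd if=/dev/zero of=/root/myGfile bs=1024K count=1000 conv=notrunc

With conv=notrunc, dd overwrites the existing file in place instead of truncating it first.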

Eric Seppanen