I'm trying to understand and, if possible, tune the read performance of our direct-attached storage.
Host:
CentOS 5.4
2 * Intel Xeon E5520
24 GiB of memory
GPFS filesystem
The I/O scheduler is set to deadline; cfq doesn't seem to improve anything.
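For reference, the scheduler can be checked and switched per block device through sysfs (sdX below is a placeholder for the actual device backing the array):

    # The active scheduler is shown in brackets.
    cat /sys/block/sdX/queue/scheduler
    # Switch to deadline at runtime (not persistent across reboots).
    echo deadline > /sys/block/sdX/queue/scheduler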
Storage:
60 * 2 TB drives
Hardware RAID 60 (6 * 10 drives)
8 Gb Fibre Channel connection
The test is performed with dd, reading a 16 GiB file (a different file for each instance) to /dev/null with a 64 KiB block size.
1 read: 730 MiB/s
2 reads: 290 MiB/s
3 reads: 232 MiB/s
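For illustration, each read test above is a plain dd; a sketch of the three-reader case, assuming the test files live under a hypothetical /gpfs/test mount:

    # Launch three readers in parallel, one 16 GiB file each.
    for i in 1 2 3; do
        dd if=/gpfs/test/file$i of=/dev/null bs=64k &
    done
    wait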
On the other hand, writing from /dev/zero to the storage with the same 64 KiB block size performs really well:
1 write: 370 MiB/s
2 writes: 377 MiB/s
3 writes: 409 MiB/s
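The write test is the mirror image; a sketch assuming the same 16 GiB size per instance and a placeholder path (64 KiB * 262144 blocks = 16 GiB):

    # Write 16 GiB of zeros with a 64 KiB block size.
    dd if=/dev/zero of=/gpfs/test/out1 bs=64k count=262144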
I accept that two concurrent reads should be slower than a single one, but losing half the speed going from one read to two seems like a lot. Or am I wrong?