I'm getting some strange performance results here and I'm hoping someone on stackoverflow.com can shed some light on this!
My goal was a program that I could use to test whether large seeks were more expensive than small seeks...
First, I created two files by dd'ing /dev/zero to separate files... One is 1 MB, the other is 9.8 GB.
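I don't have the exact dd invocations in my shell history anymore, so treat the block counts here as approximate, but they were along these lines:

dd if=/dev/zero of=smallfile bs=1M count=1
dd if=/dev/zero of=bigfile bs=1M count=10000

Then I wrote this code: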
#define _FILE_OFFSET_BITS 64   /* make off_t, stat, and fseeko 64-bit capable */
#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main( int argc, char* argv[] )
{
    struct stat fileInfo;
    if( argc < 2 || stat( argv[1], &fileInfo ) != 0 )
    {
        perror( "stat" );
        return 1;
    }

    FILE* inFile = fopen( argv[1], "r" );
    if( inFile == NULL )
    {
        perror( "fopen" );
        return 1;
    }

    /* a million seeks to pseudo-random offsets: 0%, 1%, ..., 99% of the file size */
    for( int i = 0; i < 1000000; i++ )
    {
        double seekFrac = ((double)(random() % 100)) / ((double)100);
        off_t seekOffset = (off_t)(seekFrac * fileInfo.st_size);
        fseeko( inFile, seekOffset, SEEK_SET );
    }

    fclose( inFile );
    return 0;
}
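For reference, I compiled it with gcc on Linux; the exact flags are from memory, but it was something like this (-std=gnu99 for the loop-scoped counter):

gcc -std=gnu99 -O2 -o seeker seeker.c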
Basically, this code does one million seeks to pseudo-random offsets (multiples of 1% of the file size) spread across the whole file. When I run it under time, I get results like this for smallfile:
[developer@stinger ~]# time ./seeker ./smallfile
real 0m1.863s
user 0m0.504s
sys 0m1.358s
When I run it against the 9.8 GB file, I get results like this:
[developer@stinger ~]# time ./seeker ./bigfile
real 0m0.670s
user 0m0.337s
sys 0m0.333s
I ran it against each file a couple dozen times and the results are consistent: seeking in the large file is more than twice as fast as seeking in the small file. Why?
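In case it helps with diagnosis, I also sketched a variant that skips stdio entirely and calls lseek() on the raw file descriptor, to rule out anything fseeko()'s buffering layer might be doing. I haven't timed this version yet, so take it as untested, but it should exercise only the kernel side of the seek:

#define _FILE_OFFSET_BITS 64   /* 64-bit off_t for the big file */
#include <fcntl.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

int main( int argc, char* argv[] )
{
    struct stat fileInfo;
    stat( argv[1], &fileInfo );
    int fd = open( argv[1], O_RDONLY );

    /* same access pattern as above: a million seeks to 1% offsets */
    for( int i = 0; i < 1000000; i++ )
    {
        off_t seekOffset = (off_t)((random() % 100) * (fileInfo.st_size / 100));
        lseek( fd, seekOffset, SEEK_SET );   /* raw syscall, no stdio */
    }

    close( fd );
    return 0;
}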