Hi there!

I'm trying to run some benchmarks with a little Berkeley DB and C++ code, just for research purposes. Currently I'm testing with the following code, filling the records only with incrementing integers (I omitted things like error handling for readability).

(db is my own Berkeley DB wrapper class)

// Open the database
db.open( NULL, db_file_name.c_str(), NULL, DB_BTREE, open_flags, 0 );

int key_content = 4000;
int data_content = 4000;

DWORD start = ::GetTickCount(); // start counter

for( int i = 0; i < p_count; ++i )
{
    Dbt key( &key_content, sizeof(int) );
    Dbt data( &data_content, sizeof(int) );

    db.put( NULL, &key, &data, DB_NOOVERWRITE );

    ++key_content;  // increment key and data, otherwise DB_NOOVERWRITE
    ++data_content; // rejects every put after the first with DB_KEYEXIST
}

DWORD end = ::GetTickCount(); // stop counter
DWORD duration = end - start;
std::cout << "Duration for " << p_count << " records: " << duration << " ms" << std::endl;

So my question is whether this is an efficient way of benchmarking, because my time for 100,000 put operations is about 1900 ms (~2 s). In a white paper published about Berkeley DB they said they achieved up to 90,700 put operations per second... With my benchmark I only get about 50,000 put operations per second.

Is there a better way to benchmark Berkeley DB? Do you have any code examples for getting started with and testing Berkeley DB?

Thanks in advance.

+2  A: 

Basically, performance figures can only be compared with other benchmark results when all the remaining variables are held constant. There are many such variables; I will try to name some of them (a sketch of a more controlled benchmark follows the list):

  1. The processor.
  2. Memory.
  3. Disk throughput.
  4. Load on the system.
  5. Operating system.
  6. Compilation parameters.
  7. Tuning parameters of the system being benchmarked, both OS and DB.
  8. Data size.
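
For reference, here is a minimal sketch of a more controlled version of your loop. Assumptions on my side: the standard Berkeley DB C++ API from db_cxx.h, a C++11 compiler for std::chrono (on older MSVC, QueryPerformanceCounter is the higher-resolution alternative to GetTickCount, whose granularity is typically 10-16 ms), and an illustrative file name "bench.db". It pins down point 7 above by setting an explicit cache size before opening, and it flushes the cache before stopping the clock so buffered writes are actually counted:

#include <db_cxx.h>
#include <chrono>
#include <iostream>

int main()
{
    const int p_count = 100000;

    Db db( NULL, 0 );
    db.set_cachesize( 0, 64 * 1024 * 1024, 1 ); // 64 MB cache -- a tuning parameter (point 7)
    db.open( NULL, "bench.db", NULL, DB_BTREE, DB_CREATE, 0 );

    auto start = std::chrono::steady_clock::now();

    for( int i = 1; i <= p_count; ++i )
    {
        int key_content  = i; // a fresh key each time, or DB_NOOVERWRITE rejects the put
        int data_content = i;

        Dbt key( &key_content, sizeof(int) );
        Dbt data( &data_content, sizeof(int) );

        db.put( NULL, &key, &data, DB_NOOVERWRITE );
    }

    db.sync( 0 ); // flush dirty pages so the timing includes buffered writes

    auto end = std::chrono::steady_clock::now();
    auto ms  = std::chrono::duration_cast<std::chrono::milliseconds>( end - start ).count();

    std::cout << p_count << " puts in " << ms << " ms ("
              << ( ms ? p_count * 1000LL / ms : 0 ) << " puts/sec)" << std::endl;

    db.close( 0 );
    return 0;
}

Run it several times and discard the first run, since that one pays for file creation and cold caches.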
Ah, I don't know why I haven't thought about that before. I will rethink my benchmark strategy and test on different systems. Thanks for the advice :)
Exa