Hi there!
I'm trying to run benchmarks with a little Berkeley DB and C++ code, just for research purposes. Currently I'm testing with the following code. I'm filling the records only with incrementing integers (I omitted things like error handling for better readability).
(`db` is my own Berkeley DB wrapper class.)
// Open the database
db.open( NULL, db_file_name.c_str(), NULL, DB_BTREE, open_flags, 0 );

int data_content = 4000;

DWORD start = ::GetTickCount(); // start timer
for( int i = 0; i <= p_count; ++i )
{
    /*sprintf_s( rec_buf, "my_record_%d", i );
    std::string description = rec_buf;*/

    // Use the loop counter as the key so each put inserts a distinct
    // record; with a constant key, DB_NOOVERWRITE rejects every put
    // after the first with DB_KEYEXIST.
    Dbt key( &i, sizeof(int) );
    Dbt data( &data_content, sizeof(int) );
    db.put( NULL, &key, &data, DB_NOOVERWRITE );
}
DWORD end = ::GetTickCount(); // stop timer
DWORD duration = end - start;
std::cout << "Duration for " << p_count << " records: " << duration << " ms" << std::endl;
So my question is whether this is an efficient way of benchmarking, because my time for 100,000 put operations is about 1900 ms (~2 secs). In a whitepaper published by Oracle about Berkeley DB, they said they had results of up to 90,700 puts per second, while with my benchmark I get only about 50,000 puts per second.
Is there a better way to benchmark Berkeley DB? Do you have any code examples for getting started with and testing Berkeley DB?
Thanks in advance.