I think SQLite3, at least, doesn't keep any cached pages: there is no server process, and the process exits after each write, so SQLite3 itself can't do any caching across runs.
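
As an aside, SQLite does give direct control over the flushing side of this. A minimal sketch using Python's built-in sqlite3 module (the database and table names here are made up): the synchronous pragma controls whether a commit waits for an OS-level flush.

    import sqlite3

    conn = sqlite3.connect("bench.db")
    # FULL (the default in rollback-journal mode) makes SQLite fsync on
    # each transaction commit; OFF hands writes to the OS and returns.
    conn.execute("PRAGMA synchronous = FULL")
    conn.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")
    conn.execute("INSERT INTO t VALUES (1)")
    conn.commit()  # with synchronous = FULL this blocks on the flush
    conn.close()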

But with MySQL, PostgreSQL, or MongoDB there is a layer where data that is thought to be saved already is actually still sitting in the DBMS's memory cache, to be written to disk later.
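
For the server DBMSes, that buffer layer can usually be flushed on demand. A hedged sketch for PostgreSQL, assuming the psycopg2 driver, a superuser connection, and a made-up database name: CHECKPOINT forces all dirty shared buffers to be written and fsynced to the data files.

    import psycopg2

    conn = psycopg2.connect("dbname=bench")  # hypothetical database
    conn.autocommit = True
    with conn.cursor() as cur:
        # flushes PostgreSQL's buffer cache through to disk; needs
        # superuser (or the pg_checkpoint role on newer versions)
        cur.execute("CHECKPOINT;")
    conn.close()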

And even once it is written to disk, there is an OS layer that buffers sectors waiting to be written to the physical drive.
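
That OS layer is one an application can flush itself, via fsync. A minimal sketch in Python, with a made-up file name:

    import os

    with open("data.bin", "wb") as f:
        f.write(b"x" * 4096)
        f.flush()             # application buffer -> OS page cache
        os.fsync(f.fileno())  # OS page cache -> the drive (or its cache)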

And then there is the hard drive's own cache. That is typically around 8MB, so if the test inserts enough data to create an 800MB database, the error it introduces may be 1% or less (8MB is 1% of 800MB).

But what about the other layers? The data really needs to be flushed all the way through the OS layer. Otherwise, with computers having 4GB or 8GB of RAM, the whole database can easily sit in memory while being reported as quite fast. How do we tell the test to flush the data all the way to the physical disk, or at least out of the OS layer?
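
On Linux specifically, one hedged sketch (it assumes root privileges): sync all dirty pages, then evict the page cache entirely via /proc/sys/vm/drop_caches, so the next access has to go to the device. The drive's own write cache is a separate matter and can be disabled with, e.g., hdparm -W 0.

    import os

    os.sync()  # flush all dirty pages down to the storage layer
    with open("/proc/sys/vm/drop_caches", "w") as f:
        f.write("3\n")  # 3 = drop page cache plus dentries and inodes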

+1  A: 

When you are benchmarking, you can never defeat all the speed optimizations at every layer, down to the OS or even the CPU level, caching included. You don't need to. What you can do is benchmark performance in the different lifecycle states of your system, and if you know (approximately) what data is cached when, you can benchmark before and after that point. For example: clean start, first access to the DB dataset, subsequent accesses, and so on.

It is best to establish the bottlenecks first and then benchmark only those in greater detail. Another good practice is to simulate a real-life system load and benchmark that; purely synthetic benchmarks are practically pointless. Good luck ;)

avok00
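
To illustrate the lifecycle idea from the answer above, here is a minimal sketch (reusing the hypothetical bench.db from the question's SQLite example) that times the first, cold query separately from warm repeats instead of folding them into one synthetic number:

    import sqlite3
    import time

    def timed_query(db_path, sql):
        conn = sqlite3.connect(db_path)
        start = time.perf_counter()
        conn.execute(sql).fetchall()
        conn.close()
        return time.perf_counter() - start

    cold = timed_query("bench.db", "SELECT * FROM t")  # first access
    warm = min(timed_query("bench.db", "SELECT * FROM t") for _ in range(5))
    print(f"cold: {cold:.4f}s  warm: {warm:.4f}s")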