I'm connecting to a MySQL (InnoDB) database over a connection with fairly high latency (~80ms) but relatively high bandwidth.
I've noticed that query times vary dramatically depending on how the query is issued. In the following examples I'm performing a query for a single, small row by primary key. The query times are:
- Command-line client (mysql): ~160ms
- Raw JDBC: ~240ms
- Hibernate: ~400ms (~0ms begin, ~160ms get, ~240ms commit)
- Hibernate, L2: ~240ms (~0ms begin, ~0ms get, ~240ms commit)
- Hibernate, c3p0: ~880ms (~160ms begin, ~240ms get, ~480ms commit)
- Hibernate, L2+c3p0: ~640ms (~160ms begin, ~0ms get, ~480ms commit)
("L2" means Hibernate second-level caching was enabled, "c3p0" means c3p0 was enabled, "begin", "get" and "commit" are timings for the various sub-methods invoked during the query)
These are, roughly, "steady state" results: the L2 cache is hot and Hibernate startup time is excluded. I'm assuming "get" is typically ~0ms when the L2 cache is enabled because no SELECT is actually sent to the database.
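For concreteness, the timings were taken with code roughly like the sketch below. The connection URL, the widget table and the Widget entity are illustrative stand-ins for my actual schema, and per the note above the harness is warmed up before measuring:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.cfg.Configuration;

public class LatencyProbe {

    public static void main(String[] args) throws Exception {
        // --- Raw JDBC: fetch one small row by primary key ---
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://dbhost/test", "user", "secret")) {
            long t0 = System.nanoTime();
            try (PreparedStatement ps = conn.prepareStatement(
                    "SELECT name FROM widget WHERE id = ?")) {
                ps.setLong(1, 1L);
                try (ResultSet rs = ps.executeQuery()) {
                    rs.next();
                }
            }
            report("jdbc", t0, System.nanoTime());
        }

        // --- Hibernate: time the begin / get / commit sub-steps separately ---
        SessionFactory sf = new Configuration().configure().buildSessionFactory();
        Session session = sf.openSession();

        long t1 = System.nanoTime();
        Transaction tx = session.beginTransaction();        // "begin"
        long t2 = System.nanoTime();
        Widget w = (Widget) session.get(Widget.class, 1L);  // "get"
        long t3 = System.nanoTime();
        tx.commit();                                        // "commit"
        long t4 = System.nanoTime();

        report("begin", t1, t2);
        report("get", t2, t3);
        report("commit", t3, t4);

        session.close();
        sf.close();
    }

    private static void report(String label, long from, long to) {
        System.out.printf("%-8s %d ms%n", label, (to - from) / 1_000_000);
    }
}

// Hypothetical entity; in the real app it is mapped via Widget.hbm.xml.
class Widget {
    private Long id;
    private String name;
    // getters/setters omitted for brevity
}
```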
My questions are:
- Why are all of the queries such large multiples of the network latency? Even the mysql command-line client seems to require two round-trips for a simple query.
- Why are all of the JDBC/Hibernate queries so much slower than the command-line client? Even raw JDBC seems to require three round-trips.
- Why does c3p0 seem to make everything worse? As far as I can tell I have disabled connection testing (see the configuration sketch below), which could otherwise explain the extra overhead.
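By "disabled connection testing" I mean settings along these lines, shown programmatically for brevity; in my real setup the same keys live in hibernate.cfg.xml, and the pool sizes are illustrative:

```java
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class PooledFactory {
    static SessionFactory build() {
        Configuration cfg = new Configuration().configure();
        // Route connections through c3p0 rather than Hibernate's built-in pool.
        cfg.setProperty("hibernate.connection.provider_class",
                "org.hibernate.connection.C3P0ConnectionProvider");
        cfg.setProperty("hibernate.c3p0.min_size", "1");
        cfg.setProperty("hibernate.c3p0.max_size", "5");
        // 0 disables c3p0's periodic test of idle connections.
        cfg.setProperty("hibernate.c3p0.idle_test_period", "0");
        // c3p0's testConnectionOnCheckout / testConnectionOnCheckin default
        // to false and are not overridden anywhere (e.g. no c3p0.properties).
        return cfg.buildSessionFactory();
    }
}
```

As far as I understand the c3p0 defaults, nothing here should add per-transaction round-trips, but I may be misreading them.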