I have a table in PostgreSQL that I need to read into memory. It is a very small table, with only three columns and 200 rows, and I just do a select col1, col2, col3 from my_table on the whole thing.
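Concretely (column and table names as in the question), the entire read is just:

```sql
-- Fetch the whole table; no WHERE clause, no joins, no ORDER BY.
SELECT col1, col2, col3 FROM my_table;
```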
On the development machine this is very fast (less than 1 ms), even though that machine is a VirtualBox VM running on a FileVault-encrypted Mac OS X host.
But on the production server it consistently takes 600 ms. The production server probably has lower specs, and its database version is older, too (7.3.x), but I don't think that alone can explain such a huge difference.
In both cases, I am running explain analyze on the database server itself, so network overhead cannot be the cause. The query execution plan is in both cases a simple sequential full table scan. There was also nothing else going on on the production machine at the time, so contention is ruled out, too.
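For concreteness, this is how I collect the timing on each server, run from a psql session directly on the machine hosting the database:

```sql
-- Run locally on the database server, so the reported time
-- excludes network round-trips to any client machine.
EXPLAIN ANALYZE SELECT col1, col2, col3 FROM my_table;
```

On both machines the plan shows a single Seq Scan node; only the reported runtime differs.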
How can I find out why this is so slow, and what can I do about it?