I have a table in PostgreSQL that I need to read into memory. It is a very small table, with only three columns and 200 rows, and I just do a select col1, col2, col3 from my_table on the whole thing.

On the development machine this is very fast (less than 1 ms), even though that machine is a VirtualBox VM running inside a Mac OS FileVault-encrypted volume.

But on the production server it consistently takes 600 ms. The production server may have lower specs, and the database version is older, too (7.3.x), but that alone cannot explain the huge difference, I think.

In both cases, I am running explain analyze on the db server itself, so it cannot be the network overhead. The query execution plan is in both cases a simple sequential full table scan. There was also nothing else going on on the production machine at the time, so contention is out, too.

How can I find out why this is so slow, and what can I do about it?

A: 

What happens if you run the query several times? The first run should be slow, but the following ones should be faster, because the first execution puts the data into the cache (a quick way to check this is sketched below).

BTW: if you do a SELECT ... FROM without any restriction, it's 100% normal that you get a seq scan; you have to scan the whole table to retrieve the values, and since you have no restrictions, there is no point in an index scan.

Don't hesitate to post the result of your Explain Analyze query.

PostgreSQL 7.3 is really old; is there no option to upgrade to a more modern version?

chburd
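
As chburd suggests, a quick way to check the caching effect is to run the same statement twice in a row on the production server and compare the reported runtimes (the query is the one from the question):

    EXPLAIN ANALYZE SELECT col1, col2, col3 FROM my_table;
    -- run it again immediately; by now the table should be in the buffer cache
    EXPLAIN ANALYZE SELECT col1, col2, col3 FROM my_table;
    -- compare the "Total runtime" lines of the two outputs
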
It does not get faster the second time around. I have no problem with the sequential scan, as that would be the ideal access path for reading the whole table. I just want that scan to take less time...
Thilo
+4  A: 

Sounds like perhaps you haven't been VACUUMing this database properly? 7.3 is way too old to have AutoVacuum, so this is something you must do manually (a cron job is recommended). If you have had many updates to this table over time and have not run VACUUM, it will be very slow to access.

Magnus Hagander
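
As a sketch, the manual maintenance Magnus describes boils down to running something like the statement below against the affected table; on 7.3 it can be scheduled via a nightly cron job calling vacuumdb (schedule and wrapper not shown, they are up to the operations setup):

    -- reclaims dead row versions for reuse and refreshes planner statistics
    VACUUM ANALYZE my_table;
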
can I find out when VACUUM was last done?
Thilo
@Thilo: on 7.3 - you most likely can't find out.
depesz
While there's no really good way, one thing you can do is to compare the size of the table files on disk (check relfilenode in pg_class to find the filename) to the amount of data you *think* should be in the table. If that's way off, you need to run VACUUM FULL and REINDEX on the table. Oh, and as many others have said, upgrade to a supported version of Postgres.
Magnus Hagander
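
A minimal sketch of that lookup; the database name 'mydb' is an assumption, and the file to inspect lives under $PGDATA/base/<database oid>/<relfilenode>:

    -- find the OIDs needed to locate the table's data file on disk
    SELECT d.oid AS database_oid, c.relfilenode
    FROM pg_database d, pg_class c
    WHERE d.datname = 'mydb'        -- assumed database name
      AND c.relname = 'my_table';
    -- then compare the size of $PGDATA/base/<database_oid>/<relfilenode>
    -- (e.g. with ls -l) to the amount of data you expect in the table
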
+2  A: 

It's clearly table bloat. Run VACUUM ANALYZE on the table in question. Also: upgrade, 7.3 is not even supported anymore.

depesz
VACUUM seems to be spot-on. I have to ask the operations guys to do that. Just to confirm that the table would shrink massively, how can I see how many blocks it takes up right now? I can then compare that with what I have on my development machine.
Thilo
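
On 7.3 a rough answer can be read from pg_class; note that the figures are only as fresh as the last VACUUM or ANALYZE on the table:

    -- relpages = size in 8 kB blocks, reltuples = estimated row count,
    -- both as of the last VACUUM/ANALYZE on this table
    SELECT relpages, reltuples
    FROM pg_class
    WHERE relname = 'my_table';
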
It will *not* shrink massively. It might shrink some, but I wouldn't really count on it.
depesz
If it will not shrink, what does VACUUM do?
Thilo
Reduce fragmentation: it marks the space taken by dead row versions as reusable, so the table stops growing; only VACUUM FULL actually returns space to the operating system.
Alex Brasetvik
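
For completeness, the heavier-weight route mentioned in Magnus' comment above does compact the table on disk; it takes an exclusive lock, so it belongs in a maintenance window:

    -- rewrites the table without dead space, then rebuilds its indexes
    VACUUM FULL my_table;
    REINDEX TABLE my_table;
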