I am using the pg_buffercache module to find which relations are hogging my RAM cache. For example, when I run this query:

SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b INNER JOIN pg_class c
    ON b.relfilenode = c.relfilenode AND
       b.reldatabase IN (0, (SELECT oid FROM pg_database WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY 2 DESC
LIMIT 10;

I discover that sample_table is using 120 buffers.

How much is 120 buffers in bytes?

+1  A: 

PostgreSQL has a hard-coded block size of 8192 bytes -- see the read-only block_size parameter (SHOW block_size;). This used to be a number to keep in mind whenever you edited the config to set shared_buffers and friends, but the config now accepts suffixes like MB, which do the conversion for you.
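So the 120 buffers from the question work out to 120 × 8192 = 983040 bytes, i.e. 960 kB. A quick sketch of the conversion in SQL (current_setting reads the server's actual block size rather than assuming 8192):

```sql
-- 120 buffers * 8192 bytes/buffer = 983040 bytes (960 kB)
SELECT 120 * current_setting('block_size')::int     AS bytes,
       pg_size_pretty(120 * current_setting('block_size')::bigint) AS pretty;
```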

It is possible, with some effort, to build PostgreSQL with a different block size. For a minority of applications there might be a better size, but the code makes assumptions about the block size in a large number of places.

Edmund
Thank you. Does shared_buffers = 2GB mean I have 2 * 1024^3 / 8192 = 262144 buffers?
Konrad Garus
Yup! Querying pg_buffercache may of course get slower with that many buffers, but that's a small price to pay if 2GB is the best cache size for your PG cluster.
Edmund
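You can double-check the arithmetic from the server itself; in pg_settings, shared_buffers is reported in units of 8 kB blocks, so for shared_buffers = 2GB the setting value should match the computed count (a sketch, assuming the default 8192-byte block size):

```sql
-- 2 GB = 2 * 1024^3 bytes; divided by the 8192-byte block size -> 262144 buffers
SELECT (2 * 1024.0 ^ 3 / 8192)::int AS computed_buffers;

-- shared_buffers as the server reports it, in 8 kB units
SELECT setting FROM pg_settings WHERE name = 'shared_buffers';
```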