We currently have a Postgres database with around 100 tables, 20 of them with more than 5,000,000 rows. The master DB server runs on Debian with 32 GB of RAM and 8 processors.

In addition to the master DB, we have a slave DB replicated using Slony.

Our application uses Java and the Hibernate framework for SQL queries, with c3p0 as the connection pool.

Our problem is that we currently see high load during peak times (load average around 30, versus around 4 during low-traffic times). We are not currently load balancing SELECT statements between the master and the slave.

Configuration of Postgres master DB is as follows:

shared_buffers = 6144MB
temp_buffers = 16MB
max_prepared_transactions = 20
work_mem = 128MB
max_fsm_pages = 409800

autovacuum is on.

c3p0 Hibernate connection pool configuration is:

 <property name="c3p0.min_size">3</property>
 <property name="c3p0.max_size">200</property>
 <property name="c3p0.timeout">300</property>
 <property name="c3p0.max_statements">1000</property>
 <property name="c3p0.idle_test_period">300</property>

One major problem we face is that the SELECT queries are very complex, with lots of joins and even unions.

What would be a solution to tune and scale our current system and avoid high load?

Upgrade hardware? Load balancing between master and slave? Bad configuration?

Any suggestions for a better load-balancing/replication system than Slony?

Optimizing the SQL statements is not possible because we are not developing the software.

A: 

Unless you are using two-phase commit (2PC), your max_prepared_transactions should be 0.

work_mem is too high for 200 connections: it is allocated per sort or hash operation, not per connection, so 200 connections running complex queries at 128MB each can exhaust RAM and push the server into swapping, which will be disastrous for performance. You'll probably want to drop it to 32MB or so.
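A minimal postgresql.conf sketch of those two changes (the 32MB figure is the rough suggestion above, not a measured value):

max_prepared_transactions = 0   # no two-phase commit in use
work_mem = 32MB                 # allocated per sort/hash operation, not per connection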

That said, limit your connection pool to well under 200 connections; around 50 or so will probably give you the best performance.
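For example, a hedged adjustment to the c3p0 configuration shown above (50 is the rough figure suggested here; tune it against your actual workload):

 <property name="c3p0.min_size">3</property>
 <property name="c3p0.max_size">50</property>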

As for the FSM, that depends completely on what your access pattern is. If you upgrade to 8.4, the free space map is autotuned, so that alone may be a reason to upgrade (there are many more, of course).
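On 8.3 and earlier you can check whether max_fsm_pages is large enough by running a database-wide VACUUM VERBOSE as a superuser and reading the summary at the end of the output; the exact wording varies by version and the numbers below are illustrative:

VACUUM VERBOSE;
-- ...
-- INFO:  free space map contains 285000 pages in 400 relations
-- NOTICE: number of page slots needed (520000) exceeds max_fsm_pages (409800)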

Pretty hard to say much more than that without knowing a lot more about the system. You may want to go look for one of the PostgreSQL consulting companies to give you a complete performance review.

In general, with such a small database, it should be possible to get pretty good performance if you set things up properly.

Magnus Hagander
+1  A: 

There's a basic intro to the PostgreSQL parameters to adjust, called Tuning Your PostgreSQL Server, that you should read. You're not touching two of the most important settings that impact performance: effective_cache_size, which will screw up query planning if set badly, and checkpoint_segments, which you have to bump up to get decent write speed from the database. If you have complicated queries, a look at default_statistics_target is in order too. You might also want to log difficult queries and then use EXPLAIN to find out why they're running slowly; a sketch of these settings follows.
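A hedged postgresql.conf sketch of those settings for a server like the one described (32 GB RAM); the numbers are illustrative starting points, not measurements:

effective_cache_size = 24GB        # rough guess: ~3/4 of RAM available for caching
checkpoint_segments = 32           # raise well above the default of 3 for decent write throughput
default_statistics_target = 200    # more detailed statistics help plan complex joins
log_min_duration_statement = 1000  # log any statement that runs longer than 1000 ms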

Greg Smith
Using the results of EXPLAIN, you may be able to define new indexes that improve the performance of the SQL statements without changing them, as in the sketch below.
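For example (the table, column, and index names here are hypothetical, purely to illustrate the workflow):

-- Suppose EXPLAIN ANALYZE shows a sequential scan with a filter on a large table:
EXPLAIN ANALYZE
SELECT o.id, c.name
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE o.status = 'OPEN';
--   ->  Seq Scan on orders o  (... rows=5000000 ...)
--         Filter: (status = 'OPEN')

-- Indexes matching the filter and the join key may turn that into index scans:
CREATE INDEX orders_status_idx ON orders (status);
CREATE INDEX orders_customer_id_idx ON orders (customer_id);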
crowne