When I began, I used pg_dump with the default plain format. I was unenlightened.

Research revealed to me time and file size improvements with pg_dump -Fc | gzip -9 -c > dumpfile.gz. I was enlightened.

When it came time to create the database anew,

# create tablespace dbname location '/SAN/dbname';
# create database dbname tablespace dbname;
# alter database dbname set temp_tablespaces = dbname;

% gunzip dumpfile.gz              # to evaluate restore time without a piped uncompression
% pg_restore -d dbname dumpfile   # into a new, empty database defined above

I felt unenlightened: the restore took 12 hours to create a database that's only a fraction of its eventual size:

# select pg_size_pretty(pg_database_size('dbname'));
47 GB

Because there are predictions this database will be a few terabytes, I need to look at improving performance now.

Please, enlighten me.

+1  A: 

As you may have guessed simply by the fact that compressing the backup results in faster performance, your backup is I/O bound. This should come as no surprise as backup is pretty much always going to be I/O bound. Compressing the data trades I/O load for CPU load, and since most CPUs are idle during monster data transfers, compression comes out as a net win.

So, to speed up backup/restore times, you need faster I/O. Beyond reorganizing the database to not be one huge single instance, that's pretty much all you can do.

Will Hartung
A: 

Isn't it possible to get pg_restore to take a stream?

If so, then just pipe the gunzip output to pg_restore.
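For a custom-format (-Fc) archive, pg_restore can read from standard input, though not when combined with the parallel -j option, which needs a seekable file. A sketch, using the dump file and database names from the question:

```shell
# Decompress and restore in one pass; no uncompressed copy
# of the archive is ever written to disk.
gunzip -c dumpfile.gz | pg_restore -d dbname
```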

Earlz
+1  A: 

Two issues/ideas:

  1. By specifying -Fc, the pg_dump output is already compressed. The compression is not maximal, so you may find some space savings by using "gzip -9", but I would wager it's not enough to warrant the extra time (and I/O) used compressing and uncompressing the -Fc version of the backup.

  2. If you are using PostgreSQL 8.4.x you can potentially speed up the restore from a -Fc backup with the new pg_restore command-line option "-j n" where n=number of parallel connections to use for the restore. This will allow pg_restore to load more than one table's data or generate more than one index at the same time.
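Sketches of both points, assuming a dump named dumpfile and an 8-core machine (the flag values are illustrative, not tuned recommendations):

```shell
# 1. -Fc output is already compressed; -Z sets the compression
#    level (0-9) directly, avoiding a separate gzip pass.
pg_dump -Fc -Z 9 dbname > dumpfile

# 2. pg_restore 8.4+: load data and build indexes over several
#    parallel connections. The archive must be a regular file
#    (not a pipe), since the workers need to seek within it.
pg_restore -j 8 -d dbname dumpfile
```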

Matthew Wood
We are currently at 8.3; new reason to upgrade.
Joe
You can use the 8.4 version of pg_restore with an 8.3 version of the server. Just make sure you use pg_dump from 8.3.
Magnus Hagander
Bah. We are stuck at 8.3 because we use the Solaris 10 package install of Postgres and "there is no plan to integrate PG8.4 into S10 at this moment." [Ref. http://www.mail-archive.com/[email protected]/msg136829.html] I would have to take on the task of installing and maintaining open-source Postgres myself. Unsure if we can do that here... Feh.
Joe
+1  A: 

First check that you are getting reasonable I/O performance from your disk setup. Then check that your PostgreSQL installation is appropriately tuned for the restore:

  1. shared_buffers should be set correctly.
  2. maintenance_work_mem should be increased during the restore.
  3. full_page_writes should be off during the restore.
  4. wal_buffers should be increased to 16MB during the restore.
  5. checkpoint_segments should be increased to something like 16 during the restore.
  6. You shouldn't have any unreasonable logging on (like logging every statement executed).
  7. auto_vacuum should be disabled during the restore.
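As a rough sketch, the restore-time settings above might look like this in postgresql.conf. The values are illustrative, not tuned recommendations, and the restore-only ones (full_page_writes, autovacuum) should be reverted afterwards:

```
# postgresql.conf fragment -- illustrative values only
shared_buffers = 2GB           # size to the machine as usual
maintenance_work_mem = 1GB     # speeds up CREATE INDEX during restore
full_page_writes = off         # restore only; turn back on afterwards
wal_buffers = 16MB
checkpoint_segments = 16       # fewer, cheaper checkpoints during bulk load
autovacuum = off               # restore only; re-enable afterwards
```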

If you are on 8.4 also experiment with parallel restore, the --jobs option for pg_restore.

Ants Aasma
Hugely useful performance settings. I'll be looking into optimizing them for the future, thanks.
Joe
+3  A: 

You might want to check this blog post that I wrote some time ago.

depesz
Thanks muchly for the research.
Joe
+1 for nice article
Unreason
+1  A: 

I assume you need backup, not a major upgrade of database.

For backup of large databases you should setup continuous archiving instead of pg_dump.

  1. Set up WAL archiving.

  2. Make your base backups, for example daily, by using:
    psql template1 -c "select pg_start_backup('`date +%F-%T`')"
    rsync -a --delete /var/lib/pgsql/data/ /var/backups/pgsql/base/
    psql template1 -c "select pg_stop_backup()"

A restore would be as simple as restoring the database files, plus the WAL logs no older than the pg_start_backup time, from the backup location and starting Postgres. And it will be much faster.
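A restore sketch under these assumptions (8.3/8.4-era recovery.conf; the data, backup, and WAL-archive paths are illustrative, following the rsync example above):

```shell
# Copy the base backup back into place (server stopped).
rsync -a /var/backups/pgsql/base/ /var/lib/pgsql/data/

# Tell the server where to find archived WAL segments to replay.
cat > /var/lib/pgsql/data/recovery.conf <<'EOF'
restore_command = 'cp /var/backups/pgsql/wal/%f "%p"'
EOF

# On startup, Postgres replays the WAL, then comes up normally.
pg_ctl -D /var/lib/pgsql/data start
```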

Tometzky
We didn't look at PITR (WAL archiving) because the system is not very transaction heavy but will retain many historical records instead. However, now that I think about it, a more "incremental" backup may help matters. I shall investigate. Thanks.
Joe
A: 

In addition to the other suggestions, don't forget to tune your configuration, including changes to *maintenance_work_mem* and *checkpoint_segments*.

See this page for performance hints for bulk inserting data into PostgreSQL.

hmallett
A: 
zcat dumpfile.gz | pg_restore -d db_name

Removes the full write of the uncompressed data to disk, which is currently your bottleneck.

Richo