We have a rather specific application that uses PostgreSQL 8.3 as a storage backend (using Python and psycopg2). The operations we perform on the important tables are mostly inserts or updates (rarely deletes or selects).

For sanity reasons we have created our own Data Mapper-like layer that works reasonably well, but it has one big bottleneck: update performance. Of course, I'm not expecting the update/replace scenario to be as speedy as the 'insert into an empty table' one, but it would be nice to get a bit closer.

Note that this system is free from concurrent updates.

We always set all the fields of each row on an update, which is why I use the word 'replace' in my tests. I've so far tried two approaches to our update problem:

  1. Create a replace() procedure that takes an array of rows to update:

    CREATE OR REPLACE FUNCTION replace_item(data item[]) RETURNS VOID AS $$
    BEGIN
        FOR i IN COALESCE(array_lower(data,1),0) .. COALESCE(array_upper(data,1),-1) LOOP
           UPDATE item SET a0=data[i].a0,a1=data[i].a1,a2=data[i].a2 WHERE key=data[i].key;
        END LOOP;
    END;
    $$ LANGUAGE plpgsql;
    
  2. Create an insert_or_replace rule so that everything but the occasional delete becomes multi-row inserts (a psycopg2 sketch of issuing such an insert follows the list):

    CREATE RULE "insert_or_replace" AS
        ON INSERT TO "item"
        WHERE EXISTS(SELECT 1 FROM item WHERE key=NEW.key)
        DO INSTEAD
            (UPDATE item SET a0=NEW.a0,a1=NEW.a1,a2=NEW.a2 WHERE key=NEW.key);
    

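For reference, here is a minimal sketch (not taken from perftest.py) of how a multi-row insert might be issued from psycopg2 so that it exercises the rule in approach 2; the connection settings are placeholders:

    import psycopg2

    # Placeholder connection settings; perftest.py uses its own db.connect().
    conn = psycopg2.connect("dbname=test host=localhost")
    cur = conn.cursor()

    rows = [('00:00:00:00:00:01', 'v0', 'v1', 'v2'),
            ('00:00:00:00:00:02', 'v3', 'v4', 'v5')]

    # Build one multi-row INSERT; with the insert_or_replace rule installed,
    # rows whose key already exists are turned into UPDATEs by the rule.
    # mogrify() quotes each tuple; it returns bytes on Python 3, hence decode().
    values = ','.join(cur.mogrify("(%s,%s,%s,%s)", r).decode() for r in rows)
    cur.execute("INSERT INTO item (key, a0, a1, a2) VALUES " + values)
    conn.commit()
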
Both of these speed up the updates a fair bit, although the latter slows down inserts somewhat:

Multi-row insert           : 50000 items inserted in  1.32 seconds averaging 37807.84 items/s
executemany() update       : 50000 items updated  in 26.67 seconds averaging  1874.57 items/s
update_andres              : 50000 items updated  in  3.84 seconds averaging 13028.51 items/s
update_merlin83 (i/d/i)    : 50000 items updated  in  1.29 seconds averaging 38780.46 items/s
update_merlin83 (i/u)      : 50000 items updated  in  1.24 seconds averaging 40313.28 items/s
replace_item() procedure   : 50000 items replaced in  3.10 seconds averaging 16151.42 items/s
Multi-row insert_or_replace: 50000 items inserted in  2.73 seconds averaging 18296.30 items/s
Multi-row insert_or_replace: 50000 items replaced in  2.02 seconds averaging 24729.94 items/s

Random notes about the test run:

  • All tests are run on the same computer the database resides on, connecting to localhost.
  • Inserts and updates are applied to the database in batches of 500 items, each sent in its own transaction (UPDATED); see the sketch after this list.
  • All update/replace tests used the same values as were already in the database.
  • All data was escaped using the psycopg2 adapt() function.
  • All tables are truncated and vacuumed before use (ADDED; in previous runs only truncation happened).
  • The table looks like this:

    CREATE TABLE item (
        key MACADDR PRIMARY KEY,
        a0 VARCHAR,
        a1 VARCHAR,
        a2 VARCHAR
    )
    
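
To illustrate the batching, the per-transaction chunking looks roughly like the sketch below; this is a simplification that assumes all_rows is a list of (key, a0, a1, a2) tuples and reuses the conn/cur pair from the earlier sketch, so the real perftest.py will differ in details:

    def batches(rows, size=500):
        # Yield successive chunks of `size` rows.
        for i in range(0, len(rows), size):
            yield rows[i:i + size]

    for chunk in batches(all_rows):
        # The plain executemany() update used as the baseline above;
        # each 500-item batch is committed in its own transaction.
        cur.executemany(
            "UPDATE item SET a0=%s, a1=%s, a2=%s WHERE key=%s",
            [(a0, a1, a2, key) for (key, a0, a1, a2) in chunk])
        conn.commit()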

So, the real question is: How can I speed up update/replace operations a bit more? (I think these findings might be 'good enough', but I don't want to give up without tapping the SO crowd :)

Also, anyone's hints towards a more elegant replace_item(), or evidence that my tests are completely broken, would be most welcome.

The test script is available here if you'd like to attempt to reproduce my results. Remember to check it first though... it WorksForMe, but...

You will need to edit the db.connect() line to suit your setup.

EDIT

Thanks to andres in #postgresql @ freenode, I have another test with a single-query update, much like a multi-row insert (listed as update_andres above).

UPDATE item
SET a0=i.a0, a1=i.a1, a2=i.a2 
FROM (VALUES ('00:00:00:00:00:01', 'v0', 'v1', 'v2'), 
             ('00:00:00:00:00:02', 'v3', 'v4', 'v5'),
             ...
      ) AS i(key, a0, a1, a2)
WHERE item.key=i.key::macaddr;
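
A rough sketch of how such a statement can be assembled from Python using psycopg2's adapt() (the escaping mechanism mentioned in the notes above); the helper name is mine and the actual script may build it differently:

    from psycopg2.extensions import adapt

    def build_update_values(rows):
        # rows are (key, a0, a1, a2) tuples; adapt() quotes each value.
        # getquoted() returns bytes on Python 3, hence the decode().
        values = ','.join(
            '(%s,%s,%s,%s)' % tuple(adapt(v).getquoted().decode() for v in row)
            for row in rows)
        return ("UPDATE item SET a0=i.a0, a1=i.a1, a2=i.a2 "
                "FROM (VALUES %s) AS i(key, a0, a1, a2) "
                "WHERE item.key=i.key::macaddr" % values)

    cur.execute(build_update_values(rows))
    conn.commit()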

EDIT

Thanks to merlin83 in #postgresql @ freenode and jug/jwp below, I have another test with an insert-to-temp/delete/insert approach (listed as "update_merlin83 (i/d/i)" above).

INSERT INTO temp_item (key, a0, a1, a2)
    VALUES ('00:00:00:00:00:01', 'v0', 'v1', 'v2'),
           ('00:00:00:00:00:02', 'v3', 'v4', 'v5'),
           ...;

DELETE FROM item
USING temp_item
WHERE item.key=temp_item.key;

INSERT INTO item (key, a0, a1, a2)
    SELECT key, a0, a1, a2
    FROM temp_item;
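
The question doesn't show how temp_item is created and filled; one plausible way (an assumption on my part, loading with COPY as jwp suggests below instead of a VALUES list, and reusing the rows list from the earlier sketches) would be:

    from io import StringIO  # use StringIO.StringIO on Python 2

    # A scratch table with the same columns but no constraints; temp tables
    # are dropped automatically at the end of the session.
    cur.execute("""
        CREATE TEMP TABLE temp_item (
            key MACADDR,
            a0 VARCHAR,
            a1 VARCHAR,
            a2 VARCHAR
        )""")

    # Stream the rows in via COPY (tab-separated); this assumes the values
    # themselves contain no tabs, newlines or backslashes.
    buf = StringIO(''.join('\t'.join(row) + '\n' for row in rows))
    cur.copy_from(buf, 'temp_item', columns=('key', 'a0', 'a1', 'a2'))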

My gut feeling is that these tests are not very representative of the performance in a real-world scenario, but I think the differences are large enough to give an indication of the most promising approaches for further investigation. The perftest.py script contains all the updates as well for those of you who want to check it out. It's fairly ugly though, so don't forget your goggles :)

EDIT

andres in #postgresql @ freenode pointed out that I should test with an insert-to-temp/update variant (listed as "update_merlin83 (i/u)" above).

INSERT INTO temp_item (key, a0, a1, a2)
    VALUES ('00:00:00:00:00:01', 'v0', 'v1', 'v2'),
           ('00:00:00:00:00:02', 'v3', 'v4', 'v5'),
           ...;

UPDATE item
SET a0=temp_item.a0, a1=temp_item.a1, a2=temp_item.a2
FROM temp_item
WHERE item.key=temp_item.key;

EDIT

Probably final edit: I changed my script to match our load scenario better, and it seems the numbers hold even when scaling things up a bit and adding some randomness. If anyone gets very different numbers from some other scenario I'd be interested in knowing about it.

A: 

In your insert_or_replace rule, try this:

WHERE EXISTS(SELECT 1 FROM item WHERE key=NEW.key LIMIT 1)

instead of

WHERE EXISTS(SELECT 1 FROM item WHERE key=NEW.key)

As noted in comments, that will probably do nothing. All I have to add, then, is that you can always speed up INSERT/UPDATE performance by removing indexes. This will likely not be something you want to do unless you find your table is overindexed, but that should at least be checked out.
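
For illustration, the drop-and-recreate pattern alluded to here might look like the sketch below; the secondary index is purely hypothetical, since the test table only carries its primary key (which the updates need for their lookups):

    # Hypothetical secondary index on a0; dropping it for the duration of a
    # bulk load and rebuilding it afterwards is usually cheaper than
    # maintaining it row by row.
    cur.execute("DROP INDEX IF EXISTS item_a0_idx")

    do_bulk_load(cur)  # placeholder for whichever load strategy is used

    cur.execute("CREATE INDEX item_a0_idx ON item (a0)")
    conn.commit()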

chaos
Most probably that's unnecessary - excerpt from the docs (http://www.postgresql.org/docs/current/static/functions-subquery.html#AEN15270): "The subquery will generally only be executed far enough to determine whether at least one row is returned, not all the way to completion."
Milen A. Radev
Ah, thanks. Didn't know how smart EXISTS was. Now I do. :)
chaos
The key is unique, so it will only ever return one row. Nevertheless, I tried, and there was no noticeable change in performance either way. Thanks though!
Henrik Gustafsson
A: 

In Oracle, locking the table would definitely help. You might want to try that with PostgreSQL, too.

ammoQ
I tried running all tests with the table locked in all transactions; no change.
Henrik Gustafsson
+1  A: 

I had a similar situation a few months ago and ended up getting the largest speed boost from a tuned chunk/transaction size. You may also want to check the log for a checkpoint warning during the test and tune appropriately.

BML
I will certainly look for the checkpoint warnings, it looks very relevant; thanks!
Henrik Gustafsson
+1  A: 

Sounds like you'd see benefits from using WAL (Write Ahead Logging) with a UPS to cache your updates between disk writes.

wal_buffers: This setting decides the number of buffers WAL (Write Ahead Log) can have. If your database has many write transactions, setting this value a bit higher than the default could result in better usage of disk space. Experiment and decide. A good start would be around 32-64, corresponding to 256-512K of memory.

http://www.varlena.com/GeneralBits/Tidbits/perf.html
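
As a small aside (not from the answer above): the current value can be checked from the same psycopg2 connection, but wal_buffers itself can only be changed in postgresql.conf and requires a server restart.

    # Show the current WAL buffer setting; changing it means editing
    # postgresql.conf (e.g. wal_buffers = 64) and restarting the server.
    cur.execute("SHOW wal_buffers")
    print(cur.fetchone()[0])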

SpliFF
+2  A: 

The usual way I do these things in pg is: load the raw data matching the target table into a temp table (no constraints) using COPY, merge (the fun part), profit.

I wrote a merge_by_key function specifically for these situations:

http://mbk.projects.postgresql.org/

The docs aren't terribly friendly, but I'd suggest giving it a good look.

jwp
The points of the general process being: load into temp to avoid any network round-trip costs and to avoid creating multiple cursors (portals) for each loaded row (yeah, it's fast with executemany, but COPY wtf-pwns it with respect to efficiency). Use a merge function/process to avoid creating rules/triggers that alter the semantics of an insert statement. I've done it both ways, and I've always preferred the merge process because it's explicit. If the merge process is not efficient enough, you'll need to look at index suspension (recreation)/partitioning or http://pgfoundry.org/projects/pgbulkload/
jwp
A: 

I am working with a database of approximately a million rows, using Python to parse documents and populate the table with terms. The insert statements work fine, but the update statements get extremely time-consuming as the table size grows.

Is it normal for an update statement to take significantly longer than an insert statement? In my case, updates are slowing the program down many times over.

I think this should be a new question.
Henrik Gustafsson
A: 

For updates, you can lower the fillfactor for the tables and the indexes, and that might help:

http://www.postgresql.org/docs/current/static/sql-createtable.html

http://www.postgresql.org/docs/current/static/sql-createindex.html
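
A minimal sketch of what that tuning could look like for this table; the fillfactor value of 70 is an arbitrary starting point, and item_pkey is just PostgreSQL's default name for the primary-key index, so verify it first:

    # Leave ~30% free space on each heap/index page so that updated row
    # versions can stay on the same page (HOT updates in 8.3).
    cur.execute("ALTER TABLE item SET (fillfactor = 70)")
    cur.execute("ALTER INDEX item_pkey SET (fillfactor = 70)")
    conn.commit()
    # Only newly written pages honor the new setting; existing data needs a
    # table rewrite (e.g. CLUSTER) and a REINDEX to be repacked.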

mikelikespie