Hello,

I have two types of queries I run often on two large datasets. They run much slower than I would expect them to.

The first type is a sequential scan updating all records:

Update rcra_sites Set street = regexp_replace(street,'/','','i')

rcra_sites has 700,000 records. It takes 22 minutes from pgAdmin! I wrote a vb.net function that loops through each record and sends an update query for each record (yes, 700,000 update queries!) and it runs in less than half the time. Hmmm....

The second type is a simple update with a relation and then a sequential scan:

Update rcra_sites as sites 
Set violations='No' 
From narcra_monitoring as v 
Where sites.agencyid=v.agencyid and v.found_violation_flag='N'

narcra_monitoring has 1,700,000 records. This takes 8 minutes. The query planner refuses to use my indexes. The query runs much faster if I first run set enable_seqscan = false; but I would prefer that the query planner do its job.

I have appropriate indexes, I have vacuumed and analyzed. I optimized my shared_buffers and effective_cache_size best I know to use more memory since I have 4GB. My hardware is pretty darn good. I am running v8.4 on Windows 7.

Is PostgreSQL just this slow? Or am I still missing something?

Thanks! Brad

+1  A: 

Possibly try reducing your random_page_cost (default: 4) compared to seq_page_cost: this will reduce the planner's preference for seq scans by making index-driven random access look more attractive.
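For example (a rough sketch; these values are only a starting point, and you would normally test them per session before changing postgresql.conf):

SET random_page_cost = 2.0;  -- default is 4.0
SET seq_page_cost = 1.0;     -- default is 1.0
EXPLAIN
UPDATE rcra_sites AS sites
SET violations = 'No'
FROM narcra_monitoring AS v
WHERE sites.agencyid = v.agencyid AND v.found_violation_flag = 'N';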

Another thing to bear in mind is that MVCC means that updating a row is fairly expensive. In particular, updating every row in a table requires doubling the amount of storage for the table, until it can be vacuumed. So in your first query, you may want to qualify your update:

UPDATE rcra_sites SET street = regexp_replace(street, '/', '', 'i')
                  WHERE street ~ '/'

(AFAIK PostgreSQL doesn't automatically suppress the update if it looks like you're not actually changing anything. ISTR a standard trigger function was added in 8.4 to let you do that, but it's perhaps better to handle it on the client side.)
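If I remember right, that function is suppress_redundant_updates_trigger(); a minimal sketch of attaching it (the trigger name here is just an example):

-- Skips UPDATEs that would not change any column value,
-- avoiding dead row versions and index churn for those rows.
CREATE TRIGGER rcra_sites_skip_redundant
BEFORE UPDATE ON rcra_sites
FOR EACH ROW
EXECUTE PROCEDURE suppress_redundant_updates_trigger();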

araqnid
These settings were commented out by default. I changed random_page_cost to 2.0 and seq_page_cost to 3.0. The query planner is now deciding to use both indexes. Nice! Yes, I should qualify the records to update more often, but in many cases I really am updating all rows. The time of the query is now down to 6 minutes, but that still seems too long to me.
Brad Mathews
Setting seq_page_cost higher than random_page_cost feels wrong to me, but if it works for you... The slowness of updating most of the table is the cost of letting concurrent transactions access the old versions of the rows while the update is in progress. Other databases work differently: such a big update could lock up the entire table or exhaust undo/redo space in other systems, for example. Sorry I can't think of much else to help...
araqnid
A: 

When a row is updated, a new row version is written.

If the new row version does not fit in the same disk block, then every index on the table needs a new entry pointing to the new row version.

It is not just indexes on the updated columns that need new entries.

If you have a lot of indexes on rcra_sites, and only one or two frequently updated fields, then you might gain by separating the frequently updated fields into a table of their own.
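A hypothetical sketch of that split, assuming the violations flag is the frequently updated field and agencyid uniquely identifies a site (column names and types here are guesses):

-- Narrow table holding only the frequently updated data.
CREATE TABLE rcra_site_violations (
    agencyid   integer PRIMARY KEY REFERENCES rcra_sites (agencyid),
    violations text
);

-- The bulk update now rewrites only narrow rows and touches a single index.
UPDATE rcra_site_violations AS sv
SET violations = 'No'
FROM narcra_monitoring AS v
WHERE sv.agencyid = v.agencyid AND v.found_violation_flag = 'N';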

You can also reduce the fillfactor percentage below its default of 100, so that some updates can place the new row version in the same block as the old one, in which case the indexes pointing at that block do not need new entries.
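For example (a sketch; 70 is an arbitrary figure to experiment with):

-- Reserve roughly 30% free space in each block for updated row versions.
ALTER TABLE rcra_sites SET (fillfactor = 70);
-- The new fillfactor applies to blocks written from now on; rewriting the
-- table (for example with CLUSTER on one of its indexes) repacks existing
-- rows with that free space reserved.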

Stephen Denne