I spent over an hour today puzzling over a query plan that I couldn't understand. The query was an UPDATE and it just wouldn't run AT ALL. It appeared completely stuck, though not actually blocked: pg_locks showed it wasn't waiting for anything either. Now, I don't consider myself the best or the worst query-plan reader, but I find this one exceptionally difficult. I'm wondering: how does one read these? Is there a methodology that the Pg aces follow to pinpoint the error?
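
For reference, here is roughly the check I ran against pg_locks (a minimal sketch; it lists any locks a backend is still waiting on, and it came back empty for me):

    -- Ungranted locks = what each backend is currently waiting for.
    -- An empty result means the query is running, just not making
    -- visible progress -- it is not blocked on a lock.
    SELECT locktype, relation::regclass AS relation, pid, mode
    FROM   pg_locks
    WHERE  NOT granted;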

I plan on asking another question about how to work around this issue, but right now I'm asking specifically how to read these types of plans. Please do not point to any generic tutorial unless it specifically addresses the problem highlighted below the query plan.

                                         QUERY PLAN                                         
--------------------------------------------------------------------------------------------
 Nested Loop Anti Join  (cost=47680.88..169413.12 rows=1 width=77)
   Join Filter: ((co.fkey_style = v.chrome_styleid) AND (co.name = o.name))
   ->  Nested Loop  (cost=5301.58..31738.10 rows=1 width=81)
         ->  Hash Join  (cost=5301.58..29722.32 rows=229 width=40)
               Hash Cond: ((io.lot_id = iv.lot_id) AND ((io.vin)::text = (iv.vin)::text))
               ->  Seq Scan on options io  (cost=0.00..20223.32 rows=23004 width=36)
                     Filter: (name IS NULL)
               ->  Hash  (cost=4547.33..4547.33 rows=36150 width=24)
                     ->  Seq Scan on vehicles iv  (cost=0.00..4547.33 rows=36150 width=24)
                           Filter: (date_sold IS NULL)
         ->  Index Scan using options_pkey on options co  (cost=0.00..8.79 rows=1 width=49)
               Index Cond: ((co.fkey_style = iv.chrome_styleid) AND (co.code = io.code))
   ->  Hash Join  (cost=42379.30..137424.09 rows=16729 width=26)
         Hash Cond: ((v.lot_id = o.lot_id) AND ((v.vin)::text = (o.vin)::text))
         ->  Seq Scan on vehicles v  (cost=0.00..4547.33 rows=65233 width=24)
         ->  Hash  (cost=20223.32..20223.32 rows=931332 width=44)
               ->  Seq Scan on options o  (cost=0.00..20223.32 rows=931332 width=44)
(17 rows)

The issue with this query plan - I believe I understand it - is probably best explained by RhodiumToad (he is definitely better at this than I am, so I'll bet on his explanation being better) of irc://irc.freenode.net/#postgresql:

oh, that plan is potentially disastrous. The problem with that plan is that it's running a hugely expensive hashjoin for each row. The problem is the rows=1 estimate from the other join: the planner thinks it's OK to put a hugely expensive query in the inner path of a nestloop where the outer path is estimated to return only one row, since, obviously, by the planner's estimate the expensive part will only be run once. But this has an obvious tendency to really mess up in practice. The problem is that the planner believes its own estimates. Ideally, the planner needs to know the difference between "estimated to return 1 row" and "not possible to return more than 1 row", but it's not at all clear how to incorporate that into the existing code.

He goes on to say:

it can affect any join, but usually joins against subqueries are the most likely

Now, when I read this plan the first thing I noticed was the Nested Loop Anti Join, which had a total cost of 169,413 (I'll stick to total costs here). This anti-join breaks down into the result of a Nested Loop at a cost of 31,738 and the result of a Hash Join at a cost of 137,424. Since 137,424 is much greater than 31,738, I figured the problem was the Hash Join. I then proceeded to EXPLAIN ANALYZE the Hash Join segment outside of the query. It executed in 7 seconds. I made sure there were indexes on (lot_id, vin) and (co.code, v.code) -- there were. I disabled seq_scan and hashjoin individually and noticed a speed increase of less than 2 seconds. Not nearly enough to account for why it wasn't progressing after an hour.
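
To be clear about what I mean by disabling those, this is the kind of session-local toggle I used (a sketch; the actual statement is elided):

    -- Session-local planner toggles, for comparing plans only;
    -- never set these globally.
    SET enable_seqscan  = off;
    SET enable_hashjoin = off;
    EXPLAIN ANALYZE SELECT ...;   -- the hash-join segment under test
    RESET enable_seqscan;
    RESET enable_hashjoin;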

But after all this, I was totally wrong! Yes, it was the slower part of the query, but the real problem was the rows=1 bit (I presume on the Nested Loop Anti Join). Is there a tutorial that will help me identify these types of issues? Or is it a bug (a limitation) in the planner, mis-estimating the number of rows? How am I supposed to read this plan and come to the same conclusion RhodiumToad did?

Is it simply the rows=1 estimate that is supposed to tip me off?

I did run VACUUM FULL ANALYZE on all of the tables involved, and this is PostgreSQL 8.4.

A: 

Did you ANALYZE the tables? And what does pg_stats have to say about them? The query plan is based on those statistics, so they have to be OK. And what version are you using? 8.4?

The costs are calculated using the statistics: the number of relpages, the number of rows, and the Planner Cost Constants settings in postgresql.conf.
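
You can look at those numbers directly (a sketch, using the table names from the plan):

    -- Per-table size statistics used for costing
    SELECT relname, relpages, reltuples
    FROM   pg_class
    WHERE  relname IN ('options', 'vehicles');

    -- Per-column statistics (null fraction, distinct values, ...)
    SELECT tablename, attname, null_frac, n_distinct
    FROM   pg_stats
    WHERE  tablename IN ('options', 'vehicles');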

work_mem is also involved; it might be too low and force the planner into a seqscan, killing performance.
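
Checking and raising it for one session is a cheap experiment (the 64MB here is only an example value; size it to your RAM):

    SHOW work_mem;              -- 1MB is the default
    SET work_mem = '64MB';      -- affects this session only
    -- re-run the EXPLAIN ANALYZE and compare the plan and timings
    RESET work_mem;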

Frank Heikens
I did run `VACUUM FULL ANALYZE` on all of the tables involved, and this is Postgresql 8.4.
Evan Carroll
+2  A: 

Seeing through issues like this requires some experience with where things can go wrong. But to find issues in query plans, validate the produced plan from the inside out: check that the row count estimates are sane and that the cost estimates match the time actually spent. By the way, the two cost estimates aren't lower and upper bounds: the first is the estimated cost to produce the first row of output, the second is the estimated total cost; see the EXPLAIN documentation for details. There is also some planner documentation available. It also helps to know how the different access methods work; as a starting point, Wikipedia has information on nested loop, hash and merge joins.
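
To make that concrete, here is how the numbers in the top node of your plan read (annotation mine):

    Nested Loop Anti Join  (cost=47680.88..169413.12 rows=1 width=77)
    -- cost=startup..total : 47680.88 to produce the first row,
    --                       169413.12 to produce all rows
    -- rows=1              : estimated row count (the suspect here)
    -- width=77            : average row width in bytes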

In your example, you'd start with:

           ->  Seq Scan on options io  (cost=0.00..20223.32 rows=23004 width=36)
                 Filter: (name IS NULL)

Run EXPLAIN ANALYZE SELECT * FROM options WHERE name IS NULL; and see if the returned row count matches the estimate. A factor of 2 off usually isn't a problem; you're trying to spot order-of-magnitude differences.
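
The shape of what to compare (the actual-side numbers are placeholders; only your system can fill them in):

    EXPLAIN ANALYZE SELECT * FROM options WHERE name IS NULL;
    -- Seq Scan on options  (cost=0.00..20223.32 rows=23004 width=36)
    --                      (actual time=... rows=??? loops=1)
    -- Check that rows=??? is within the same order of magnitude
    -- as the rows=23004 estimate.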

Then check whether EXPLAIN ANALYZE SELECT * FROM vehicles WHERE date_sold IS NULL; returns the expected number of rows.

Then go up one level to the hash join:

     ->  Hash Join  (cost=5301.58..29722.32 rows=229 width=40)
           Hash Cond: ((io.lot_id = iv.lot_id) AND ((io.vin)::text = (iv.vin)::text))

See if EXPLAIN ANALYZE SELECT * FROM vehicles AS iv INNER JOIN options io ON (io.lot_id = iv.lot_id) AND ((io.vin)::text = (iv.vin)::text) WHERE iv.date_sold IS NULL AND io.name IS NULL; returns roughly 229 rows.

Going up one more level adds INNER JOIN options co ON (co.fkey_style = iv.chrome_styleid) AND (co.code = io.code), and this is the step expected to return only one row. This is probably where the issue is, because if the actual number of rows goes from 1 to 100, the total cost estimate of traversing the inner loop of the containing nested loop is off by a factor of 100.
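
Assembled from the plan's join conditions, the verification query for that level would look something like this (a sketch; I haven't run it against this schema):

    -- Does this really return ~1 row, as the planner estimated?
    SELECT count(*)
    FROM   vehicles iv
    JOIN   options io  ON io.lot_id = iv.lot_id
                      AND (io.vin)::text = (iv.vin)::text
    JOIN   options co  ON co.fkey_style = iv.chrome_styleid
                      AND co.code = io.code
    WHERE  iv.date_sold IS NULL
      AND  io.name IS NULL;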

The underlying mistake the planner is probably making is assuming that the two join predicates on co are independent of each other and multiplying their selectivities, while in reality they may be heavily correlated, so the combined selectivity is closer to MIN(s1, s2) than to s1*s2.
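
As a toy illustration (the selectivities below are invented purely for the arithmetic):

    -- Suppose s1 = selectivity of (co.fkey_style = iv.chrome_styleid) = 0.01
    --         s2 = selectivity of (co.code = io.code)                 = 0.01
    --
    -- Independence assumption: s1 * s2    = 0.0001 -> ~1 row estimated
    -- Heavy correlation:       MIN(s1,s2) = 0.01   -> ~100 rows in reality
    --
    -- The nested loop then runs its expensive inner side ~100x more
    -- often than the planner budgeted for.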

Ants Aasma
This is a great answer, but which join are you speaking of when you say `joining in co`? I believe RhodiumToad's explanation of the problem because it sounds accurate. Are you explaining the same thing or something different?
Evan Carroll
Same thing. Joining in `co` is the `Index Scan using options_pkey on options co` inner node of the nested loop join. It has two conditions that the planner, probably unrealistically, thinks will result in one row of output. If you run that query and see how many rows it really returns, you can verify whether this is the case. Bad estimates for correlated predicates are a known issue. There's some discussion about this on the performance list: http://archives.postgresql.org/pgsql-performance/2009-06/msg00055.php
Ants Aasma