I'm running a bunch of queries using Python and psycopg2. I create one large temporary table w/ about 2 million rows, then I get 1000 rows at a time from it by using cur.fetchmany(1000)
and run more extensive queries involving those rows. The extensive queries are self-sufficient, though - once they are done, I don't need their results anymore when I move on to the next 1000.
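For reference, the outer loop looks roughly like this (the DSN, table names, and column names are simplified stand-ins, not my real ones):

    import psycopg2

    conn = psycopg2.connect("dbname=mydb")   # placeholder connection string
    cur = conn.cursor()

    # Build the big temporary table once (~2 million rows).
    cur.execute("""
        CREATE TEMPORARY TABLE big_work AS
        SELECT id, payload
        FROM   source_table
    """)
    cur.execute("SELECT id, payload FROM big_work")

    while True:
        rows = cur.fetchmany(1000)
        if not rows:
            break
        run_extensive_queries(conn, rows)    # the heavy per-batch work, sketched at the end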
However, about 1000000 rows in, I got an exception from psycopg2:
psycopg2.OperationalError: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
Funnily enough, this happened when I was executing a query to drop some temporary tables that the more extensive queries created.
Why might this happen? Is there any way to avoid it? It was annoying that this happened halfway through, meaning I have to run it all again. What might max_locks_per_transaction
have to do with anything?
NOTE: I'm not doing any .commit()s, but I'm deleting all the temporary tables I create, and I'm only touching the same 5 tables anyway for each "extensive" transaction, so I don't see how running out of table locks could be the problem...
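To make that note concrete, each batch's work is roughly the following (again with made-up table names; the real queries join against the same 5 tables every time):

    def run_extensive_queries(conn, rows):
        work = conn.cursor()

        # Per-batch temporary table holding the 1000 ids from fetchmany().
        work.execute("CREATE TEMPORARY TABLE batch_ids (id integer)")
        work.executemany("INSERT INTO batch_ids (id) VALUES (%s)",
                         [(r[0],) for r in rows])

        # ...the actual "extensive" queries run here, something like:
        work.execute("""
            SELECT count(*)
            FROM   details d
            JOIN   batch_ids b ON b.id = d.id
        """)
        work.fetchone()

        # Clean up the per-batch temp table before moving on.
        work.execute("DROP TABLE batch_ids")
        work.close()
        # No conn.commit() anywhere, so everything stays inside one open transaction.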