I have a temporary table with about 1 million entries. The temporary table stores the result of a larger query. I want to process these records 1000 at a time, for example. What's the best way to set up queries so that I get the first 1000 rows, then the next 1000, and so on? They are not inherently ordered, but the temporary table has just one column with an ID, so I can order it if necessary. I was thinking of adding an extra column to the temporary table to number all the rows, something like:

CREATE TEMP TABLE tmptmp AS
SELECT ##autonumber somehow##, id
FROM .... --complicated query

then I can do:

SELECT * FROM tmptmp WHERE autonumber >= 0 AND autonumber < 1000

and so on. How would I actually accomplish this? Or is there a better way? I'm using Python and PostgreSQL.
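
For concreteness, the ##autonumber somehow## slot can be filled with PostgreSQL's row_number() window function (available since 8.4); a minimal sketch, with the complicated query still elided:

CREATE TEMP TABLE tmptmp AS
SELECT row_number() OVER () AS autonumber, id
FROM .... --complicated query

-- row_number() starts at 1, so the first batch is:
SELECT * FROM tmptmp WHERE autonumber >= 1 AND autonumber <= 1000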

+3  A: 

Use a cursor and fetch the rows you need. OFFSET ... LIMIT will become slow when you have a lot of records; a cursor will do a much better job.

http://www.postgresql.org/docs/8.4/interactive/sql-fetch.html

Frank Heikens
of course. and from Python I just have to do `cur.fetchmany(1000)` instead of `cur.fetchall()` heh.
Claudiu
+1 Yes, that would of course be the better solution (will be deleting mine shortly). In a `dbapi2` compliant database interface in Python you'd indeed be using the std `.execute(sql)` followed by a series of `.fetchmany(1000)` until the cursor is fully consumed.
ChristopheD
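
A minimal sketch of that approach with psycopg2 (the connection string is taken from the answer below; generate_series stands in for the complicated query; a named cursor in psycopg2 is a server-side cursor, so fetchmany() pulls rows over in batches instead of loading all of them at once):

import psycopg2

conn = psycopg2.connect("dbname=database_name user=postgres")

# Build the temp table with a regular client-side cursor.
# generate_series is only a stand-in for the real complicated query.
setup = conn.cursor()
setup.execute("CREATE TEMP TABLE tmptmp AS SELECT g AS id FROM generate_series(1, 1000000) AS g")

# A named cursor maps to a server-side PostgreSQL cursor: rows stay on
# the server and fetchmany() retrieves them 1000 at a time.
curs = conn.cursor(name='tmptmp_reader')
curs.execute("SELECT id FROM tmptmp")

while True:
    batch = curs.fetchmany(1000)
    if not batch:
        break
    for (row_id,) in batch:
        pass  # process each row here

curs.close()
conn.close()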
A: 

Perhaps you could use something like this (we use it when batch-updating a table with 20+ million rows and don't want to hog replication).

import psycopg2
from datetime import datetime

firstid = 0
splitsize = 50  # Size of each batch


# Complicated query
query_complex = """
    CREATE TEMP TABLE tmptmp AS
    SELECT * FROM schema.massive_table
"""
# Query to be run at intervals
query = """
    SELECT * FROM tmptmp WHERE id BETWEEN %(startid)s AND %(endid)s
"""

conn = psycopg2.connect("dbname=database_name user=postgres")
curs = conn.cursor()
# Run complicated query
curs.execute(query_complex)
# Get highest id
curs.execute("SELECT max(id) FROM tmptmp")
maxid = curs.fetchone()[0]
print "Max id: %s" % maxid

for startid in range(firstid, maxid, splitsize):
    endid = startid + splitsize - 1
    print "%s: Running query on range %s to %s" % (datetime.now(), startid, endid)
    curs.execute(query, {'startid': startid, 'endid': endid})
    # curs.fetchall() would retrieve this batch for processing here.
    # Note: on the last batch endid can overshoot maxid, so the reported
    # percentage can exceed 100% (hence the 113.0% in the output below).
    print "%s: Affected rows: %s. Total completed: %s%%" % (datetime.now(), curs.rowcount, round((endid * 100) / maxid, 3))

print "Done."

The output follows:

Max id: 308
2010-06-18 11:59:11.271000: Running query on range 0 to 49
2010-06-18 11:59:11.271000: Affected rows: 49. Total completed: 15.0%
2010-06-18 11:59:11.271000: Running query on range 50 to 99
2010-06-18 11:59:11.271000: Affected rows: 50. Total completed: 32.0%
2010-06-18 11:59:11.271000: Running query on range 100 to 149
2010-06-18 11:59:11.271000: Affected rows: 50. Total completed: 48.0%
2010-06-18 11:59:11.271000: Running query on range 150 to 199
2010-06-18 11:59:11.271000: Affected rows: 49. Total completed: 64.0%
2010-06-18 11:59:11.271000: Running query on range 200 to 249
2010-06-18 11:59:11.271000: Affected rows: 42. Total completed: 80.0%
2010-06-18 11:59:11.271000: Running query on range 250 to 299
2010-06-18 11:59:11.318000: Affected rows: 3. Total completed: 97.0%
2010-06-18 11:59:11.318000: Running query on range 300 to 349
2010-06-18 11:59:11.318000: Affected rows: 1. Total completed: 113.0%
Done.

// John

John P