Nicholas,
Please note that, in Postgres, the default behaviour for temporary tables is ON COMMIT PRESERVE ROWS: the table and its data survive a COMMIT, and the table is only dropped automatically at the end of the session. See the ON COMMIT clause of CREATE TABLE.
There are several considerations to take into account:
- Will the temporary table survive the current transaction? If you do not want it to, either explicitly DROP the table just before committing, or create it with the CREATE TEMPORARY TABLE ... ON COMMIT DROP syntax.
- While the temporary table is in use, how much of it will fit in memory before spilling to disk? See the temp_buffers setting in postgresql.conf. It can also be raised with SET, but only before the session has touched any temporary table.
- Anything else to worry about when working with temp tables often? Creating and dropping them churns the system catalogs (pg_class, pg_attribute, and friends), leaving dead tuples behind. With the default settings, autovacuum (which wakes up every minute, per autovacuum_naptime) cleans the catalogs up for you. Note, however, that autovacuum cannot process the contents of temporary tables themselves, so a long-lived temp table that sees many updates or deletes should be VACUUMed manually.
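To illustrate the first two points, here is a minimal sketch; the table and column names are made up for the example:

```sql
-- Raise per-session memory for temp tables. This must happen
-- before the session touches any temporary table.
SET temp_buffers = '256MB';

BEGIN;

-- The table (and its rows) disappear automatically at COMMIT.
CREATE TEMPORARY TABLE staging_orders (
    order_id  bigint,
    amount    numeric
) ON COMMIT DROP;

INSERT INTO staging_orders VALUES (1, 9.99);

COMMIT;  -- staging_orders no longer exists here
```

If you need the table to outlive individual transactions but want it gone without an explicit DROP, ON COMMIT PRESERVE ROWS (the default) plus session end achieves that.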
Also, unrelated to your question (but possibly related to your project): keep in mind that, if you have to run queries against a temp table after you have populated it, it is a good idea to create appropriate indexes and issue an ANALYZE on the temp table once you are done inserting into it. Without statistics, the cost-based planner falls back on a default size estimate (on the order of a thousand rows), which can result in poor plans should the temp table actually contain millions of rows.
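Concretely, the populate-index-analyze sequence looks like this (again with made-up table and column names):

```sql
-- Populate the temp table in one shot.
CREATE TEMPORARY TABLE staging_orders AS
    SELECT order_id, amount
    FROM orders
    WHERE created_at >= now() - interval '1 day';

-- Index the column(s) your later queries will filter or join on.
CREATE INDEX ON staging_orders (order_id);

-- Give the planner real statistics instead of its default guess.
ANALYZE staging_orders;
```

Creating the index after the bulk load (rather than before) is also the cheaper order of operations, since the index is built once instead of being maintained row by row.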
Cheers,
V.