I have loaded a huge CSV dataset (Eclipse's Filtered Usage Data) using PostgreSQL's COPY, and it's taking up a lot of space because it's not normalized: three of the TEXT columns would be much more efficiently refactored into separate lookup tables, referenced from the main table via foreign key columns.

My question is: is it faster to refactor the database after loading all the data, or to create the intended tables with all the constraints, and then load the data? The former involves repeatedly scanning a huge table (close to 10^9 rows), while the latter would involve doing multiple queries per CSV row (e.g. has this action type been seen before? If not, add it to the actions table, get its ID, create a row in the main table with the correct action ID, etc.).

Right now each refactoring step takes roughly a day, and the initial load took about the same time.
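For reference, each refactoring step is essentially the following (table and column names here are made up; the real dataset's columns differ):

    -- Build a lookup table from the distinct values of one TEXT column:
    CREATE TABLE actions (
        id   serial PRIMARY KEY,
        name text NOT NULL UNIQUE
    );
    INSERT INTO actions (name)
    SELECT DISTINCT action FROM usage_data;

    -- Then point the big table at it, which means updating every row:
    ALTER TABLE usage_data ADD COLUMN action_id integer REFERENCES actions;
    UPDATE usage_data u
       SET action_id = a.id
      FROM actions a
     WHERE a.name = u.action;
    ALTER TABLE usage_data DROP COLUMN action;

The UPDATE is what makes each step scan (and rewrite) the whole table.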

A: 

In my experience, you want to get all the data you care about into a staging table in the database and go from there; after that, do as much set-based logic as you can, most likely via stored procedures. When you load into the staging table, don't have any indexes on it. Create the indexes after the data is loaded.
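Something along these lines, say (table and column names are placeholders, and I'm only showing one of the TEXT columns):

    -- Bare staging table: no indexes, no constraints, just the raw CSV columns.
    CREATE TABLE staging_usage (
        action   text,
        recorded timestamptz
    );
    COPY staging_usage FROM '/path/to/usage.csv' WITH (FORMAT csv, HEADER);

    -- Set-based normalization: one pass to build the lookup table,
    -- one pass to build the final table.
    CREATE TABLE actions (
        id   serial PRIMARY KEY,
        name text NOT NULL UNIQUE
    );
    INSERT INTO actions (name)
    SELECT DISTINCT action FROM staging_usage;

    CREATE TABLE events AS
    SELECT a.id AS action_id, s.recorded
      FROM staging_usage s
      JOIN actions a ON a.name = s.action;

    -- Indexes and foreign keys only after the data is in place.
    CREATE INDEX events_action_id_idx ON events (action_id);
    ALTER TABLE events ADD FOREIGN KEY (action_id) REFERENCES actions;

The point is that each of those statements touches the whole dataset in one pass, instead of issuing lookups per CSV row.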

Check out this page for some bulk-loading tips: http://www.postgresql.org/docs/9.0/interactive/populate.html
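In particular, things like (the value here is only an example):

    -- More memory for CREATE INDEX and ADD FOREIGN KEY after the load:
    SET maintenance_work_mem = '1GB';
    -- That page also covers using COPY, dropping indexes and FK constraints
    -- during the load, and raising checkpoint_segments in postgresql.conf.
    -- Once everything is loaded and indexed, refresh the planner statistics:
    ANALYZE;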

StarShip3000
Thanks, StarShip. I've been perusing the pgsql documentation but have not come across that page yet.
hircus
I loaded the data with COPY, before any indexes and foreign key constraints were added, so there's no problem there. The slow steps I was referring to involve normalizing the table. I take it there's no avoiding that... ah well.
hircus