I need to import a CSV file with tens of thousands of rows into a Postgres database daily. I'm looking for the most efficient way to do that, since each line in the CSV can be either a new record or an existing one that should be updated if it's already there. After a lot of searching, I stumbled upon a solution, which I used:
CREATE OR REPLACE RULE insert_on_duplicate_update_advertiser_campaign_keywords_table AS
ON INSERT TO advertiser_campaign_keywords
WHERE (new.phrase, new.match_type, new.advertiser_campaign_id) IN (
        SELECT phrase, match_type, advertiser_campaign_id
        FROM advertiser_campaign_keywords
        WHERE phrase = new.phrase
          AND match_type = new.match_type
          AND advertiser_campaign_id = new.advertiser_campaign_id
          AND state != 'deleted')
DO INSTEAD
    UPDATE advertiser_campaign_keywords
    SET bid_price_cpc = new.bid_price_cpc
    WHERE phrase = new.phrase
      AND match_type = new.match_type
      AND advertiser_campaign_id = new.advertiser_campaign_id;
This is the closest I've come to a working solution, but it's not complete. It fails on inserts that repeat the same key within a single statement, like this one (note that 'two words' and 'three words exact' each appear twice):
INSERT INTO advertiser_campaign_keywords (phrase, bid_price_cpc, match_type, advertiser_campaign_id) VALUES
('dollar', 1::text::money, 'Broad', 1450),
('two words', 1.2::text::money, 'Broad', 1450),
('two words', 1.0::text::money, 'Broad', 1450),
('three words exact', 2.5::text::money, 'Exact', 1450),
('four words broad match', 1.1::text::money, 'Exclusive', 1450),
('three words exact', 2.1::text::money, 'Exact', 1450);
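My guess is that the rule's subquery only sees rows already in the table, so when the same key appears twice in one multi-row INSERT, both rows pass the check as plain inserts and the second one trips the constraint. One idea I had was to deduplicate each batch before it hits the table, along these lines (just a sketch; which duplicate wins is arbitrary without an ORDER BY):

INSERT INTO advertiser_campaign_keywords (phrase, bid_price_cpc, match_type, advertiser_campaign_id)
SELECT DISTINCT ON (phrase, match_type, advertiser_campaign_id)
       phrase, bid_price_cpc, match_type, advertiser_campaign_id
FROM (VALUES
    ('two words', 1.2::text::money, 'Broad', 1450),
    ('two words', 1.0::text::money, 'Broad', 1450)
) AS v (phrase, bid_price_cpc, match_type, advertiser_campaign_id);

But that only papers over the duplicates-in-one-batch case, and I'm not sure it's the right fix.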
The error message is:
duplicate key value violates unique constraint "unique_phrase_campaign_combo"
unique_phrase_campaign_combo looks like:
CONSTRAINT "unique_phrase_campaign_combo" UNIQUE ("phrase", "advertiser_campaign_id", "match_type", "deleted_at")
deleted_at is null unless the record is marked as deleted.
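If it matters, I'm open to restructuring the whole import. For example, I've been wondering whether I should drop the rule and instead load the CSV into a staging table with COPY, then run an UPDATE followed by an INSERT of whatever is left. Roughly like this (staging_keywords and the file path are placeholders I made up; the column types are guesses from my real table):

-- hypothetical staging table matching the CSV layout
CREATE TEMP TABLE staging_keywords (
    phrase text,
    bid_price_cpc money,
    match_type text,
    advertiser_campaign_id integer
);

-- server-side load; \copy from psql would work if the file lives on the client
COPY staging_keywords FROM '/path/to/daily.csv' WITH CSV;

-- update live rows that already exist
UPDATE advertiser_campaign_keywords k
SET bid_price_cpc = s.bid_price_cpc
FROM staging_keywords s
WHERE k.phrase = s.phrase
  AND k.match_type = s.match_type
  AND k.advertiser_campaign_id = s.advertiser_campaign_id
  AND k.deleted_at IS NULL;

-- insert the keys that are still missing, deduplicated
INSERT INTO advertiser_campaign_keywords (phrase, bid_price_cpc, match_type, advertiser_campaign_id)
SELECT DISTINCT ON (s.phrase, s.match_type, s.advertiser_campaign_id)
       s.phrase, s.bid_price_cpc, s.match_type, s.advertiser_campaign_id
FROM staging_keywords s
WHERE NOT EXISTS (
    SELECT 1
    FROM advertiser_campaign_keywords k
    WHERE k.phrase = s.phrase
      AND k.match_type = s.match_type
      AND k.advertiser_campaign_id = s.advertiser_campaign_id
      AND k.deleted_at IS NULL
);

But I don't know whether that's safe against concurrent imports, or whether there's a more idiomatic pattern.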
Does anyone know how I can solve this problem?
Thanks