Hello good people. In PostgreSQL, I have a query like the following, which will delete 250k rows from a 1M-row table:
DELETE FROM table WHERE key = 'needle';
The query takes over an hour to execute, and during that time the affected rows are locked for writing. That is a problem, because it means a lot of UPDATE queries have to wait for the big DELETE to complete (they will then fail because the rows disappeared out from under them, but that is acceptable). I need a way to split this big query into multiple parts so that they interfere with the UPDATE queries as little as possible. For example, if the DELETE could be split into chunks of 1000 rows each, then the other UPDATE queries would at most have to wait for a DELETE involving 1000 rows.
DELETE FROM table WHERE key = 'needle' LIMIT 1000;
That query would work nicely, but alas, DELETE ... LIMIT does not exist in PostgreSQL.
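A workaround I have seen suggested (just a sketch, using the placeholder names table and key from above; note that table itself is a reserved word, so substitute the real table name) is to smuggle the LIMIT in through a subquery on the system column ctid:

DELETE FROM table
WHERE ctid IN (
    SELECT ctid
    FROM table
    WHERE key = 'needle'
    LIMIT 1000
);

Re-running that statement until it reports DELETE 0 should remove all matching rows in 1000-row batches, each committing and releasing its row locks separately; the same subquery trick works with the table's primary key instead of ctid. To keep the loop server-side, I believe something like this PL/pgSQL block works on PostgreSQL 11 or later (the COMMIT between batches requires transaction control in DO blocks and must run outside an explicit transaction):

DO $$
DECLARE
    batch_size constant integer := 1000;  -- rows per chunk; tune as needed
    deleted integer;
BEGIN
    LOOP
        -- delete one chunk of matching rows
        DELETE FROM table
        WHERE ctid IN (
            SELECT ctid
            FROM table
            WHERE key = 'needle'
            LIMIT batch_size
        );
        GET DIAGNOSTICS deleted = ROW_COUNT;
        EXIT WHEN deleted = 0;
        COMMIT;  -- release this chunk's row locks before starting the next one
    END LOOP;
END
$$;

One caveat I am aware of: under heavy concurrent updates a batch can miss rows whose ctid changed mid-statement, though the loop would pick them up on a later pass. Is this the right approach, or is there a better way to chunk a big DELETE in PostgreSQL?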