Why are batch inserts faster? Is it because the connection and setup overhead for inserting a single row is the same for a set of rows? What other factors make batch inserts faster?

How do batch updates work? Assuming the table has no uniqueness constraints, insert statements don't really have any effect on other insert statements in the batch. However, during batch updates, an update can alter the state of the table and hence can affect the outcome of other update queries in the batch.

I know that batch insert queries have a syntax where you have all the insert values in one big query. What do batch update queries look like? For example, if I have single update queries of the form:

update <table> set <column>=<expression> where <condition1>
update <table> set <column>=<expression> where <condition2>
update <table> set <column>=<expression> where <condition3>
update <table> set <column>=<expression> where <condition4>

What happens when they are used in a batch? What will the single query look like?

And are batch inserts & updates part of the SQL standard?

+8  A: 

Why are batch inserts faster?

For numerous reasons, but the major three are these:

  • The query doesn't need to be reparsed.
  • The values are transmitted in one round-trip to the server.
  • The commands are inside a single transaction (see the sketch below).
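
A minimal sketch of that last point (PostgreSQL-style BEGIN/COMMIT, with the same placeholder values used further down): the server commits once for the whole batch instead of once per statement.

BEGIN;
INSERT INTO mytable VALUES (value1);
INSERT INTO mytable VALUES (value2);
-- … more rows
COMMIT;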

Is it because the connection and setup overhead for inserting a single row is the same for a set of rows?

Partially yes, see above.

How do batch updates work?

This depends on the RDBMS.

In Oracle you can transmit all values as a collection and use this collection as a table in a JOIN.
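
A rough sketch of that approach, assuming :mycol is bound from the client to a collection of a SQL object type with id and name attributes (both names are made up here):

INSERT
INTO    mytable (id, name)
SELECT  t.id, t.name
FROM    TABLE(:mycol) t;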

In PostgreSQL and MySQL, you can use the following syntax:

INSERT
INTO    mytable
VALUES 
        (value1),
        (value2),
        …

You can also prepare a query once and call it in a loop. Client libraries usually provide methods to do this.
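
In PostgreSQL, for instance, the same idea can be written at the SQL level (the id and name columns are made up for illustration); a client library's prepared-statement API does essentially this under the hood:

PREPARE ins (int, text) AS
        INSERT INTO mytable (id, name) VALUES ($1, $2);

EXECUTE ins(1, 'first');
EXECUTE ins(2, 'second');

DEALLOCATE ins;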

Assuming the table has no uniqueness constraints, insert statements don't really have any effect on other insert statements in the batch. But, during batch updates, an update can alter the state of the table and hence can affect the outcome of other update queries in the batch.

Yes, and you may or may not benefit from this behavior.
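
For example (a made-up sketch with an assumed status column), run in this order the second statement also picks up rows the first one has just changed:

UPDATE mytable SET status = 'ready'    WHERE status = 'new';
UPDATE mytable SET status = 'archived' WHERE status = 'ready';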

I know that batch insert queries have a syntax where you have all the insert values in one big query. What do batch update queries look like?

In Oracle, you use a collection in a join:

MERGE
INTO    mytable
USING   TABLE(:mycol)
ON      …
WHEN MATCHED THEN
UPDATE
SET     …

In PostgreSQL:

UPDATE  mytable
SET     s_start = 1
FROM    (
        VALUES
        (value1),
        (value2),
        …
        ) q
WHERE   …
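
A filled-in sketch of the same pattern, with made-up id and new_value columns (mytable.id is assumed to be the key being matched):

UPDATE  mytable
SET     s_start = q.new_value
FROM    (
        VALUES
        (1, 10),
        (2, 20)
        ) AS q (id, new_value)
WHERE   mytable.id = q.id;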
Quassnoi
A: 

In a batch update, the database works against a set of data; in a row-by-row update it has to run the same command as many times as there are rows. So if you insert a million rows in a batch, the command is sent and processed once; in a row-by-row update, it is sent and processed a million times. This is also why you never want to use a cursor or a correlated subquery in SQL Server.

An example of a set-based update in SQL Server:

update mytable
set myfield = 'test'
where myfield is null

This would update all 1 million records that are null in one step. A cursor update (which is how you would update a million rows in a non-batch fashion) would iterate through each row one at a time and update it.
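
By way of contrast, a row-by-row version of the same update with a T-SQL cursor might look roughly like this (the id key column is assumed for illustration):

DECLARE @id int;

DECLARE row_cursor CURSOR FOR
    SELECT id FROM mytable WHERE myfield IS NULL;

OPEN row_cursor;
FETCH NEXT FROM row_cursor INTO @id;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- one statement sent and processed per row
    UPDATE mytable SET myfield = 'test' WHERE id = @id;
    FETCH NEXT FROM row_cursor INTO @id;
END;

CLOSE row_cursor;
DEALLOCATE row_cursor;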

The problem with a batch insert or update is the size of the batch. If you try to update too many records at once, the database may lock the table for the duration of the process, locking all other users out. So you may need to run a loop that takes only part of the batch at a time (though pretty much any chunk larger than one row will still be faster than one row at a time). This is slower than updating, inserting, or deleting the whole batch in one statement, but faster than row-by-row operations, and it may be needed in a production environment with many users and little available downtime when users are not trying to see and update other records in the same table. The size of the batch depends greatly on the database structure and on exactly what is happening (tables with triggers and lots of constraints are slower, as are tables with lots of fields, and so require smaller batches).
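
One common way to chunk a large update in more recent versions of SQL Server is a TOP-limited loop (a sketch; the 10,000-row chunk size is arbitrary):

WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) mytable
    SET    myfield = 'test'
    WHERE  myfield IS NULL;

    IF @@ROWCOUNT = 0 BREAK;
END;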

HLGEM
The idea that large updates will lock the users out is only true either with bad databases or with bad application developers. SQL Server has provided the standard four transaction isolation levels since V7.0; you have to do something outright wrong to block anything by inserting data.
Greg Smith