
When your statement (not even the transaction) completes, all your indexes are up to date. When you commit, all the changes become permanent and all locks are released. Doing otherwise would not be "intelligence"; it would violate integrity and possibly cause errors.

Edit: by "integrity" I mean this: once committed, the data should be immediately available to anyone. If the indexes are not up-to-date at that moment, someone may get incorrect results.

As you increase the batch size, performance initially improves, then it starts to degrade. You need to run your own benchmarks and find your optimal batch size. Similarly, you need to benchmark whether it is faster to drop and recreate the indexes or not.
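For illustration, here is a minimal sketch of one way to time such batches. The table dbo.BigTable, its IsObsolete flag, and the batch size are placeholders, not anything from the question; vary @BatchSize between runs and compare the timings:

-- delete in batches; each DELETE is its own autocommit transaction
DECLARE @BatchSize INT = 10000; -- vary this value between runs
DECLARE @Rows INT = 1;
DECLARE @Start DATETIME2 = SYSUTCDATETIME();

WHILE @Rows > 0
BEGIN
    DELETE TOP (@BatchSize) FROM dbo.BigTable
    WHERE IsObsolete = 1;
    SET @Rows = @@ROWCOUNT; -- stop when no rows qualify
END;

SELECT DATEDIFF(MILLISECOND, @Start, SYSUTCDATETIME()) AS ElapsedMs;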

Edit: if you insert/update/delete batches of rows in one statement, your indexes are modified once per statement. The following script demonstrates that:

CREATE TABLE dbo.Num(n INT NOT NULL PRIMARY KEY);
GO
INSERT INTO dbo.Num(n)
SELECT 0
UNION ALL
SELECT 1;
GO
-- 0 updates to 1, 1 updates to 0
UPDATE dbo.Num SET n = 1-n;
GO
-- doing it row by row would fail no matter which row you start with:
-- the first single-row UPDATE produces a value that collides with the
-- other row's primary key before the second UPDATE can fix it
UPDATE dbo.Num SET n = 1-n WHERE n=0; -- fails: 0 becomes 1, but 1 already exists
UPDATE dbo.Num SET n = 1-n WHERE n=1; -- fails the same way in reverse
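
As a further illustration (a sketch, not part of the original script): using the same dbo.Num table, the primary key index already reflects an update inside an open, uncommitted transaction; it is the COMMIT (or ROLLBACK) that releases the locks, not the end of the statement.

BEGIN TRANSACTION;
UPDATE dbo.Num SET n = 2 WHERE n = 1;
-- still inside the open transaction, a lookup on the primary key
-- already sees the new value:
SELECT n FROM dbo.Num WHERE n = 2; -- returns 2
ROLLBACK; -- undo the change; locks are held until this point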
AlexKuznetsov
I would have thought data integrity didn't apply to an index. Is this only in the case of unique constraints?
Adam
If the index doesn't have integrity it's pretty much useless. I'm not sure what would happen if SQL used an index to find a record that SHOULD be there but that record was not. I'm guessing you would get false negatives, which pretty much defeats the purpose of ACID.
JNK
So the index is maintained on a row-by-row basis as the values are inserted/updated? I.e., they automatically land in the correct location based on any indexes, and the indexes are amended where needed? So even if row inserts are batched together, there should be no discernible difference versus multiple smaller inserts? Come to think of it, I suppose there is overhead with committed index changes and table locking...
Adam
@Adam - I think you can offset some of the overhead with locking hints. I normally do batch updates because they are faster (if you do as Alex says and tweak the batch size by experimenting) and because if you have a failure you only lose the current batch, not everything that came before it.
JNK
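
What JNK describes might look something like this sketch; the table dbo.BigTable, the Amount and Processed columns, and the batch size are made-up placeholders, and TABLOCK is just one possible hint. In autocommit mode each UPDATE commits on its own, so an error only rolls back the batch that failed:

DECLARE @Done BIT = 0;
WHILE @Done = 0
BEGIN
    BEGIN TRY
        -- each batch is its own autocommit transaction
        UPDATE TOP (5000) t
        SET t.Amount = t.Amount * 1.1,
            t.Processed = 1 -- mark the row so the next batch skips it
        FROM dbo.BigTable AS t WITH (TABLOCK)
        WHERE t.Processed = 0;
        IF @@ROWCOUNT = 0 SET @Done = 1;
    END TRY
    BEGIN CATCH
        -- only the failed batch rolls back; earlier batches stay committed
        PRINT ERROR_MESSAGE();
        SET @Done = 1;
    END CATCH
END;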
@Alex nice explanation on the per-statement part. I suspected this was the case, but never really considered that after your statement finishes, even though your script may not have finished, someone else could come in concurrently and access the data.
Adam