I am using the SqlBulkCopy class to do a bulk insert into a SQL Server DB.

The original size of the .mdf file associated with the DB is 1508 MB.

When I run it (on the same data of about 4 million records) with:
a BatchSize of 100000, the size of the .mdf grows to 1661 MB;
a BatchSize of 1000000, the size of the .mdf grows to 1659 MB.

Why this variation? Such a small variation is admittedly negligible, but when my tester runs it (on the same data) with a batch size of 100, the .mdf file grows uncontrollably until it uses up all of the 20 GB available to it, and then the load errors out due to lack of available space.

Is this because SqlBulkCopy allocates fixed-size blocks of some kind?
It works fine with BatchSizes > 100000, but I want to understand the root cause of this strange behaviour/bug.
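
For reference, here is a minimal sketch of the kind of call I am making (the destination table name and the connection string are placeholders, not the actual values):

    using System.Data;
    using System.Data.SqlClient;

    class BulkLoader
    {
        // Sketch of the load described above; "dbo.TargetTable" and the
        // connection string are placeholders for the real ones.
        static void BulkLoad(DataTable sourceData, string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                using (var bulkCopy = new SqlBulkCopy(connection))
                {
                    bulkCopy.DestinationTableName = "dbo.TargetTable";
                    bulkCopy.BatchSize = 100000;   // rows sent per batch
                    bulkCopy.BulkCopyTimeout = 0;  // no timeout for a large load
                    bulkCopy.WriteToServer(sourceData);
                }
            }
        }
    }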

A: 

This depends on the growth settings of your database. The default is to grow by 10%, but yours could be set to grow by 1 GB each time it fills up, etc.
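
You can check what your files are currently set to with a query against sys.database_files. A quick sketch (the connection string is a placeholder):

    using System;
    using System.Data.SqlClient;

    class GrowthCheck
    {
        // Reads the autogrowth settings of the current database.
        // is_percent_growth = 1 means "grow by N percent"; otherwise
        // growth is a number of 8 KB pages.
        static void ShowGrowthSettings(string connectionString)
        {
            const string sql =
                "SELECT name, type_desc, is_percent_growth, growth " +
                "FROM sys.database_files";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        string growthText = reader.GetBoolean(2)
                            ? reader.GetInt32(3) + "%"
                            : (reader.GetInt32(3) * 8 / 1024) + " MB"; // pages -> MB
                        Console.WriteLine("{0} ({1}): grows by {2}",
                            reader.GetString(0), reader.GetString(1), growthText);
                    }
                }
            }
        }
    }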

ck
+1  A: 

Your .mdf file is where your data is stored, so it should grow roughly in line with your data load. But as "ck" already pointed out, SQL Server lets you specify a growth pattern, i.e. the .mdf file doesn't grow byte by byte whenever you insert a row. Depending on your settings, it takes "jumps" in size whenever it needs more space.

This has nothing to do with whether you load your data with regular INSERT statements or with SqlBulkCopy.

Marc

marc_s
A: 

Loading data with a bulk load rather than a 'regular' load affects the size of your log file, not the .mdf file (provided the DB is in the simple or bulk-logged recovery model and a few other requirements are met). Are you sure it wasn't the tester's log file that used up all the available space?
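
To rule that out, check the recovery model and the size of the log file. A sketch (the connection string is a placeholder):

    using System;
    using System.Data.SqlClient;

    class LogCheck
    {
        // Shows the recovery model of the current database and the size
        // of its log file(s) in MB (size is stored in 8 KB pages).
        static void ShowRecoveryModelAndLogSize(string connectionString)
        {
            const string sql =
                "SELECT d.recovery_model_desc, f.name, f.size * 8 / 1024 AS size_mb " +
                "FROM sys.databases d " +
                "JOIN sys.master_files f ON f.database_id = d.database_id " +
                "WHERE d.database_id = DB_ID() AND f.type_desc = 'LOG'";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                connection.Open();
                using (var reader = command.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        Console.WriteLine("Recovery model: {0}; log file {1}: {2} MB",
                            reader.GetString(0), reader.GetString(1), reader.GetInt32(2));
                    }
                }
            }
        }
    }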

SQLMenace