I'm storing data in a varbinary(max) column and, for client performance reasons, chunking writes through the .WRITE() function in SQL Server 2005. This works great, but I want to avoid the side effect of the varbinary column dynamically resizing during each append.

What I'd like to do is optimize this by pre-allocating the varbinary column to the size I want. For example if I'm going to drop 2MB into the column I would like to 'allocate' the column first, then .WRITE the real data using offset/length parameters.

Is there anything in SQL that can help me here? Obviously I don't want to send a null byte array to the SQL server, as this would partially defeat the purpose of the .WRITE optimization.
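For reference, the chunked-append pattern described above looks something like this (a sketch only; the table, column, and parameter names are illustrative, not from the question):

```sql
-- Illustrative schema; the real table/column names are not given in the question.
CREATE TABLE dbo.BlobStore (Id int PRIMARY KEY, Payload varbinary(MAX) NOT NULL);

DECLARE @Id int, @Chunk varbinary(MAX);
SET @Id = 1;
SET @Chunk = 0x0102030405;  -- one chunk of data sent from the client

-- Passing NULL as the offset appends the chunk at the end of the value.
-- Each such call grows the varbinary, which is the dynamic resizing
-- the question wants to avoid.
UPDATE dbo.BlobStore
SET Payload.WRITE(@Chunk, NULL, 0)
WHERE Id = @Id;
```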

+2  A: 

If you're using a (MAX) data type, then anything above 8K goes into row-overflow storage, not in-page storage. So you just need to put in enough data to reach the 8K for the row, which takes up the in-page allocation; the rest goes into row-overflow storage anyway. There's some more here.

If you want to pre-allocate everything, including the row-overflow data, you can use something akin to the following (this example allocates 10,000 bytes):

-- Convert to varchar(MAX) first so REPLICATE is not silently truncated at 8,000 bytes
SELECT CONVERT([varbinary](MAX), REPLICATE(CONVERT([varchar](MAX), '0'), 10000))
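Putting the two pieces together, the pre-allocate-then-overwrite pattern might look like this (a sketch under the same assumptions as above; the table and variable names are not from the thread, and the chunk values would really come from the client):

```sql
DECLARE @Id int, @Size int, @Chunk varbinary(MAX), @Offset bigint;
SET @Id = 1;
SET @Size = 2097152;        -- 2 MB target size
SET @Chunk = 0x0102030405;  -- one chunk of client data
SET @Offset = 0;            -- where this chunk belongs in the value

-- Step 1: allocate the full value once, so later writes never grow it.
UPDATE dbo.BlobStore
SET Payload = CONVERT(varbinary(MAX), REPLICATE(CONVERT(varchar(MAX), '0'), @Size))
WHERE Id = @Id;

-- Step 2: overwrite each region in place. Specifying the offset and a
-- length equal to the chunk size replaces existing bytes rather than
-- appending, so the value stays at its pre-allocated size.
UPDATE dbo.BlobStore
SET Payload.WRITE(@Chunk, @Offset, DATALENGTH(@Chunk))
WHERE Id = @Id;
```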
Matt Whitfield
This seems like it'll work well, but I've never used REPLICATE with a large amount of data. Do you know how it performs when you're replicating to 1MB or more?
JD Conley
No, I haven't really done a lot of testing with it, to be honest, but it's the only way I can see to simply achieve what you need...
Matt Whitfield