tags:

views: 47

answers: 3

Hi

What is the best way to delete a large, binary column from a SQL Server 2005 database yet minimise the size of the resulting .mdf and .ldf files?

Our app used to allow users to upload their documents. These were stored as BLOBs in a column in a documents table. Over time, this table grew to be >90% of the overall database size.

We have since changed things and no longer need to store documents like this. I'd now like to zero the data in this column and get the database back to a manageable size (disk space is at a bit of a premium). Being a legacy app, I'd like to maintain compatibility and not change the structure of the table.

The most obvious way is to do something like:

update documents set content = 0x0

But it seems to blow the .ldf out by a huge amount.
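One common way to keep the log from ballooning during a mass update like this is to do it in small batches, so the log space from each committed batch can be reused. A minimal sketch, assuming the database is in the SIMPLE recovery model (the batch size of 1000 is arbitrary):

```sql
-- Zero the column in small batches so log space can be reused
-- between transactions (assumes SIMPLE recovery; batch size is arbitrary).
DECLARE @rows INT;
SET @rows = 1;
WHILE @rows > 0
BEGIN
    UPDATE TOP (1000) documents
    SET content = 0x0
    WHERE DATALENGTH(content) > 1;  -- skip rows already zeroed
    SET @rows = @@ROWCOUNT;
    CHECKPOINT;  -- in SIMPLE recovery, lets the log space be reused
END;
```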

Addressing that particular symptom, I could then either truncate the log (backup log mydb with truncate_only) or perhaps try a dbcc shrinkdatabase(mydb) or a dbcc shrinkfile(mydb_log, 20), but I've heard these can cause nasty fragmentation, etc. and might not be best in the long term.
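For reference, a minimal version of that log-shrinking sequence on 2005 (database name and logical log-file name are the ones from the question) would be:

```sql
-- Truncate the log, then shrink the log file to ~20 MB.
-- TRUNCATE_ONLY breaks the log backup chain, so take a full
-- database backup afterwards if you rely on log backups.
BACKUP LOG mydb WITH TRUNCATE_ONLY;
DBCC SHRINKFILE (mydb_log, 20);
```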

Would it be better to create a second table with all but the content column, copy the data across and then truncate the first?
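A sketch of that copy-and-truncate approach (the column names other than content are hypothetical; the point is that TRUNCATE TABLE is minimally logged, which is what keeps the .ldf small):

```sql
-- Copy everything except the blob data to a holding table.
SELECT id, filename
INTO #docs
FROM documents;

-- Minimally logged, unlike DELETE.
TRUNCATE TABLE documents;

-- Re-insert the rows with the content zeroed out.
-- (If id is an IDENTITY column, wrap this in SET IDENTITY_INSERT ON/OFF.)
INSERT INTO documents (id, filename, content)
SELECT id, filename, 0x0
FROM #docs;

DROP TABLE #docs;
```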

Any thoughts would be appreciated.

Cheers

A: 

Yes, your idea to create a second table from the first, minus the content column, and then truncate the original table would probably be the most efficient approach.

Ron

Ron Savage
+2  A: 

try dropping the column:

ALTER TABLE foo
DROP COLUMN name
northpole
and truncate the log
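Worth noting: on SQL Server 2005, dropping a variable-length column only marks it as dropped in metadata, and the pages keep the space until DBCC CLEANTABLE runs. A sketch using the names from the question:

```sql
ALTER TABLE documents DROP COLUMN content;
-- Reclaim the space the dropped varbinary column still occupies.
DBCC CLEANTABLE ('mydb', 'documents');
```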
jle
A: 

From experimentation, there doesn't seem to be a magic bullet for this.

Both suggestions will let me delete the large content without a massive increase in .ldf size; however, the only way to reduce the size of the .mdf afterwards is something like dbcc shrinkdatabase, which comes at the cost of page fragmentation.
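If you do go the shrink route, one common mitigation (a sketch; the logical data-file name and target size are assumptions) is to shrink the data file and then rebuild the indexes to undo the fragmentation, accepting that the rebuild will grow the file back somewhat:

```sql
DBCC SHRINKFILE (mydb_data, 200);  -- target size in MB
-- Rebuilding repairs the fragmentation the shrink introduced,
-- but needs free space in the file to work in.
ALTER INDEX ALL ON documents REBUILD;
```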