views: 7379
answers: 10

I have a database that is nearly 1.9 GB in size. MSDE 2000 does not allow DBs that exceed 2.0 GB.

I need to shrink this DB (and many like it at various client locations)

I have found and deleted many hundreds of thousands of records which are considered unneeded. These records account for a large percentage of some of the main (largest) tables in the database. Therefore it's reasonable to assume much space should now be retrievable.

So now I need to shrink the DB to account for the missing records.

I execute "DBCC ShrinkDatabase('MyDB')"......No effect.

I have tried the various shrink facilities provided in MSSMS.... Still no effect.

I have backed up the database and restored it... Still no effect. Still 1.9Gb

Why?

Whatever procedure I eventually find needs to be replayable on a client machine with access to nothing other than OSql or similar.

A: 

You also have to modify the minimum size of the data and log files. DBCC SHRINKDATABASE will shrink the data inside the files you already have allocated. To shrink a file to a size smaller than its minimum size, use DBCC SHRINKFILE and specify the new size.
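
For example, something along these lines (the logical file name "MyDB_Data" and the 500 MB target are placeholders; sp_helpfile lists the actual logical names and current sizes):

-- List the logical file names and current sizes for the current database
EXEC sp_helpfile

-- Shrink the data file down to roughly 500 MB (adjust the target to suit)
DBCC SHRINKFILE (MyDB_Data, 500)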

Otávio Décio
Well, the minimum size is indeed being reported to be the same as the current size. However, I can find no way to reduce this value; DBCC SHRINKFILE appears to be having no effect.
Rory Becker
A: 

You will also need to shrink the individual data files.

It is, however, not a good idea to shrink databases. For example, see here.

no_one
A: 

Try to use the sp_spaceused procedure to check if there is unused space in the tables.
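
A minimal sketch of how that might be called (the 'Projects' table name comes from the comments below; @updateusage forces the figures to be recalculated first):

-- Report reserved, data, index and unused space for one table, refreshing the figures first
EXEC sp_spaceused @objname = 'Projects', @updateusage = 'true'

-- Or for the database as a whole
EXEC sp_spaceused @updateusage = 'true'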

Marek Grzenkowicz
Tried this with a single table, 'Projects':

         name     rows    reserved  data      index_size  unused
Before:  Project  139665  47936 KB  41560 KB  5272 KB     1104 KB
After:   Project  73058   50496 KB  40712 KB  5576 KB     4208 KB

Seems that ~50% fewer rows has next to no effect... WTF!?
Rory Becker
You did use the @updateusage=true parameter, right?
Marek Grzenkowicz
A: 

DBCC SHRINKDATABASE works for me, but this is the full syntax:

DBCC SHRINKDATABASE ( database_name [, target_percent] [, {NOTRUNCATE | TRUNCATEONLY}] )

target_percent

Is the desired percentage of free space left in the database file after the database has been shrunk.

The truncate parameter can be:

NOTRUNCATE

Causes the freed file space to be retained in the database files. If not specified, the freed file space is released to the operating system.

TRUNCATEONLY

Causes any unused space in the data files to be released to the operating system and shrinks the file to the last allocated extent, reducing the file size without moving any data. No attempt is made to relocate rows to unallocated pages. target_percent is ignored when TRUNCATEONLY is used.
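
For example (10 is just an illustrative target percentage, and 'MyDB' is the database name from the question):

-- Shrink, leaving roughly 10% free space in each file
DBCC SHRINKDATABASE (MyDB, 10)

-- Only release the unused space at the end of the files, without moving any data
DBCC SHRINKDATABASE (MyDB, TRUNCATEONLY)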

...and yes, no_one is right, shrinking a database is not very good practice, because, for example:

Shrinking data files is an excellent way to introduce significant logical fragmentation, because it moves pages from the end of the allocated range of a database file to somewhere at the front of the file...

Shrinking a database can have a lot of consequences for the database and the server... think hard about it before you do it!

There are a lot of blogs and articles about it on the web.

Cicik
This will not be a regular thing... I am doing it because other related DBs are nearing 2 GB and this is the limit for MSDE 2000. In theory I'm deleting 50% of the "Project" records and all items relating to them in many other tables... Projects being roughly 50% of the system.
Rory Becker
+1  A: 

Here's another solution: use the Database Publishing Wizard to export your schema, security and data to SQL scripts. You can then take your current DB offline and re-create it with the scripts.

Sounds kind of foolish, but there are a couple of advantages. First, there's no chance of losing data. Your original DB (as long as you don't delete it when dropping it!) is safe, the new DB will be roughly as small as it can be, and you'll have two different snapshots of your current database - one ready to roll, one minified - that you can choose from to back up.
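
If the generated scripts then have to be replayed on a client machine that only has osql, an invocation along these lines may work (the server/instance name and file names are placeholders):

osql -E -S CLIENTSERVER\MSDE -i MyDB_publish.sql -o MyDB_publish.log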

Will
Very interesting. I will try this locally to get an idea of what the minimum size should truly be. However, I need to be able to run this process on several clients' machines, preferably by running a single script rather than an export/import of the database.
Rory Becker
Any idea of the typical ratio of DB size vs. script size?
Rory Becker
I have generated a script for the example 1.3 GB database and it has come out at 9.8 GB. I'm not even sure how to run a script this big.
Rory Becker
That's pretty damn wild. I think you're in a whole other universe of big on this one.
Will
+1  A: 

This may seem bizarre, but it's worked for me and I have written a C# program to automate this.

Step 1: Truncate the transaction log (Back up only the transaction log, turning on the option to remove inactive transactions)

Step 2: Run a database shrink, moving all the pages to the start of the files

Step 3: Truncate the transaction log again, as step 2 adds log entries

Step 4: Run a database shrink again.

My stripped-down code, which uses the SQL-DMO library, is as follows:

SQLDatabase.TransactionLog.Truncate();
SQLDatabase.Shrink(5, SQLDMO.SQLDMO_SHRINK_TYPE.SQLDMOShrink_NoTruncate);
SQLDatabase.TransactionLog.Truncate();
SQLDatabase.Shrink(5, SQLDMO.SQLDMO_SHRINK_TYPE.SQLDMOShrink_Default);
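
For the osql-only scenario in the question, a rough T-SQL equivalent of the same four steps might look like this (a sketch assuming SQL Server 2000 / MSDE 2000, where BACKUP LOG ... WITH TRUNCATE_ONLY is still supported; 'MyDB' is a placeholder database name and 5 is an arbitrary free-space percentage):

-- Step 1: discard the inactive part of the transaction log
BACKUP LOG MyDB WITH TRUNCATE_ONLY
-- Step 2: shrink, compacting pages towards the start of the files but keeping the space allocated
DBCC SHRINKDATABASE (MyDB, 5, NOTRUNCATE)
-- Step 3: truncate the log again, since the shrink itself generates log records
BACKUP LOG MyDB WITH TRUNCATE_ONLY
-- Step 4: shrink again, this time releasing the freed space back to the operating system
DBCC SHRINKDATABASE (MyDB, 5)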
Bork Blatt
+2  A: 

ALTER DATABASE MyDatabase SET RECOVERY SIMPLE
GO
DBCC SHRINKFILE (MyDatabase_Log, 5)
GO
ALTER DATABASE MyDatabase SET RECOVERY FULL
GO

This worked for me. Thanks.
Mac
A: 

Delete the data, make sure the recovery model is simple, then shrink (either shrink database or shrink files works). If the data file is still too big, AND you use heaps to store data -- that is, no clustered index on large tables -- then you might have this problem regarding deleting data from heaps: http://support.microsoft.com/kb/913399
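
If that KB article does apply, a sketch of the usual workarounds (the table name 'Projects' comes from the comments above, while the WHERE clause and the ProjectID column are purely hypothetical): either take a table lock during the delete so the emptied pages are actually deallocated, or put a clustered index on the heap.

-- Option 1: delete with a table lock so the emptied pages are deallocated
DELETE FROM Projects WITH (TABLOCK)
WHERE SomeObsoleteFlag = 1   -- hypothetical filter for the unneeded rows

-- Option 2: turn the heap into a clustered table, which rebuilds and compacts it
CREATE CLUSTERED INDEX IX_Projects_ProjectID ON Projects (ProjectID)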

onupdatecascade
A: 

You should use:

dbcc shrinkdatabase (MyDB)

It will shrink the log file (keep a Windows Explorer window open and watch it happen).

Eduardo
Naturally it won't touch your data, only the log file.
Eduardo
A: 

"Therefore it's reasonable to assume much space should now be retrievable."

Apologies if I misunderstood the question, but are you sure it's the database and not the log files that are using up the space? Check to see what recovery model the database is in. Chances are it's in Full, which means the log file is never truncated. If you don't need a complete record of every transaction, you should be able to change to Simple, which will truncate the logs. You can shrink the database during the process. Assuming things go right, the process looks like:

  1. Back up the database!
  2. Change to Simple Recovery
  3. Shrink the db (right-click the db, choose All Tasks > Shrink Database > set to 10% free space)
  4. Verify that the space has been reclaimed; if not, you might have to do a full backup

If that doesn't work (or you get a message saying "log file is full" when you try to switch recovery modes), try this:

  1. Back up
  2. Kill all connections to the db
  3. Detach the db (right-click > Detach, or right-click > All Tasks > Detach)
  4. Delete the log (.ldf) file
  5. Reattach the db
  6. Change the recovery mode

etc.
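
If the detach/reattach route has to be scripted for osql rather than done through the UI, it can be approximated along these lines (the database name and the .mdf path are placeholders, and the .ldf file still has to be deleted in the file system between the two calls):

-- Detach the database (no other connections may be using it)
EXEC sp_detach_db 'MyDB'

-- ...delete the old .ldf file on disk, then reattach from the data file only;
-- SQL Server will create a new, small log file automatically
EXEC sp_attach_single_file_db 'MyDB', 'C:\Data\MyDB.mdf'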

Tom