
So I have been neglecting to do any backups of my FogBugz database, and now the FogBugz .ldf file is over two and a half gigs. That's built up over the six months we've been using FogBugz.

I backed up the database, then backed up and truncated the transaction log, yet the transaction log is still two and a half gigs. I did a shrink on the log file and it's still two and a half gigs. Nothing I do seems to shrink the file.

Is there any way to fix the problem? Or is the only way back at this point to detach the database, delete the log file, and then reattach with a new one?

A: 
  1. Ensure the database's backup mode is set to Simple (see here for an overview of the different modes). This will avoid SQL Server waiting for a transaction log backup before reusing space.

  2. Use dbcc shrinkfile or Management Studio to shrink the log files (a sketch follows below).

Step #2 will do nothing until the backup mode is set.
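
A minimal T-SQL sketch of the two steps, assuming the database is named FogBugz and the log file's logical name is FogBugz_log (check sys.database_files for the actual names):

  ALTER DATABASE FogBugz SET RECOVERY SIMPLE
  GO
  USE FogBugz
  GO
  -- Shrink the log file down to roughly 1 MB; pass the logical file name,
  -- not the physical .ldf path
  DBCC SHRINKFILE (FogBugz_log, 1)
  GO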

Richard
How do I use dbcc shrinkfile in Management Studio? Is that any different from selecting Tasks->Shrink->Files? Because I've done that and the file doesn't get any smaller. Also, I can easily set FogBugz to Simple and just solve the problem, but I feel like I'm missing something here, like the log file should go down in size and it's not.
Jonathan Beerhalter
@WindyCityEagle: I would suggest reading up on SQL Server backups in BOL, where the details are explained. In short: in full backup mode, all transactions are retained in the log file until a backup of the log is performed; only then can that space be reused. This makes sense in many production environments (backups are always fully consistent), but in simpler cases it's too much effort, so simple mode works.
Richard
@WindyCityEagle: "dbcc..." is what you use in SQL (from whatever client), in management studio it is Tasks | Shrink | ... but that's just a GUI wrapper around the dbcc command.
Richard
@Richard: In general you're correct (about how and when the log gets truncated). However, there are things that can prevent the truncate from happening (open transaction, unreplicated transactions, etc.)
Adam Robinson
+1  A: 

Welcome to the fickle world of SQL Server log management.

SOMETHING is wrong, though I don't think anyone will be able to tell you more than that without some additional information. For example, has this database ever been used for Transactional SQL Server replication? This can cause issues like this if a transaction hasn't been replicated to a subscriber.
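
One way to gather that additional information is to ask SQL Server why it is holding on to the log; a quick sketch, assuming the database is named FogBugz:

  -- log_reuse_wait_desc reports what is preventing log truncation
  -- (e.g. LOG_BACKUP, REPLICATION, ACTIVE_TRANSACTION)
  SELECT name, log_reuse_wait_desc
  FROM sys.databases
  WHERE name = 'FogBugz'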

In the interim, this should at least allow you to kill the log file:

  1. Perform a full backup of your database. Don't skip this. Really.
  2. Change the backup method of your database to "Simple"
  3. Open a query window and enter "checkpoint" and execute
  4. Perform another backup of the database
  5. Change the backup method of your database back to "Full" (or whatever it was, if it wasn't already Simple)
  6. Perform a final full backup of the database.

You should now be able to shrink the files (if performing the backup didn't do that for you).
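
The same sequence as a rough T-SQL sketch; the database name FogBugz, the logical log file name FogBugz_log, and the backup paths are placeholders to adjust:

  BACKUP DATABASE FogBugz TO DISK = 'C:\Backups\FogBugz_1.bak'   -- step 1
  GO
  ALTER DATABASE FogBugz SET RECOVERY SIMPLE                     -- step 2
  GO
  USE FogBugz
  GO
  CHECKPOINT                                                     -- step 3
  GO
  BACKUP DATABASE FogBugz TO DISK = 'C:\Backups\FogBugz_2.bak'   -- step 4
  GO
  ALTER DATABASE FogBugz SET RECOVERY FULL                       -- step 5
  GO
  BACKUP DATABASE FogBugz TO DISK = 'C:\Backups\FogBugz_3.bak'   -- step 6
  GO
  DBCC SHRINKFILE (FogBugz_log, 1)                               -- then shrink the log
  GO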

Good luck!

Adam Robinson
Yeah! That is quite fickle indeed.
Jonathan Beerhalter
this technique is not recommended - http://www.sqlskills.com/BLOGS/PAUL/post/Importance-of-proper-transaction-log-size-management.aspx
Nick Kavadias
...which is why I said it can accomplish what he needs in the INTERIM, until he figures out what is preventing his logs from being truncated.
Adam Robinson
A: 

see this answer on a similar question

Christian Hagelid
A: 

The following code always shrinks the truncated log file to the minimum size possible (substitute your database name and the log file's logical name for the placeholders):

  USE DatabaseName
  GO
  DBCC SHRINKFILE (LogicalLogFileName, 1)
  BACKUP LOG DatabaseName WITH TRUNCATE_ONLY
  DBCC SHRINKFILE (LogicalLogFileName, 1)
  GO
Chad Grant
TRUNCATE_ONLY is deprecated, just FYI.
Adam Robinson
A: 

You have to shrink and back up the log several times to get the log file to reduce in size. This is because log file pages cannot be reorganized the way data file pages can be, only truncated. For a more detailed explanation check this out.
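
A sketch of that backup-then-shrink cycle under the full recovery model, with FogBugz, FogBugz_log and the backup paths as placeholders:

  USE FogBugz
  GO
  -- First pass: back up the log, then try to shrink it
  BACKUP LOG FogBugz TO DISK = 'C:\Backups\FogBugz_log_1.trn'
  DBCC SHRINKFILE (FogBugz_log, 1)
  GO
  -- Second pass: the active portion of the log has moved toward the front
  -- of the file, so this shrink can usually reclaim more space
  BACKUP LOG FogBugz TO DISK = 'C:\Backups\FogBugz_log_2.trn'
  DBCC SHRINKFILE (FogBugz_log, 1)
  GO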

detaching the db & deleting the log file is dangerous! don't do this unless you'd like data loss

Nick Kavadias