I've got a log file on my database that is 303 MB in size with 3.19% used.

I'm pretty sure the transaction volume is limited to a few tables. The database and log are backed up nightly.

How do I assess my log to determine:

  • whether I can shrink the allocated space, and thus the log file
  • how much my log grows daily/weekly

Should I check the log space before I run the transaction log and full database backups?

Is there a way to log the log size in a table each night before/after these jobs run?

+1  A: 

Just run

dbcc sqlperf(logspace)

before the backups, but don't undersize the log: allow for growth and for abnormally busy days. Leaving autogrow configured for the log file won't hurt if you have ample disk space, but you don't want it growing every day or you'll suffer from disk file fragmentation. There's no rule of thumb for sizing the log; it depends entirely on your transaction volume.
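If you do leave autogrow on, a fixed increment (rather than a percentage) keeps the growth events predictable. A minimal sketch, assuming the database is MyDb and the log's logical file name is MyDb_log (check sys.database_files or sp_helpfile for the real name):

-- grow in fixed 256 MB steps instead of a percentage
alter database MyDb
modify file (name = MyDb_log, filegrowth = 256MB)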

If you want to capture the output, build a table to hold the results:

-- columns match the dbcc sqlperf(logspace) result set
create table logspace (name varchar(128), size float, used float, status int)

.. then run this:

insert into logspace exec('dbcc sqlperf(logspace)')

You could add a timestamp column to the table a couple of different ways if you want to track the size over time. I would run it before the backups.
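For example, a minimal sketch of a nightly capture; the table and column names here (logspace_history, captured_at) are just illustrative, and the getdate() default stamps each row as it's inserted:

create table logspace_history (
    captured_at datetime not null default getdate(), -- filled in automatically
    name        varchar(128), -- database name
    size        float,        -- log size (MB)
    used        float,        -- log space used (%)
    status      int
)

insert into logspace_history (name, size, used, status)
exec('dbcc sqlperf(logspace)')

Scheduled as a SQL Agent job step just ahead of the backup jobs, that gives you a nightly history you can query for daily and weekly growth.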

SqlACID