tags:

views:

416

answers:

4

Hi to all,

I have seen that our database uses the full recovery model and has a 3 GB transaction log.

As the log gets larger, how will this affect the performance of the database and the performance of the applications that access the database?

JD

A: 

If the log becomes fragmented or needs to grow, it will slow down the application.

Shiraz Bhaiji
A: 

If you don't clear the transaction log periodically by taking log backups, the log will keep growing until it consumes all of the available disk space.
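
For example, a routine log backup in T-SQL looks roughly like this (the database name and backup path are placeholders; use your own, ideally on a separate drive or backup share):

    -- Back up the transaction log; this lets SQL Server reuse the space
    -- inside the log file so it does not have to keep growing.
    BACKUP LOG YourDatabase
    TO DISK = N'E:\SQLBackups\YourDatabase_log.trn';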

DOK
+1  A: 

In a couple of ways.

If your system is configured to auto-grow the transaction log, every time the file has to grow SQL Server must pause to allocate and zero out the new space, which can slow you down. When you finally do run out of disk space, you're out of luck, and your database will stop taking new transactions.
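
As a starting point, you can check how big the files currently are and what the growth settings look like; a rough sketch (run it in the database in question, the column arithmetic just converts 8 KB pages to MB):

    -- List each file's current size and its autogrowth setting.
    SELECT name,
           type_desc,
           size * 8 / 1024 AS size_mb,
           CASE WHEN is_percent_growth = 1
                THEN CAST(growth AS varchar(10)) + ' %'
                ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
           END AS growth_setting
    FROM sys.database_files;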

You need to get with your DBA (or maybe you are the DBA?) and perform frequent, periodic log backups. Save them off your server onto another dedicated backup system. As you back up the log, the space in your existing log file will be reclaimed, preventing the log from getting much bigger. Backing up the transaction log will also allow you to restore your database to a specific point in time after your last full or differential backup, which significantly cuts your data losses in the event of a server failure.
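
To illustrate the point-in-time part, a restore sequence looks roughly like this (all names, paths and the STOPAT time are placeholders, and you may have several log backups to apply in order):

    -- Restore the last full backup without recovering the database...
    RESTORE DATABASE YourDatabase
    FROM DISK = N'E:\SQLBackups\YourDatabase_full.bak'
    WITH NORECOVERY;

    -- ...then roll the log forward to just before the failure or mistake.
    RESTORE LOG YourDatabase
    FROM DISK = N'E:\SQLBackups\YourDatabase_log.trn'
    WITH STOPAT = '2009-06-15 14:30:00', RECOVERY;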

Dave Markle
+1  A: 

The recommended best practice is to give a SQL Server transaction log file its very own disk or LUN.

This is to avoid fragmentation of the transaction log file on disk, as other posters have mentioned, and also to avoid or minimise disk contention.

The ideal scenario is to have your DBA allocate sufficient log space for your database environment ahead of time, i.e. to allocate, say, x GB of log space in one go. On a dedicated disk this will create a contiguous allocation, thereby avoiding fragmentation.

If you do need to grow your transaction log, again you should endeavour to do so in sizeable chunks so that the new space is allocated contiguously.
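
As a rough sketch of both points, assuming a logical log file name of YourDatabase_log (check sys.database_files for the real name), you might pre-size the log and set a fixed growth increment like this:

    -- Pre-allocate the log in one go rather than relying on many small growths.
    ALTER DATABASE YourDatabase
    MODIFY FILE (NAME = YourDatabase_log, SIZE = 4096MB);

    -- If growth does happen, make each increment a sizeable fixed chunk.
    ALTER DATABASE YourDatabase
    MODIFY FILE (NAME = YourDatabase_log, FILEGROWTH = 512MB);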

You should also avoid shrinking your transaction log file, as repeated shrinking and autogrowth can lead to fragmentation of the log file on disk.
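
You can get an idea of how fragmented the log already is by counting its virtual log files (VLFs); a quick check, run in the database in question:

    -- DBCC LOGINFO returns one row per virtual log file (VLF) in the
    -- transaction log. A very high row count usually means the log has
    -- grown in many small increments.
    DBCC LOGINFO;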

I find it best to think of the autogrowth database property as a failsafe. Your DBA should proactively monitor transaction log space (perhaps by setting up alerts) so that they can increase the transaction log file size ahead of time to support your database usage requirements, while leaving autogrowth in place to ensure that the database can continue to operate normally should unexpected growth occur.
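
A simple way to monitor this is to check log size and percentage used for each database, for example from a scheduled job that raises an alert above a threshold:

    -- Reports log size (MB) and log space used (%) for every database
    -- on the instance; useful as the basis for a monitoring job or alert.
    DBCC SQLPERF(LOGSPACE);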

A larger transaction log file is not in itself detrimental to performance, as SQL Server writes to the log sequentially. Provided you are managing your overall log size and the allocation of additional space appropriately, you should not be concerned.

John Sansom