views: 17

answers: 2
I have a long-running Red Gate script that applies a bunch of schema changes to a database. It is going to take about 3 hours to run. The script will be run on a production database that has mirroring and transaction log shipping in place.

My specific question is: how is transaction log shipping going to be affected by a huge Red Gate-generated script? Log shipping is configured as follows: the log is backed up every 15 minutes to a local drive, shipped to the DR server's drive, applied every 30 minutes, and kept for 60 minutes.

Will it still ship the changes incrementally, or, if it's all one Red Gate transaction, will nothing get shipped until it completes?

My concerns are: 1. that the long-running script won't be affected by the transaction log shipping (given it's going to span several backups), and 2. whether the changes will be shipped incrementally or as one big dump - I thought Red Gate typically used a single transaction so that if the script fails it rolls back everything. I know the log file grows by a total of about 80 GB, so I am trying to ensure there is enough room for the transaction log shipping to store whatever it needs to store.

Thanks!

A: 

You should be able to tell from inspecting the Red Gate script whether it is all one big transaction. Just grep the SQL files for "begin transaction" to get an idea.
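As a sketch of what that grep might look like (the filename `deploy.sql` is just a placeholder for wherever the Red Gate output lives):

```shell
# Count lines containing BEGIN TRANSACTION / COMMIT TRANSACTION in the
# generated script (-c counts matching lines, -i is case-insensitive).
# deploy.sql is a hypothetical name for the Red Gate output file.
grep -ci "begin transaction" deploy.sql
grep -ci "commit transaction" deploy.sql
```

One caveat: these strings can also appear inside stored procedure definitions that the script recreates, so the raw counts can overstate how many real transactions the script itself opens.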

If it is, then your transaction log shipping won't ship it until the transaction has completed and been committed, so it doesn't matter that it spans several backup cycles. I'm pretty sure this is the case - I'm basing this on this article, http://msdn.microsoft.com/en-us/library/ms151706.aspx, which says:

"The distribution database [is] the store-and-forward queue from which changes are sent to Subscribers ... Only committed transactions are sent to the distribution database."

imoatama
Thanks! Running grepWin gave me many instances of "begin transaction" - but it turns out the stored procedure text also includes transactions, so that was a little misleading :) It looks like there is a begin transaction at the end and a commit transaction at the end - but throughout the script is this: IF @@TRANCOUNT=0 BEGIN INSERT INTO #tmpErrors (Error) SELECT 1 BEGIN TRANSACTION END
Jen
I think SQL Compare 8 lets you specify the size of transactions, but we are using version 5 :( I tried to highlight to the DBA that I thought there would be a massive log file to ship, but he disagreed. Transferring 90 gig is going to be fun ;)
Jen
Do you mean a begin transaction at the beginning? (you said end). The BEGIN .. END blocks with the nested BEGIN TRANSACTION throughout seem to be ratcheting the transaction along in case an error occurs, or something like that. For more info on what it might be doing, I recommend you look at these 3 articles:
http://msdn.microsoft.com/en-us/library/ms188929.aspx
http://msdn.microsoft.com/en-us/library/ms190487.aspx
http://msdn.microsoft.com/en-us/library/ms187967.aspx
imoatama
Also, I'd search for occurrences of 'rollback transaction' to better understand the Red Gate code. But yes, it sounds like it's one 80 GB transaction - you might have to get creative about how to do the log shipping if your two servers aren't co-located!
imoatama
A: 

OK, so I made it through my upgrade (yay!) and discovered that it didn't ship the entire thing as one big chunk. From the DBA I got this information:

It doesn't do it as one big chunk... you'll just have bigger TRN files as you go along. The more often you take TRN backups, ship them, and apply them, the smaller you can keep them. However, taking backups obviously requires CPU + I/O... so you don't want to run them continuously.

So whilst I thought the log file would grow to 90 GB and then try to ship some kind of 90 GB file across, it didn't. It just incrementally filled up the transaction log shipping folder, and the 60 GB it had was sufficient for the upgrade :)
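The numbers above can be roughly sanity-checked. This is only a back-of-the-envelope sketch - it assumes the ~80 GB of log is generated evenly over the 3-hour run, which a real schema script almost certainly won't do, so individual TRN files could be much larger:

```python
# Back-of-the-envelope TRN file sizing, assuming even log generation.
total_log_gb = 80          # approximate log growth reported for the run
run_hours = 3
backup_interval_min = 15   # log backed up every 15 minutes
retention_min = 60         # shipped files kept for 60 minutes

backups = run_hours * 60 // backup_interval_min          # number of TRN files
avg_trn_gb = total_log_gb / backups                      # average size of each
# Rough peak in the shipping folder: files retained for 60 minutes,
# i.e. up to retention_min / backup_interval_min files at once.
peak_gb = avg_trn_gb * (retention_min // backup_interval_min)

print(f"{backups} TRN files, ~{avg_trn_gb:.1f} GB each, "
      f"~{peak_gb:.0f} GB peak in the shipping folder")
```

That gives 12 files averaging roughly 6.7 GB each, with on the order of 27 GB sitting in the shipping folder at any one time - comfortably inside the 60 GB that turned out to be sufficient.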

Jen