I have a long-running Redgate script that applies a bunch of schema changes to a database. It is going to take about 3 hours to run. The script will be run against a production database that has both database mirroring and transaction log shipping in place.
My specific question is: how is transaction log shipping going to be affected by a huge Redgate-generated script? Log shipping is configured as follows:

- log backed up every 15 minutes
- backed up to a local drive
- shipped to the DR server's drive
- applied every 30 minutes
- kept for 60 minutes
Will it still ship the changes incrementally, or, if Redgate wraps the whole script in a single transaction, will nothing be shipped until that transaction completes?
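For what it's worth, I was planning to watch the log backup history while the script runs with something like the query below, to see whether the 15-minute log backups keep picking up data while the script's transaction is still open. This is just a monitoring sketch against the standard msdb backup history tables; `MyProductionDb` is a placeholder for the real database name.

```sql
-- Recent transaction log backups for the database, newest first.
-- If log shipping keeps working during the script, these should keep
-- appearing every 15 minutes with non-trivial sizes.
SELECT TOP (20)
    bs.backup_start_date,
    bs.backup_finish_date,
    CAST(bs.backup_size / 1048576.0 AS decimal(18, 2)) AS backup_size_mb
FROM msdb.dbo.backupset AS bs
WHERE bs.database_name = N'MyProductionDb'   -- placeholder database name
  AND bs.type = 'L'                          -- 'L' = transaction log backup
ORDER BY bs.backup_start_date DESC;
```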
My concerns are:

1. that the long-running script won't be adversely affected by the transaction log shipping (given that it's going to span several log backups), and
2. whether the changes will be shipped incrementally or as one big dump - I thought Redgate typically wraps everything in one transaction so that, if anything fails, it rolls everything back.

I know the log file grows by a total of about 80 GB, so I am trying to make sure there is enough room for the transaction log shipping to store whatever it needs to store.
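To keep an eye on the log growth side of this, I was going to run something like the following during the script. Again, `MyProductionDb` is a placeholder, and `sys.dm_db_log_space_usage` needs SQL Server 2012 or later (otherwise I'd fall back to `DBCC SQLPERF(LOGSPACE)`).

```sql
USE MyProductionDb;   -- placeholder database name

-- Current log file size and how much of it is in use.
SELECT
    total_log_size_in_bytes / 1073741824.0 AS total_log_gb,
    used_log_space_in_bytes / 1073741824.0 AS used_log_gb,
    used_log_space_in_percent
FROM sys.dm_db_log_space_usage;

-- Why the log can't be truncated yet; I'd expect ACTIVE_TRANSACTION
-- here for as long as the script's transaction stays open.
SELECT name, log_reuse_wait_desc
FROM sys.databases
WHERE name = N'MyProductionDb';
```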
Thanks!