I am running a bunch of database migration scripts. I find myself with a rather pressing problem: the business is waking up and expects to see its data, and that data has not finished migrating. I also took the applications offline and they really need to be started back up. In reality "the business" is a number of companies, and therefore I have a number of scripts running SPs in one query window, like so:

EXEC [dbo].[MigrateCompanyById] 34
GO
EXEC [dbo].[MigrateCompanyById] 75
GO
EXEC [dbo].[MigrateCompanyById] 12
GO
EXEC [dbo].[MigrateCompanyById] 66
GO

Each SP calls a large number of other sub SPs to migrate all of the data required. I am considering cancelling the query, but I'm not sure at what point the execution will be cancelled. If it cancels nicely at the next GO then I'll be happy. If it cancels midway through one of the company migrations, then I'm screwed.

If I cannot cancel, could I ALTER the MigrateCompanyById SP and comment all the sub SP calls out? Would that also prevent the next one from running, whilst completing the one that is currently running?

Any thoughts?

A: 

One way to achieve a controlled cancellation is to add a table containing a cancel flag. You can set this flag when you want to cancel execution, and your SPs can check it at regular intervals and stop executing if appropriate.
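
For instance, a minimal sketch of the idea (the table, column, and message text are illustrative, not from the original answer):

-- One-row control table that the migration SPs poll.
CREATE TABLE dbo.MigrationControl (CancelRequested bit NOT NULL)
INSERT INTO dbo.MigrationControl (CancelRequested) VALUES (0)
GO

-- Inside each migration SP, between sub SP calls, bail out cleanly if the flag is set.
IF EXISTS (SELECT 1 FROM dbo.MigrationControl WHERE CancelRequested = 1)
BEGIN
    RAISERROR('Migration cancelled by operator.', 16, 1)
    RETURN
END
GO

-- From another connection, request a controlled stop:
UPDATE dbo.MigrationControl SET CancelRequested = 1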

AdamRalph
Excellent idea. A colleague just suggested something similar.
Junto
A: 

I was forced to cancel the script anyway.

When doing so, I noted that it cancels after the currently executing statement, regardless of where that statement is in the SP execution chain.

Junto
A: 

Yes, it will just stop at whichever statement it was on when you cancel it.

Maybe you need to look at optimising those stored procedures/tables/indexes where possible to improve the run time of the job.

Or look at upgrading the hardware, which is probably the most costly option.

kevchadders
Hardware on the clustered SQL Servers is already pretty good: a quad-core 3.2GHz processor, 16GB RAM, 64-bit Windows Server 2003 R2 Enterprise Edition, and 64-bit SQL 2005 Enterprise. I'm looking at optimisation, but it's really the volume of data that I'm moving between the old and new database (different schemas). It is creating 9 GB of transaction log data for every run, and that is being replicated to a remote site at the same time.
Junto
Is there any way you can truncate the logs before the copy over, to see if that helps reduce the file size?
kevchadders
A: 

Are you bracketing the code within each migration stored proc with transaction handling (BEGIN, COMMIT, etc.)? That would enable you to roll back the changes relatively easily depending on what you're doing within the procs.
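
For example, a minimal sketch of that bracketing, assuming each company migration should be all-or-nothing (the sub SP names in the comments are hypothetical):

ALTER PROCEDURE [dbo].[MigrateCompanyById]
    @CompanyId int
AS
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION

        -- Sub SP calls for this company would go here, e.g.
        -- EXEC dbo.MigrateCompanyOrders @CompanyId
        -- EXEC dbo.MigrateCompanyInvoices @CompanyId

        COMMIT TRANSACTION
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0
            ROLLBACK TRANSACTION

        -- Re-raise so the calling script sees the failure (SQL 2005, so RAISERROR rather than THROW)
        DECLARE @err nvarchar(2048)
        SET @err = ERROR_MESSAGE()
        RAISERROR(@err, 16, 1)
    END CATCH
END
GO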

One solution I've seen: you have a table with a single record holding a bit value of 0 or 1. If that record is 0, your production application disallows access by the user population, enabling you to do whatever you need to; you then set the flag to 1 after your task is complete so production can continue. This might not be practical given your environment, but it gives you assurance that no users will be messing with your data through your app until you decide it's ready to be messed with.
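
A minimal sketch of that gating table, with illustrative names (the application, not SQL Server, enforces the check):

-- One-row table the application reads before letting users in.
CREATE TABLE dbo.AppAvailability (IsOnline bit NOT NULL)
INSERT INTO dbo.AppAvailability (IsOnline) VALUES (0)   -- 0 = users locked out during the migration
GO

-- The application would run something like this on entry and refuse access while it returns 0:
SELECT IsOnline FROM dbo.AppAvailability

-- When the migration is finished:
UPDATE dbo.AppAvailability SET IsOnline = 1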

Darth Continent
I was using transactions, but I binned them, mainly because the volume of data was causing massive issues with overall execution time. Your second idea is something that we implemented today.
Junto
A: 

You can use a RAISERROR-based script to report the execution progress of your script. The way you have it now, every sproc is its own transaction, so if you cancel the script you will get a partial update, up to the point of the last successfully executed sproc.

You can, however, put it all in a single transaction if you need an all-or-nothing update.
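
A minimal sketch of that kind of progress reporting, using RAISERROR with severity 0 and WITH NOWAIT so the messages reach the client immediately rather than when the output buffer fills (the message text is illustrative):

RAISERROR('Starting migration for company 34...', 0, 1) WITH NOWAIT
EXEC [dbo].[MigrateCompanyById] 34
RAISERROR('Company 34 done.', 0, 1) WITH NOWAIT

RAISERROR('Starting migration for company 75...', 0, 1) WITH NOWAIT
EXEC [dbo].[MigrateCompanyById] 75
RAISERROR('Company 75 done.', 0, 1) WITH NOWAIT
GO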

Mladen Prajdic
That RAISERROR script is cool. I haven't seen that before.
Junto