There is a SQL Server 2000 database we have to update over the weekend.

Its size is almost 10 GB.

The updates range from schema changes and primary key updates to a few million records being updated, corrected, or inserted.

The weekend is hardly enough for the job.

We set up a dedicated server for the job, put the database in SINGLE_USER mode, and made every optimization we could think of: dropping/recreating indexes, relations, etc.

Can you propose anything to speed up the process?

SQL Server 2000 is not negotiable (not my decision). Updates are run through a custom-made program, not BULK INSERT.

EDIT:

Schema updates are done via Query Analyzer T-SQL scripts (one script per version update).

Data updates are done by a C# .NET 3.5 app.

The data come from a bunch of text files (with many problems) and are written to the local DB.

The computer is not connected to any network.

A: 

What exactly does this "custom made program" look like? i.e. how is it talking to the data? Minimising the amount of network IO (from a db server to an app) would be a good start... typically this might mean doing a lot of work in TSQL, but even just running the app on the db server might help a bit...

If the app is re-writing large chunks of data, it might still be able to use a bulk insert to submit the new table data, either via the command line (bcp, etc.) or through code (SqlBulkCopy in .NET). This will typically be much quicker than individual inserts.
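To give an idea of what the SqlBulkCopy route might look like from the .NET 3.5 app - this is only a sketch, and the destination table name, batch size and the idea of loading from a pre-built DataTable are assumptions, not something from the question:

```csharp
using System.Data;
using System.Data.SqlClient;

// Sketch: push a pre-built DataTable to the server in one bulk operation
// instead of issuing row-by-row INSERTs. "dbo.ImportWork" is a made-up
// destination table; adjust names and batch size to the real schema.
static void BulkLoad(string connectionString, DataTable rows)
{
    using (SqlConnection conn = new SqlConnection(connectionString))
    {
        conn.Open();
        using (SqlBulkCopy bulk = new SqlBulkCopy(conn))
        {
            bulk.DestinationTableName = "dbo.ImportWork"; // assumed table name
            bulk.BatchSize = 10000;   // commit in chunks to keep the log manageable
            bulk.BulkCopyTimeout = 0; // no timeout for a large load
            bulk.WriteToServer(rows);
        }
    }
}
```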

But it really depends on this "custom made program".

Marc Gravell
+2  A: 

Although dropping excess indexes may help, make sure you keep the indexes that will let your upgrade script quickly find the rows it needs to update.

Otherwise, make sure you have plenty of memory in the server (although SQL Server 2000 Standard is limited to 2 GB), and if need be, pre-grow your MDF and LDF files to accommodate the expected growth.
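As a rough illustration of the pre-grow step, something like this run once before the load avoids repeated autogrow pauses. The database name, logical file names and sizes are placeholders - check the real ones with sp_helpfile first - and the same statements could just as well be pasted into Query Analyzer:

```csharp
using System.Data.SqlClient;

// Sketch: pre-grow the data and log files before the big load so the engine
// does not stop repeatedly to autogrow. All names and sizes are placeholders.
static void PreGrowFiles(string connectionString)
{
    const string sql = @"
        ALTER DATABASE MyDb MODIFY FILE (NAME = N'MyDb_Data', SIZE = 15000MB);
        ALTER DATABASE MyDb MODIFY FILE (NAME = N'MyDb_Log',  SIZE = 5000MB);";

    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand(sql, conn))
    {
        cmd.CommandTimeout = 0; // growing large files can take a while
        conn.Open();
        cmd.ExecuteNonQuery();
    }
}
```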

If possible, your custom program should process updates as sets instead of row by row.

EDIT: Ideally, try to identify which operation is causing the poor performance. If it's the schema changes, it could be because you're making a column larger and causing a lot of page splits. However, page splits can also happen when inserting and updating, for the same reason - the row no longer fits on the page.

If your C# application is the bottleneck, could you load the changes into a staging table first (before your maintenance window), and then perform a single set-based update against the actual tables? A single update of 1 million rows will be far more efficient than the application making 1 million individual update calls. Admittedly, if this has to happen this weekend, you might not have much time to set it up.
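A rough sketch of that pattern, assuming the corrected values have already been bulk-loaded into a staging table - all the table and column names here are invented for illustration:

```csharp
using System;
using System.Data.SqlClient;

// Sketch: after the staging table has been bulk loaded (e.g. with SqlBulkCopy),
// one joined UPDATE replaces a million individual UPDATE calls from the app.
// dbo.Customer, dbo.CustomerFix and the column names are all made up.
static void ApplyStagedCorrections(string connectionString)
{
    const string sql = @"
        UPDATE c
        SET    c.Name    = s.Name,
               c.Balance = s.Balance
        FROM   dbo.Customer    AS c
        JOIN   dbo.CustomerFix AS s
               ON s.CustomerId = c.CustomerId;";

    using (SqlConnection conn = new SqlConnection(connectionString))
    using (SqlCommand cmd = new SqlCommand(sql, conn))
    {
        cmd.CommandTimeout = 0; // a large set-based update can run for a while
        conn.Open();
        int rows = cmd.ExecuteNonQuery();
        Console.WriteLine("Rows corrected: " + rows);
    }
}
```

An index on the staging table's key column will help that join, which ties back to the earlier point about keeping the indexes the update actually needs.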

Jim McLeod