I am working on a SQL job that involves processing around 75,000 records.

Now, the job runs fine for the first 10,000-20,000 records at a speed of around 500 records/min. After around 20,000 records, execution just dies: it loads only around 3,000 records every 30 minutes and stays at that speed.

I asked a similar question yesterday and got a few good suggestions on procedure changes. Here's the link: http://stackoverflow.com/questions/1521692/sql-server-procedure-inconsistent-performance

I am still not sure how to find the real issue here. Here are a few of the questions I have:

  1. If the problem is tempdb, how can I monitor activity on tempdb?
  2. How can I check whether the network is the bottleneck?
  3. Are there any other ways to find out what is different between when the job is running fast and when it slows down? (A sketch of the DMV queries I have in mind follows this list.)
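
For reference, this is the sort of DMV query I can run (SQL Server 2005 or later), though I'm not sure I am interpreting the output correctly:

    -- Question 1: rough look at what is using space in tempdb right now.
    SELECT  SUM(user_object_reserved_page_count)     * 8 AS user_objects_kb,
            SUM(internal_object_reserved_page_count) * 8 AS internal_objects_kb,
            SUM(version_store_reserved_page_count)   * 8 AS version_store_kb,
            SUM(unallocated_extent_page_count)       * 8 AS free_space_kb
    FROM    tempdb.sys.dm_db_file_space_usage;

    -- Questions 2 and 3: top waits since the last restart. If ASYNC_NETWORK_IO
    -- dominates, the client/network side is the likely bottleneck.
    SELECT TOP (10)
            wait_type,
            wait_time_ms,
            waiting_tasks_count
    FROM    sys.dm_os_wait_stats
    ORDER BY wait_time_ms DESC;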
+2  A: 

I have been the administrator for a couple of large data warehouse implementations where this type of issue was common. Although I can't be sure of it, it sounds like your server's performance is being degraded either by growing log files or by memory usage. A great tool for reviewing these types of issues is Perfmon.

A great article on using this tool can be found here.
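
If Perfmon feels like a lot to take in at first, here is a rough T-SQL sketch that checks the same two suspects (assuming SQL Server 2005 or later; the object name in the second query assumes a default instance, and a named instance shows up as 'MSSQL$InstanceName:Memory Manager'):

    -- Log file size and percent used for every database.
    DBCC SQLPERF (LOGSPACE);

    -- How much memory SQL Server is currently using versus how much it wants.
    SELECT  counter_name,
            cntr_value AS value_kb
    FROM    sys.dm_os_performance_counters
    WHERE   object_name LIKE '%Memory Manager%'
      AND   counter_name IN ('Total Server Memory (KB)', 'Target Server Memory (KB)');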

Irwin M. Fletcher
Thank you... sorry for the delay... I am trying this tool today... will get back to you...
BinaryHacker
I tried using this tool, but it looks like it needs a little more understanding of all those counters than I currently have. It produces a lot of information, but it may take an expert to interpret that data. I hoped there was an easier way.
BinaryHacker
+1  A: 

Unless your server is really underpowered, 75,000 records should not be a problem for tempdb, so I really doubt that is your problem.

Your prior question indicated SQL Server, so I'd suggest running a trace while the proc is running. You can get statement timings and other details from the trace and use them to determine where or what is slowing things down.
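
Here is a rough sketch of a server-side trace that captures statement-level timings inside the proc (the output path is only a placeholder; pick a folder the SQL Server service account can write to, and the .trc file must not already exist):

    -- Event 45 = SP:StmtCompleted: one row per statement executed inside a proc,
    -- with its duration, reads and writes.
    -- Columns: 1 = TextData, 12 = SPID, 13 = Duration, 14 = StartTime, 16 = Reads, 17 = Writes
    DECLARE @TraceID int, @maxfilesize bigint, @on bit;
    SET @maxfilesize = 50;   -- MB per rollover file
    SET @on = 1;

    EXEC sp_trace_create @TraceID OUTPUT, 2, N'C:\Traces\proc_timings', @maxfilesize, NULL;

    EXEC sp_trace_setevent @TraceID, 45, 1,  @on;
    EXEC sp_trace_setevent @TraceID, 45, 12, @on;
    EXEC sp_trace_setevent @TraceID, 45, 13, @on;
    EXEC sp_trace_setevent @TraceID, 45, 14, @on;
    EXEC sp_trace_setevent @TraceID, 45, 16, @on;
    EXEC sp_trace_setevent @TraceID, 45, 17, @on;

    EXEC sp_trace_setstatus @TraceID, 1;   -- start the trace
    SELECT @TraceID AS TraceID;            -- note this ID so you can stop it later

    -- When the run is done: stop and close the trace, then read the file.
    -- EXEC sp_trace_setstatus @TraceID, 0;
    -- EXEC sp_trace_setstatus @TraceID, 2;
    -- SELECT * FROM fn_trace_gettable(N'C:\Traces\proc_timings.trc', DEFAULT);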

You should be running each customer's processing in a separate transaction, or in small groups of customers. Otherwise, the working set of items that the ultimate transaction has to write keeps getting bigger, and each addition causes a rewrite. You can end up forcing your current data to be paged out, and that really slows things down.
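
A bare-bones sketch of that batching pattern — the table and column names here (dbo.CustomerQueue, Processed) are placeholders, not from your schema:

    -- Hypothetical example: work through the queue in small slices, each in its
    -- own transaction, so every commit stays cheap and locks are short-lived.
    DECLARE @BatchSize int;
    SET @BatchSize = 500;

    WHILE EXISTS (SELECT 1 FROM dbo.CustomerQueue WHERE Processed = 0)
    BEGIN
        BEGIN TRANSACTION;

        UPDATE TOP (@BatchSize) q
        SET    q.Processed = 1
        FROM   dbo.CustomerQueue AS q
        WHERE  q.Processed = 0;
        -- ... the real per-customer work would go here ...

        COMMIT TRANSACTION;
    END;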

Check the memory allocated to SQL Server. If it's too small, you end up paging SQL Server's processes. If it's too large, you can leave the OS without enough memory.

DaveE
Thanks Dave... a few things: 1. By running a trace, do you mean running Profiler? I tried running Profiler, but it gives me nothing but the first procedure calls; the internal calls from within that procedure are not captured. 2. I am reading this article and will get back to you. For now, I have changed the bigger transaction to smaller chunks as you suggested. 3. How do I know how much memory is allocated to SQL Server?
BinaryHacker
1. Yes. You're right, the standard Profiler will only show calls running through the database interface. I used a tool years ago called CAST Workbench ($$$) that would let you see 'into' an SP; I don't know if there's anything like it available anymore. 3. SQL Server Management Studio: connect to your server, right-click on the server name in the left pane, choose 'Properties', and look at the 'Memory' tab.
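If you'd rather check it from a query than the GUI, here's a quick sketch (changing 'show advanced options' needs ALTER SETTINGS permission; the display-only calls do not):

    -- 'max server memory (MB)' = 2147483647 means the default, effectively unlimited.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max server memory (MB)';
    EXEC sp_configure 'min server memory (MB)';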
DaveE