I have a number of stored procedures I call from code with ExecuteNonQuery.

It was all good but 2 of my stored procedures started timing out intermittently today with:

Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding. The statement has been terminated.

If I execute the sp manually from management studio it's still all good.

Nothing recently changed in my db - my command timeout is the default one.

Any clue?

EDIT

The table the SPs are running against is huge --> 15 gigs. Rebooted the box - same issue, but this time I can't get the SP to run from Management Studio either.

Thanks!

+2  A: 

Is your command timeout set? Has something in your db recently changed that is causing this proc to take longer?

If you have to diagnose locking issues, you will need to use something like sp_lock.
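For example, a quick blocking check might look like this (a sketch; the SPID is an example value you would take from the sp_who2 output):

```sql
-- List current locks; look for sessions in a WAIT status on your table
EXEC sp_lock;

-- Show active sessions; the BlkBy column identifies the blocking SPID
EXEC sp_who2;

-- Once you have the blocking SPID, inspect what that session is running
DBCC INPUTBUFFER(53); -- 53 is an example SPID from the sp_who2 output
```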

Can you share the source of one of your procs?

http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlcommand.commandtimeout.aspx

Sam Saffron
I am using transactions pretty extensively - it was all good till a few hours ago - not too sure setting a higher timeout will help.
JohnIdol
You can set it to 0, meaning it will not time out
Sam Saffron
Also, if it's fast in studio, maybe a transaction is blocking it. You need to diagnose with sp_lock
Sam Saffron
+5  A: 

Management studio sets an infinite timeout on queries/commands it runs. Your database connection from code will have a default timeout which you can change on the command object.

Sam Meldrum
You can inspect the connection properties by setting up a trace in SQL Server Profiler. From there, you'll be able to see the default timeout setting being used if a value is not explicitly set within your code.
Russ Cam
It happens pretty fast on management studio though (less than 1 sec) and on my code it is timing out after 30 secs or so.
JohnIdol
Connection timeout and command timeout should not be confused: connection timeout is the maximum time it may take to connect to the db
Sam Saffron
@sambo99 - thanks for the correction.
Sam Meldrum
@JohnIdol You may want to take a look at the query profiler to see what is going on. It may give you some extra clues, and some commands you can run through Query Analyzer to see if the query plan is the same as the one being used through Management Studio.
Sam Meldrum
Changing the timeout is the worst thing you can do. You need to diagnose and find the problem not just let it take more time.
HLGEM
Changing the timeout is not the worst thing you can do. If a query takes > 30 sec there's nothing you CAN do except change the timeout. Try querying 20-40 million rows and see how long that takes...
Matthew Brubaker
@HLGEM - Yes, but this would explain the difference between running in management studio and running in the code which was the question. The question was not about how to optimise his SP/query.
Sam Meldrum
It has started taking indefinitely long to execute in Management Studio as well now ---- I haven't a clue!
JohnIdol
+4  A: 

This can often relate to:

  • bad query plans due to over-eager plan-reuse (parameter sniffing)
  • different SET options - in particular ANSI_NULLS and CONCAT_NULL_YIELDS_NULL
  • locking (you might have a higher isolation level)
  • indexing needs to be rebuilt / stats updated / etc

The SET options can lead to certain index types not being usable (indexes on persisted calculated columns, for example - including "promoted" xml/udf queries)
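To make the parameter-sniffing point concrete, here is a sketch of the two common workarounds; the procedure, table, and column names are hypothetical, and the OPTION (RECOMPILE) variant assumes SQL Server 2005 or later:

```sql
-- Hypothetical proc illustrating two workarounds for parameter sniffing
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerId INT
AS
BEGIN
    -- Workaround 1: copy the parameter into a local variable, so the
    -- optimizer cannot sniff the caller's value when it builds the plan
    DECLARE @LocalCustomerId INT;
    SET @LocalCustomerId = @CustomerId;

    SELECT OrderId, OrderDate
    FROM dbo.Orders
    WHERE CustomerId = @LocalCustomerId;

    -- Workaround 2 (SQL Server 2005+): force a fresh plan per execution,
    -- trading some compile-time CPU for a plan suited to this value
    -- SELECT OrderId, OrderDate FROM dbo.Orders
    -- WHERE CustomerId = @CustomerId OPTION (RECOMPILE);
END
```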

Marc Gravell
Good call on the parameter sniffing - particularly pertinent to SQL Server 2000. I understand that parameter sniffing is less of a problem in SQL Server 2005 onwards. Good article on how to combat parameter sniffing here: http://blogs.msdn.com/khen1234/archive/2005/06/02/424228.aspx
Russ Cam
+4  A: 

Try to recompile these procedures. I've had such problems a few times and didn't find the cause of the problem, but recompiling always helps.

EDIT:

To recompile a proc, go to Management Studio, open the procedure to modify it and hit F5, or execute: EXEC sp_recompile 'proc_name'
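In practice that looks like the following (object names are examples, not from the question):

```sql
-- Mark the procedure so it gets a fresh plan on its next execution
EXEC sp_recompile 'dbo.MyProc';

-- sp_recompile also accepts a table name: every proc and trigger
-- referencing that table will be recompiled on its next run
EXEC sp_recompile 'dbo.MyBigTable';
```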

Michal Dymel
worth a shot! how do I do that?
JohnIdol
The best way is to EXEC sp_recompile 'proc_name'
Michal Dymel
I've seen this happen to views too where they need to be recreated after the source table(s) changed. Definitely worth a try.
Adam
+1  A: 

You might need to update statistics on the database. Also, has the indexing on the table changed recently?

Check the execution plan of the sp to see if you can find the bottleneck. Even if it ran ok before, it can probably be tuned to run more efficiently.

Also, how much data are you returning? We have had issues with poorly designed SQL in the past that didn't show up until a cumulative report started having more data in the result set. Not knowing what your SPs do, it is hard to say if this is a possibility, but it is worth mentioning for you to investigate.
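A minimal sketch of the statistics-refresh step (the table and procedure names are placeholders):

```sql
-- Refresh statistics for one table, scanning all rows rather than sampling
UPDATE STATISTICS dbo.MyBigTable WITH FULLSCAN;

-- Or refresh statistics for every table in the current database
EXEC sp_updatestats;

-- Capture the actual plan with estimated vs actual row counts per operator
SET STATISTICS PROFILE ON;
EXEC dbo.MyProc;  -- the stored procedure under investigation
SET STATISTICS PROFILE OFF;
```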

HLGEM
+1  A: 

SQL Server itself will wait indefinitely before returning to the user. More than likely a client-side timeout property is set. For example, you can set a timeout property on the ADO command object.

Andy Jones
A: 

Ok - this is how I fixed it in the end.

A clustered index on a table with 45 million records was killing my SQL Server - every insert from code was resulting in the nasty timeouts described in the question. Increasing the timeout tolerance wasn't going to solve my scalability issues, so I played around with indexes; making the clustered index on the primary key nonclustered unblocked the situation.

I'd appreciate comments on this to better understand how this fixed the problem.
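The change described above can be sketched as follows; the table, columns, and constraint name are hypothetical stand-ins for the real schema:

```sql
-- Hypothetical table created with a NONCLUSTERED primary key up front
CREATE TABLE dbo.Readings
(
    ReadingId INT IDENTITY(1,1) NOT NULL,
    SensorId  INT NOT NULL,
    ReadingAt DATETIME NOT NULL,
    Value     FLOAT NOT NULL,
    CONSTRAINT PK_Readings PRIMARY KEY NONCLUSTERED (ReadingId)
);

-- For an existing clustered PK, drop and recreate it as nonclustered
-- (this rebuilds the table's storage, so expect it to take a while):
-- ALTER TABLE dbo.Readings DROP CONSTRAINT PK_Readings;
-- ALTER TABLE dbo.Readings ADD CONSTRAINT PK_Readings
--     PRIMARY KEY NONCLUSTERED (ReadingId);
```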

JohnIdol
Was the PK integer based? If it was, no sense in clustering sequential data (right?). I've had situations in SQL2000 where a table had to be fully recreated before an index would be 'repaired', that even removing/reindexing couldn't fix.
Adam
Yes it was an integer - it was clustered by default as far as I know (I didn't make it clustered explicitly)
JohnIdol
A: 

Get the SQL profiler on it, compare results between running it in Management studio and via your app.

Andy Sweetman