We are using C# and Linq2SQL to get data from a database for some reports. In some cases this takes a while: more than 30 seconds, which is the default CommandTimeout.

So, I guess I have to up the CommandTimeout. But the question is, how much? Is it bad to just set it very high? Wouldn't it be bad if a customer was trying to do something and, just because he happened to have a lot more data in his database than the average customer, he couldn't get his reports out because of timeouts? But how can I know how much time it could potentially take? Is there some way to set it to infinity? Or is that considered very bad?

And where should I set it? I have a static database class which generates a new DataContext for me when I need it. Could I just create a constant and set it whenever I create a new DataContext? Or should it be set to different values depending on the use case? Is it bad to have a high timeout for something that won't take much time at all? Or doesn't it really matter?

Too high a CommandTimeout can of course be more annoying. But is there a case where a user/customer would actually want something to time out? Can SQL Server freeze so that a command never finishes?

A: 

CommandTimeout etc. should indeed only be increased on a per-scenario basis. This avoids unexpectedly long blocking scenarios (or worse: the undetected deadlock scenario). As for how high... how long does the query take? Add some headroom and you have your answer.
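As a sketch of the per-scenario approach (the context name, connection string, and 300-second value below are assumptions, not from the original post), you would raise the timeout only on the DataContext used for the heavy report, leaving everything else at the 30-second default:

```csharp
// Sketch: raise CommandTimeout only for the reporting scenario.
// "MyDataContext" and the 300-second value are illustrative assumptions.
using (var db = new MyDataContext())
{
    // DataContext.CommandTimeout applies to all commands issued
    // through this context instance; the default is 30 seconds.
    db.CommandTimeout = 300; // 5 minutes of headroom for a heavy report

    var rows = db.Orders
                 .Where(o => o.OrderDate.Year == 2008)
                 .ToList();
}
```

Because the setting lives on the context instance rather than in a global constant, contexts created for ordinary short-lived queries keep the default and still fail fast if something blocks.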

The other thing to do, of course, is to reduce the time the query takes. This might mean hand-optimising some TSQL in a sproc, usually in combination with checking the indexing strategy, and perhaps bigger changes such as denormalization, or other schema changes. This might also involve a data-warehousing strategy so you can shift load to a separate database (away from the transactional data), with a schema optimised for reporting. Maybe a star-schema.

I wouldn't set it to infinity... I don't expect it to take forever to run a report. Pick a number that makes sense for the report.

Yes, SQL Server can freeze so that a command never finishes. An open blocking transaction would be the simplest... get two and you can deadlock. Usually the system will detect a local deadlock - but not always, especially if DTC is involved (i.e. non-local locks).

Marc Gravell
How can I check the indexing strategy? I don't think any indexes have been added yet, but how can I see where I should add them? (Are primary keys already indexed?)
Svish
A: 

IMHO, an advanced option letting your user set the CommandTimeout value would be better than any constant value you determine.
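A minimal sketch of that idea, assuming the timeout is exposed as an app setting (the "ReportCommandTimeout" key and helper shape are assumptions, not part of the original answer):

```csharp
// Sketch: let the user override the report timeout via configuration.
// The "ReportCommandTimeout" app-setting key is an illustrative assumption.
int timeout;
string raw = System.Configuration.ConfigurationManager
                 .AppSettings["ReportCommandTimeout"];
if (!int.TryParse(raw, out timeout) || timeout <= 0)
{
    timeout = 30; // fall back to the 30-second default
}

using (var db = new MyDataContext())
{
    db.CommandTimeout = timeout;
    // ... run the report query here ...
}
```

This keeps the decision with the customer who actually knows how big their data is, instead of baking one number into the code.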

tafa
A: 

Primary keys get a clustered index by default. I found the following script (I think it was on MSDN) that generates the code to create any indexes SQL Server thinks would be useful (it definitely works on SQL 2008; I believe the missing-index DMVs were introduced in 2005):

SELECT
  migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) AS improvement_measure,
  'CREATE INDEX [missing_index_' + CONVERT (varchar, mig.index_group_handle) + '_' + CONVERT (varchar, mid.index_handle)
  + '_' + LEFT (PARSENAME(mid.statement, 1), 32) + ']'
  + ' ON ' + mid.statement
  + ' (' + ISNULL (mid.equality_columns, '')
    + CASE WHEN mid.equality_columns IS NOT NULL AND mid.inequality_columns IS NOT NULL THEN ',' ELSE '' END
    + ISNULL (mid.inequality_columns, '')
  + ')'
  + ISNULL (' INCLUDE (' + mid.included_columns + ')', '') AS create_index_statement,
  migs.*, mid.database_id, mid.[object_id]
FROM sys.dm_db_missing_index_groups mig
INNER JOIN sys.dm_db_missing_index_group_stats migs ON migs.group_handle = mig.index_group_handle
INNER JOIN sys.dm_db_missing_index_details mid ON mig.index_handle = mid.index_handle
WHERE migs.avg_total_user_cost * (migs.avg_user_impact / 100.0) * (migs.user_seeks + migs.user_scans) > 10
ORDER BY migs.avg_total_user_cost * migs.avg_user_impact * (migs.user_seeks + migs.user_scans) DESC
SillyMonkey