I want to know what methods I can use to monitor which of my procedures or queries are consuming more of a given resource (CPU time, scan/elapsed time, etc.) than a threshold value I have set.

I also want this logged: whenever someone uses my site and a procedure is called, I want a log entry for every procedure that crosses my threshold.

Is it possible to do this with SQL queries or stored procedures? Are there built-in procedures for this? I'm also open to SQL tools or external tools, paid (with a trial) or free; I want to try them on my database.

+4  A: 

If you have SQL 2005+ you can run the standard reports in Management Studio: right-click the database, open the standard reports, and select "Object Execution Statistics". Note this only covers activity since the last restart. You can also query the same data directly using the DMV sys.dm_exec_query_stats.
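As a minimal sketch of querying that DMV against a threshold (the 500 ms figure is just an illustration; times in the DMV are in microseconds):

-- Statements averaging more than ~500 ms of CPU per execution
SELECT TOP 20
    qs.execution_count,
    qs.total_worker_time / qs.execution_count / 1000.0 AS avg_cpu_ms,
    qs.total_elapsed_time / qs.execution_count / 1000.0 AS avg_elapsed_ms,
    st.text AS batch_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE qs.total_worker_time / qs.execution_count > 500 * 1000  -- threshold in microseconds
ORDER BY avg_cpu_ms DESC;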

If you are on an older version of SQL Server, use SQL Server Profiler. It will show you execution time, reads, CPU and writes, and you can filter on thresholds for any of these.

u07ch
+1  A: 

On the paid tool side, I'd highly recommend Ignite for SQL Server from Confio Software. You can download a free trial and take it for a spin. We've been using it for about 2 years now and are very happy with the results. In our case, we picked off several "low hanging fruit" stored procedures that we optimized for quick wins just in the time we ran the demo. That was enough to convince us that it was a worthwhile investment.

Joe Stefanelli
A: 

You could take a look at RedGate's product SQL Response.

SQL Response captures information relevant to each alert raised, so you can view what has occurred and when to diagnose the cause of the issue. If you need finer details for your analysis, capture SQL Profiler trace data with SQL Response for an efficient examination of SQL statements executed around the time alerts are raised.

Supported alert types include:

  • Long running SQL queries
  • ...

It currently costs from $495, and they offer a 14-day free trial.

Mark Byers
+3  A: 

You should be able to do this using Dynamic Management Views (DMVs). In particular, you will probably be most interested in sys.dm_exec_query_stats, which maintains execution statistics (CPU time, physical/logical reads, etc.) on all cached queries, grouped by execution plan.

There are also some excellent articles online that include sample queries for viewing plan statistics and go into a lot more detail on the subject.

Finally, if you want to trace and record excessively long-running queries, you might want to consider leaving a SQL Server Profiler trace running at all times, with a filter on execution time set to some high figure (e.g. > 1000 ms). You can either use the SQL Server Profiler Windows application, or create the trace using T-SQL and have it log to a table in the database instead.
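As a minimal sketch of the T-SQL route (the file path, table name dbo.SlowQueryLog and 1-second threshold are placeholders; note that a server-side trace writes to a file, which you then load into a table with sys.fn_trace_gettable):

DECLARE @TraceID int;
DECLARE @maxsize bigint;
DECLARE @duration bigint;
SET @maxsize = 100;        -- max trace file size in MB
SET @duration = 1000000;   -- 1 second, in microseconds (SQL 2005+)

-- Create the trace (writes to C:\Traces\slow_queries.trc)
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\slow_queries', @maxsize, NULL;

-- Capture TextData (1), Duration (13) and StartTime (14) for SQL:BatchCompleted (12);
-- add event 10 (RPC:Completed) as well to catch stored procedure calls
DECLARE @on bit;
SET @on = 1;
EXEC sp_trace_setevent @TraceID, 12, 1, @on;
EXEC sp_trace_setevent @TraceID, 12, 13, @on;
EXEC sp_trace_setevent @TraceID, 12, 14, @on;

-- Only keep events where Duration > 1 second (column 13, comparison 4 = greater than)
EXEC sp_trace_setfilter @TraceID, 13, 0, 4, @duration;

-- Start the trace
EXEC sp_trace_setstatus @TraceID, 1;

-- Later (e.g. after stopping the trace with sp_trace_setstatus @TraceID, 0),
-- read the captured events into a table
SELECT * INTO dbo.SlowQueryLog
FROM sys.fn_trace_gettable(N'C:\Traces\slow_queries.trc', DEFAULT);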

This has the benefit of telling you exactly which query took how long, when it ran, and what its parameters were (holy SQL Batman!).

The performance impact of running this trace on a loaded database is in fact very small - I know of surprisingly critical applications which have these traces running as a matter of routine in order to be able to quickly diagnose performance issues (and it does help a lot). The key is choosing an execution-time threshold large enough not to swamp the log, yet small enough to pick up enough long-running queries to be useful.

Another trick that has been used in the past when diagnosing performance issues is to leave an unfiltered SQL Server trace running for a short period of time (a minute or so) on a loaded SQL Server - it really does have surprisingly little effect; you just get swamped with logs.

I also heartily recommend the Microsoft SQL Server Internals books on this subject - they are very technical, but brilliant because they cover not only these sorts of diagnostic tools, but also what the numbers they report actually mean.

Kragen
There is a whole wealth of information on Google on how to use Dynamic Management Views to extract all sorts of performance information from SQL server.
Kragen
A: 

If you are using SQL Server 2008, you can experiment with Resource Governor, which allows you to throttle and monitor CPU and memory workloads.
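As a minimal sketch (the pool, group and login names are purely illustrative), you could cap a workload at 20% CPU and route a particular login into it:

-- Pool and workload group (run in master; names are hypothetical)
CREATE RESOURCE POOL ReportPool WITH (MAX_CPU_PERCENT = 20);
CREATE WORKLOAD GROUP ReportGroup USING ReportPool;
GO

-- Classifier function: route the 'report_user' login into ReportGroup
CREATE FUNCTION dbo.rg_classifier() RETURNS sysname WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp sysname;
    SET @grp = N'default';
    IF SUSER_SNAME() = N'report_user' SET @grp = N'ReportGroup';
    RETURN @grp;
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;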

For logging purposes, you can use the Data Collector. You can create a custom collection set that gathers snapshots from any DMVs and uploads them to a performance data warehouse.
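A rough sketch of a custom collection set, following the documented msdb procedures for the Generic T-SQL Query collector type (the set name, schedule and snapshot query are all illustrative):

USE msdb;
DECLARE @set_id int, @set_uid uniqueidentifier;
EXEC sp_syscollector_create_collection_set
    @name = N'Query Stats Snapshots',
    @collection_mode = 1,                -- non-cached: collect and upload on the same schedule
    @days_until_expiration = 14,
    @schedule_name = N'CollectorSchedule_Every_15min',
    @collection_set_id = @set_id OUTPUT,
    @collection_set_uid = @set_uid OUTPUT;

DECLARE @type_uid uniqueidentifier;
SELECT @type_uid = collector_type_uid
FROM syscollector_collector_types
WHERE name = N'Generic T-SQL Query Collector Type';

DECLARE @item_id int;
EXEC sp_syscollector_create_collection_item
    @collection_set_id = @set_id,
    @collector_type_uid = @type_uid,
    @name = N'dm_exec_query_stats snapshot',
    @frequency = 900,  -- seconds
    @parameters = N'<ns:TSQLQueryCollector xmlns:ns="DataCollectorType">
        <Query><Value>SELECT * FROM sys.dm_exec_query_stats</Value>
        <OutputTable>dm_exec_query_stats_snapshot</OutputTable></Query>
    </ns:TSQLQueryCollector>',
    @collection_item_id = @item_id OUTPUT;

EXEC sp_syscollector_start_collection_set @collection_set_id = @set_id;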

Idera has charting tools that display resource consumption per query. I believe it also lets you set thresholds and email alerts.

Ching Chang
A: 

These are what I normally use. I can't remember where I got them from, but they work a treat. This one's for longest-running queries:

SELECT TOP 100
    qs.total_elapsed_time / qs.execution_count / 1000000.0 AS average_seconds,
    qs.total_elapsed_time / 1000000.0 AS total_seconds,
    qs.execution_count,
    -- Extract the individual statement from the batch text (offsets are byte offsets,
    -- hence the /2 for nvarchar and the +1 to convert to 1-based character positions)
    SUBSTRING (qt.text, qs.statement_start_offset/2 + 1,
         (CASE WHEN qs.statement_end_offset = -1
            THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
          ELSE qs.statement_end_offset END - qs.statement_start_offset)/2 + 1) AS individual_query,
    o.name AS object_name,
    DB_NAME(qt.dbid) AS database_name
FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
    LEFT OUTER JOIN sys.objects o ON qt.objectid = o.object_id
WHERE qt.dbid = DB_ID()
ORDER BY average_seconds DESC;

...and this one gives a list of queries using the most I/O:

SELECT TOP 100
    (total_logical_reads + total_logical_writes) / qs.execution_count AS average_IO,
    (total_logical_reads + total_logical_writes) AS total_IO,
    qs.execution_count AS execution_count,
    -- Same statement-extraction logic as above (byte offsets, +1 for 1-based positions)
    SUBSTRING (qt.text, qs.statement_start_offset/2 + 1,
         (CASE WHEN qs.statement_end_offset = -1
            THEN LEN(CONVERT(NVARCHAR(MAX), qt.text)) * 2
          ELSE qs.statement_end_offset END - qs.statement_start_offset)/2 + 1) AS individual_query,
    o.name AS object_name,
    DB_NAME(qt.dbid) AS database_name
FROM sys.dm_exec_query_stats qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS qt
    LEFT OUTER JOIN sys.objects o ON qt.objectid = o.object_id
WHERE qt.dbid = DB_ID()
ORDER BY average_IO DESC;

Hope these are of some help!

Billious
A: 

From the winning answer to a recent SO question, this will give you the top 50 most used procs and the statements in the procs. You can change the TOP 50 to TOP 1 or any other number you'd like to see.

SELECT TOP 50 *
FROM (SELECT COALESCE(OBJECT_NAME(s2.objectid), 'Ad-Hoc') AS ProcName,
             execution_count,
             s2.objectid,
             (SELECT TOP 1
                     SUBSTRING(s2.TEXT, statement_start_offset / 2 + 1,
                               ((CASE WHEN statement_end_offset = -1
                                      THEN (LEN(CONVERT(NVARCHAR(MAX), s2.TEXT)) * 2)
                                      ELSE statement_end_offset
                                 END) - statement_start_offset) / 2 + 1)) AS sql_statement,
             last_execution_time
      FROM sys.dm_exec_query_stats AS s1
           CROSS APPLY sys.dm_exec_sql_text(sql_handle) AS s2) x
WHERE sql_statement NOT LIKE 'SELECT TOP 50 *%OBJECT_NAME(s2.objectid)%'  -- exclude this query itself
  AND OBJECTPROPERTYEX(x.objectid, 'IsProcedure') = 1
  AND EXISTS (SELECT 1 FROM sys.procedures s
              WHERE s.is_ms_shipped = 0
                AND s.name = x.ProcName)
ORDER BY execution_count DESC

You could run this periodically to see the changes over time: put it in a stored procedure that inserts the results into a log table, and set up a scheduled job to run that sproc at fixed intervals.
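A minimal sketch of that logging setup (the names dbo.ProcUsageLog and dbo.LogTopProcs are just placeholders):

CREATE TABLE dbo.ProcUsageLog (
    captured_at         datetime NOT NULL DEFAULT GETDATE(),
    ProcName            sysname  NOT NULL,
    execution_count     bigint   NOT NULL,
    last_execution_time datetime NULL
);
GO

CREATE PROCEDURE dbo.LogTopProcs
AS
BEGIN
    -- Snapshot the most-executed procedures into the log table
    INSERT INTO dbo.ProcUsageLog (ProcName, execution_count, last_execution_time)
    SELECT TOP 50
        OBJECT_NAME(st.objectid, st.dbid),
        SUM(qs.execution_count),
        MAX(qs.last_execution_time)
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    WHERE st.objectid IS NOT NULL    -- procedures and functions only, not ad-hoc SQL
    GROUP BY st.objectid, st.dbid
    ORDER BY SUM(qs.execution_count) DESC;
END;
GO

You can then schedule EXEC dbo.LogTopProcs from a SQL Server Agent job.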

DOK
A: 

Also note that in SQL 2008 there is a new DMV specifically for procedures: sys.dm_exec_procedure_stats. This view will give you cumulative stats for the entire procedure, including time and CPU spent doing non-query related work such as WAITFOR, function calls and conditional logic.
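As a minimal sketch of flagging procedures whose average duration exceeds a threshold (the 500 ms figure is illustrative; the DMV reports times in microseconds):

SELECT
    OBJECT_NAME(ps.object_id, ps.database_id) AS proc_name,
    ps.execution_count,
    ps.total_elapsed_time / ps.execution_count / 1000.0 AS avg_elapsed_ms,
    ps.total_worker_time  / ps.execution_count / 1000.0 AS avg_cpu_ms
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.total_elapsed_time / ps.execution_count > 500 * 1000  -- threshold: 500 ms
ORDER BY avg_elapsed_ms DESC;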

Please do be cautious about running Profiler traces on your production servers. Tracing introduces overhead on the server, even minimal tracing; depending on how busy your server is, this can actually be noticeable. Adding a filter to the trace definition does not reduce this overhead - SQL Server still has to process every event fired to determine whether it meets the filter criteria. Filtering only reduces the size of the trace file.

Another tool you may want to look into if you're dealing with SQL 2008 is the Data Collector and Management Data Warehouse. This tool ships with SQL Server (at no extra cost) and lets you collect query statistics on a regular basis and upload them to a data warehouse. It comes with built-in collectors and reports that are quite comprehensive for the most common monitoring needs.

http://msdn.microsoft.com/en-us/library/dd939169(SQL.100).aspx

Of all the things mentioned above, I think your best bet is sys.dm_exec_query_stats as it is low-overhead, free, and available on both SQL 2005 and SQL 2008.

bluefooted