views: 167
answers: 8

Jeff mentioned in one of the podcasts that one of the things he always does is put in instrumentation for database calls, so that he can tell which queries are causing slowness, etc. This is something I've measured in the past using SQL Profiler, but I'm interested in what strategies other people have used to include this as part of the application.

Is it simply a case of including a timer across each database call and logging the result, or is there a 'neater' way of doing it? Maybe there's a framework that does this for you already, or is there a flag I could enable in, say, LINQ to SQL that would provide similar functionality?

I mainly use C#, but I'd also be interested in seeing methods from other languages, and I'd prefer a 'code' way of doing this over a database-platform tool like SQL Profiler.
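
For example, the naive version I have in mind looks something like this (just a sketch; the helper name and logging call are mine, for illustration only):

    using System;
    using System.Data;
    using System.Diagnostics;

    public static class InstrumentedDb
    {
        // Hypothetical helper: wrap any ADO.NET call in a Stopwatch and
        // log the command text with the elapsed time.
        public static T Execute<T>(IDbCommand cmd, Func<IDbCommand, T> exec)
        {
            var sw = Stopwatch.StartNew();
            try
            {
                return exec(cmd);
            }
            finally
            {
                sw.Stop();
                // Swap in your real logging framework here.
                Debug.WriteLine(string.Format("[{0} ms] {1}",
                    sw.ElapsedMilliseconds, cmd.CommandText));
            }
        }
    }

    // Usage: var count = (int)InstrumentedDb.Execute(cmd, c => c.ExecuteScalar());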

+1  A: 

If a query is more than just a simple SELECT on a single table, I always run it through EXPLAIN if I am on MySQL or PostgreSQL. If you are using SQL Server, then Management Studio has a Display Estimated Execution Plan option, which is essentially the same. It is useful to see how the engine will access each table and which indexes it will use. Sometimes it will surprise you.
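
If you'd rather capture the plan from code than from a console, something along these lines works on PostgreSQL (a rough sketch using the Npgsql driver; the connection string and the orders query are placeholders):

    using System;
    using Npgsql;

    class ExplainExample
    {
        static void Main()
        {
            // Placeholder connection string; adjust for your server.
            using (var conn = new NpgsqlConnection(
                "Host=localhost;Database=mydb;Username=me"))
            {
                conn.Open();
                // Prefix the query under investigation with EXPLAIN, or
                // EXPLAIN ANALYZE to actually run it and get real timings.
                using (var cmd = new NpgsqlCommand(
                    "EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42",
                    conn))
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine(reader.GetString(0)); // one plan line per row
                }
            }
        }
    }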

John Downey
A: 

If you're writing queries in SQL Server Management Studio you can enter SET STATISTICS TIME ON, and SQL Server will tell you how long the individual parts of a query took to parse, compile and execute. You might be able to log this information by handling the InfoMessage event of the SqlConnection class (but I think using SQL Profiler is much easier).
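
In code it would look roughly like this (an untested sketch; the connection string and the query are placeholders):

    using System;
    using System.Data.SqlClient;

    class StatisticsTimeExample
    {
        static void Main()
        {
            using (var conn = new SqlConnection(
                "Data Source=.;Initial Catalog=MyDb;Integrated Security=SSPI"))
            {
                // STATISTICS TIME output arrives as informational messages,
                // which surface through the InfoMessage event.
                conn.InfoMessage += (sender, e) => Console.WriteLine(e.Message);
                conn.Open();

                using (var cmd = new SqlCommand("SET STATISTICS TIME ON", conn))
                    cmd.ExecuteNonQuery();

                using (var cmd = new SqlCommand("SELECT COUNT(*) FROM Orders", conn))
                    cmd.ExecuteScalar(); // parse/compile/execute times are logged above
            }
        }
    }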

BTB
+1  A: 

Recording the database calls, the gross timing and the number of records (bytes) returned in the application is useful, but it's not going to give you all the information you need.

It might show you usage patterns you were not expecting. It might show where you're using "row-by-row" access instead of set-based operations.

The best tool to use is SQL Profiler; analyse the number of reads vs. the CPU and duration. You want to avoid high-CPU queries, high read counts and long durations (duh!).

The "group by reads" is a useful feature to bring to the top the nastiest queries.

Guy
A: 

I would have thought that the important thing to ask here is "what database platform are you using?"

For example, in Sybase, installing the MDA tables might solve your problem: they provide a whole bunch of statistics, from procedure call usage to average logical I/O, CPU time and index coverage. It can be as clever as you want it to be.
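
Once the MDA tables are installed they can be queried like any other table. As an illustration (a sketch over plain ODBC; the DSN is a placeholder and I'm quoting the monSysStatement column names from memory, so check them against your ASE version):

    using System;
    using System.Data.Odbc;

    class MdaTableExample
    {
        static void Main()
        {
            // Placeholder DSN; any ADO.NET provider for ASE works too.
            using (var conn = new OdbcConnection("DSN=sybase_ase"))
            using (var cmd = new OdbcCommand(
                // monSysStatement holds per-statement CPU and I/O figures;
                // the column names here are assumed and may vary by version.
                "SELECT SPID, CpuTime, LogicalReads, PhysicalReads " +
                "FROM master..monSysStatement", conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine("spid {0}: cpu={1} logical={2} physical={3}",
                            reader["SPID"], reader["CpuTime"],
                            reader["LogicalReads"], reader["PhysicalReads"]);
                }
            }
        }
    }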

ninesided
A: 

I definitely see the value in using SQL Profiler while your app is running, and EXPLAIN or SET STATISTICS will give you information about individual queries, but does anyone routinely put measurement points into their code to gather this kind of information on an ongoing basis? That would pick up on, for example, a query on a table that performs fine initially but becomes slower and slower as the number of rows grows.

If you're using MySQL or Postgres there are various tools for seeing query activity in real time, but I haven't found a tool as good as SQL Profiler for measuring query performance over time.

I'm wondering if there is (or should be?) something similar to ELMAH, in the way it just plugs in and gives you information without much additional effort.
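
What I have in mind is something along these lines, left running in the application (only a sketch; all of the names are made up):

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    // Hypothetical in-app collector: accumulates running statistics per
    // query so that a gradual slowdown shows up in the numbers over time.
    public class QueryStatsCollector
    {
        private class Stats { public long Count; public long TotalMs; public long MaxMs; }
        private readonly Dictionary<string, Stats> _stats = new Dictionary<string, Stats>();
        private readonly object _lock = new object();

        public T Time<T>(string queryName, Func<T> run)
        {
            var sw = Stopwatch.StartNew();
            try { return run(); }
            finally
            {
                sw.Stop();
                lock (_lock)
                {
                    Stats s;
                    if (!_stats.TryGetValue(queryName, out s))
                        _stats[queryName] = s = new Stats();
                    s.Count++;
                    s.TotalMs += sw.ElapsedMilliseconds;
                    if (sw.ElapsedMilliseconds > s.MaxMs) s.MaxMs = sw.ElapsedMilliseconds;
                }
            }
        }

        public void Report()
        {
            lock (_lock)
                foreach (var pair in _stats)
                    Console.WriteLine("{0}: {1} calls, avg {2} ms, max {3} ms",
                        pair.Key, pair.Value.Count,
                        pair.Value.TotalMs / Math.Max(1, pair.Value.Count),
                        pair.Value.MaxMs);
        }
    }

You could then dump Report() periodically and watch the averages drift upwards as the tables grow.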

Whisk
A: 

"If you're using MySQL or Postgres there are various tools for seeing query activity in real time, but I haven't found a tool as good as SQL Profiler for measuring query performance over time."

Can anyone give a list of these real-time stats tools for MySQL or PostgreSQL?

kaybenleroll
A: 

If you're into Firebird, you may want to watch sinatica.com.
We'll soon launch a real-time monitoring tool for Firebird DBAs.

</shameless plug>

Douglas Tosi
A: 

If you use Hibernate (I use the Java version; I'd imagine NHibernate has something similar), you can have Hibernate collect statistics about lots of different things. See, for example:

http://www.javalobby.org/java/forums/t19807.html
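
On the NHibernate side I'd expect it to look roughly like this (a sketch only; I haven't verified the exact setting and property names against NHibernate, so treat them as assumptions):

    using System;
    using NHibernate;
    using NHibernate.Cfg;

    class HibernateStatsExample
    {
        static void Main()
        {
            var cfg = new Configuration().Configure(); // reads hibernate.cfg.xml
            // Statistics collection is off by default; this setting name
            // mirrors the Java one and is assumed here.
            cfg.SetProperty("generate_statistics", "true");

            using (ISessionFactory factory = cfg.BuildSessionFactory())
            {
                // ... normal data access goes here ...

                var stats = factory.Statistics; // property names below are assumed
                Console.WriteLine("Queries executed: {0}", stats.QueryExecutionCount);
                Console.WriteLine("Slowest query: {0} ({1})",
                    stats.QueryExecutionMaxTimeQueryString,
                    stats.QueryExecutionMaxTime);
            }
        }
    }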

James Kingsbery