views: 378
answers: 3

Do you have any formal or informal standards for reasonably achievable SQL query speed? How do you enforce them? Assume a production OLTP database under a full, realistic production load of a couple dozen queries per second, properly equipped and configured.

Personal example for illustrative purposes (not a recommendation, highly contingent on many factors, some outside your control):

Expectation:

Each transactional unit (a single statement, a multi-statement transaction from begin to commit, or a single stored procedure, whichever is largest) must execute in 1 second or less on average, with no anomalous outliers.
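
One way to check an expectation like this on SQL Server is to query the cached-plan statistics DMVs for statements whose average elapsed time exceeds the threshold. A rough sketch, assuming sys.dm_exec_query_stats is available; the 1-second threshold and the TOP (20) limit are illustrative, and the DMV reports times in microseconds:

    -- Statements whose average elapsed time exceeds 1 second,
    -- based on cached-plan statistics (microsecond units).
    SELECT TOP (20)
           qs.execution_count,
           qs.total_elapsed_time / qs.execution_count / 1000.0 AS avg_elapsed_ms,
           qs.max_elapsed_time / 1000.0                        AS max_elapsed_ms,
           st.text                                             AS batch_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    WHERE qs.total_elapsed_time / qs.execution_count > 1000000  -- average > 1 second
    ORDER BY avg_elapsed_ms DESC;

The same numbers can be tracked over time with a scheduled job that snapshots the DMV, or with Query Store on newer versions.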

Resolution:

Slower queries must be optimized to meet the standard. Slow queries for reports and other analysis are moved to an OLAP cube (best case) or a static snapshot database.

(Obviously, write queries (INSERT/UPDATE/DELETE) can't be moved, so they must be optimized in place; in my experience so far that has been achievable.)
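
As a concrete illustration of the "static snapshot database" option above, on SQL Server a database snapshot gives reporting queries a read-only, point-in-time view of the source database (pages are copied to a sparse file only as the source changes). A sketch with hypothetical names: the source database Sales, its logical data file Sales_Data, and the snapshot file path are all placeholders:

    -- Create a point-in-time, read-only snapshot of the Sales database.
    -- Reporting queries are then pointed at Sales_Reporting instead of Sales.
    CREATE DATABASE Sales_Reporting
    ON ( NAME = Sales_Data, FILENAME = 'D:\Snapshots\Sales_Reporting.ss' )
    AS SNAPSHOT OF Sales;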

+4  A: 

Given that you can't expect deterministic performance on a system that could (at least in theory) be subject to transient load spikes, you want your performance SLA to be probabilistic. An example of this might be:

95% of transactions to complete within 2 seconds.
95% of search queries (more appropriate for a search screen) to complete within 10 seconds.
95% of operational reports to complete within 10 seconds.
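
One way to measure a percentile-based SLA like this is to log durations (application-side timing, a trace, or an Extended Events session) and compute the 95th percentile over a rolling window. A T-SQL sketch, where query_log, query_type, duration_ms and logged_at are hypothetical names:

    -- 95th-percentile duration per workload class over the last 24 hours.
    SELECT DISTINCT
           query_type,
           PERCENTILE_CONT(0.95)
               WITHIN GROUP (ORDER BY duration_ms)
               OVER (PARTITION BY query_type) AS p95_duration_ms
    FROM dbo.query_log
    WHERE logged_at >= DATEADD(DAY, -1, SYSUTCDATETIME());

Comparing p95_duration_ms per class against the 2/10/10-second targets then becomes a simple scheduled check.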

Transactional and search queries can't be moved off the transactional system, so the only actions you can take are database or application tuning, or buying faster hardware.

For operational reports, you need to be ruthless about what qualifies as an operational report. Only reports that absolutely need to have access to up-to-date data should be run off the live system. Reports that do a lot of I/O are very anti-social on a production system, and normalised schemas tend to be quite inefficient for reporting. Move any reports that do not require real-time data off onto a data warehouse or some other separate reporting facility.

ConcernedOfTunbridgeWells
+1  A: 

I usually go by the one-second rule when writing or refactoring stored procedures, although my workplace doesn't have any specific rules about this; it's just common sense. Experience tells me that if a procedure that doesn't perform any large bulk inserts takes ten seconds or more to execute, there are usually serious problems in the code that can easily be corrected.

The most common problem I encounter in stored procedures with poor performance is incorrect use of indexes, which leads to costly scan operations instead of cheap index seeks.
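
For example, a filter on an unindexed column forces a scan of the whole table, while a covering nonclustered index usually turns it into a seek. Table and column names here are hypothetical, and @CustomerId stands for a procedure parameter:

    -- Without an index on CustomerId, this filter scans all of dbo.Orders.
    SELECT OrderId, OrderDate, Total
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId;

    -- A nonclustered index on the filtered column, covering the selected
    -- columns, lets the optimizer seek directly to the matching rows.
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
        ON dbo.Orders (CustomerId)
        INCLUDE (OrderDate, Total);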

jandersson
+1  A: 

O(N) is good; anything worse, such as O(N^2), will eventually be too slow.
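
In SQL terms, the classic way to land in the N^2 bucket is per-row work against an unindexed table, for example a correlated subquery that rescans it once per outer row; the set-based, indexed form stays close to linear. The tables here are hypothetical, and a good optimizer may rewrite the first form anyway:

    -- Roughly O(N^2) when dbo.Orders has no index on CustomerId:
    -- the subquery rescans Orders once per customer.
    SELECT c.CustomerId,
           (SELECT COUNT(*)
            FROM dbo.Orders o
            WHERE o.CustomerId = c.CustomerId) AS OrderCount
    FROM dbo.Customers c;

    -- Closer to O(N) with an index on Orders(CustomerId): one pass,
    -- set-based aggregation.
    SELECT c.CustomerId, COUNT(o.OrderId) AS OrderCount
    FROM dbo.Customers c
    LEFT JOIN dbo.Orders o ON o.CustomerId = c.CustomerId
    GROUP BY c.CustomerId;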

rabble