views: 896
answers: 14

What is the most common performance bottleneck that is not caused by the database structure?

A: 

Applying a scalar function to each row in the result set:

SELECT fn_EvaluateSomeValue([ColumnName]) FROM LargeTable
klashar
Not necessarily bad. If you need to apply the function, you have to do it in the database or in the client. If you do it in the database you get parallelism for free. The real issue is using a scalar function in the WHERE clause.
erikkallen
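As the comment above notes, the real trouble starts when the function wraps a column in the WHERE clause, because the optimizer can no longer seek an index on that column. A minimal sketch of the difference (table and column names are hypothetical):

```sql
-- Non-sargable: the function must run against every row, forcing a scan
SELECT OrderID
FROM Orders
WHERE YEAR(OrderDate) = 2009;

-- Sargable rewrite: the column is left bare, so an index on OrderDate can be seeked
SELECT OrderID
FROM Orders
WHERE OrderDate >= '2009-01-01'
  AND OrderDate <  '2010-01-01';
```

Both queries return the same rows; only the second gives the optimizer a usable range predicate.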
A: 

Triggers can be a HUGE bottleneck (and headache) when you are processing large batches of commands.

IPX Ares
if you have written them badly
HLGEM
Define badly... In my experience across a variety of work environments, triggers usually do more than trivial stuff, like cascading updates and/or inserts/updates to other tables. True that most of the time this isn't taken to extremes, but I have seen a few occasions where "simple" triggers brought the whole database to a grinding halt.
IPX Ares
They need not be a problem. I've seen an application where each table had a trigger with a few K LOC (!!!). Bad design, but it didn't affect performance too much.
erikkallen
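One common way a trigger becomes a batch-processing bottleneck is assuming single-row inserts and looping over the `inserted` pseudo-table one row at a time. A set-based trigger handles the whole batch in one statement; a hedged T-SQL sketch (table names hypothetical):

```sql
-- Set-based trigger: a 10,000-row batch insert produces one audit INSERT,
-- not 10,000 individual statements
CREATE TRIGGER trg_Orders_Audit ON Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO OrderAudit (OrderID, AuditDate)
    SELECT i.OrderID, GETDATE()
    FROM inserted AS i;
END;
```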
+2  A: 
  1. Server - Like memory and storage types.
  2. Network - Latency and bandwidth issues.
  3. Indexing - not sure if you consider this database structure
  4. Queries - Bad queries with join issues that may cause full table scans.
northpole
IMHO indexes (as well as statistics) are database structure rather than database logic.
klashar
A: 

Using cursors. For any database.

Otávio Décio
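The usual fix is to replace row-by-row cursor logic with a single set-based statement. A sketch of the contrast (table and column names hypothetical):

```sql
-- Cursor version: one round trip and one UPDATE per row
DECLARE c CURSOR FOR
    SELECT CustomerID FROM Customers WHERE Region = 'EU';
-- ... FETCH loop issuing UPDATE Customers SET ... WHERE CustomerID = @id ...

-- Set-based version: one statement, and the engine can optimize the whole operation
UPDATE Customers
SET DiscountRate = 0.05
WHERE Region = 'EU';
```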
+2  A: 

Physical.... Running out of memory and having to go to disk.

Robin Day
+3  A: 

Table scans, because:

  • index does not exist
  • stats out of date
  • functions in where clause prevent usage
KM
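Each of those causes has a direct remedy: create the missing index, refresh the statistics, and rewrite the predicate so the column is bare (see the sargability example earlier in the thread). For the stale-statistics case, the SQL Server syntax looks like this:

```sql
-- Refresh statistics on one table...
UPDATE STATISTICS dbo.LargeTable;

-- ...or across all tables in the database
EXEC sp_updatestats;
```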
+5  A: 

Let's see (in no particular order):

  1. Cursors

  2. non-sargable where clauses

  3. failure to index foreign key fields

  4. failure to index fields commonly used in the where clause

  5. correlated subqueries

  6. accidental cross joins causing the need to distinct the result set

  7. Badly performing code coming from ORMs

  8. Code that causes too many reads, or is called more often than it needs to be (I've seen applications that send the same query many times unnecessarily, or every time a page is opened)

  9. network pipe overloaded

  10. User defined functions causing row-by-row processing

  11. Parameter sniffing

  12. out of date statistics

  13. Union instead of union all

HLGEM
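On point 13: UNION must sort or hash the combined result to remove duplicates, while UNION ALL simply concatenates the two sets. When the sets cannot overlap (or duplicates are acceptable), the ALL form skips that work entirely. A sketch with hypothetical archive tables:

```sql
-- Implicit DISTINCT over the combined rows: extra sort/hash work
SELECT OrderID FROM Orders2008
UNION
SELECT OrderID FROM Orders2009;

-- No duplicate removal: safe here because the two years cannot overlap
SELECT OrderID FROM Orders2008
UNION ALL
SELECT OrderID FROM Orders2009;
```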
A: 

If the I/O system in the server isn't up to the job, you can get latch contention on tempdb and this in turn can cause serious performance problems.

Ken Keenan
A: 

I tend to run into the following bottlenecks (in order of frequency):

  1. Missing or incorrect indices (resulting in table scans)
  2. Poorly written queries
  3. I/O contention
Nathan Harkenrider
A: 

Using table data one row at a time instead of one table at a time (i.e. cursors).

Unnecessary (or badly designed) locks.

Logging things that don't need to be logged (DELETE FROM table instead of TRUNCATE TABLE, etc.)

Leaving constraints enabled when bulk loading data. This seriously slows down the insert. Turn them off, load the data, then turn them back on.

BoltBait
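The turn-off/load/turn-on pattern for constraints looks roughly like this in T-SQL (table and constraint names hypothetical; the WITH CHECK on re-enable makes SQL Server re-validate the loaded rows so the constraint is trusted again):

```sql
-- Disable the foreign key before the bulk load
ALTER TABLE dbo.Sales NOCHECK CONSTRAINT FK_Sales_Customers;

-- ... BULK INSERT / bcp the data here ...

-- Re-enable and re-validate the constraint
ALTER TABLE dbo.Sales WITH CHECK CHECK CONSTRAINT FK_Sales_Customers;
```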
A: 

Ditto re: cursors. Also, badly written client apps that fetch a huge recordset and then process it themselves, because the developer didn't understand that SQL is anything more than a big heap to hold data in.

Trying to do everything in one huge query instead of breaking it down into sensible steps. Sometimes this doesn't give SQL a chance to correctly reduce the number of index (hopefully!) pages it has to look at. Related to this: not understanding how to use temp tables to hand off intermediate results before updating large indexed tables.

Not having indexes on joins or commonly queried fields.

Not including fields in indexes when you always seek on one value and return another (for instance, an employee's formatted name when you look up by EmployeeID), causing two actions instead of one.

dan
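The "two actions instead of one" above is a key lookup: the seek finds the row in the index, then a second hop fetches the name from the base table. Adding the column as an included column makes the index covering, so the second hop disappears. SQL Server syntax, with hypothetical names:

```sql
-- Covering index: the seek on EmployeeID also carries FormattedName,
-- so the query never has to touch the base table
CREATE NONCLUSTERED INDEX IX_Employees_EmployeeID
ON dbo.Employees (EmployeeID)
INCLUDE (FormattedName);
```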
A: 

I've never managed to find much useful information on breaking up large queries; in Oracle it seems you're advised to keep everything together in one query rather than use temp tables. You can also get redo log issues if your temp table holds a lot of data. I'd like to know more / get some links.

A: 

Clauses to avoid (if possible):

  • DISTINCT / UNIQUE
  • UNION
  • GROUP BY
  • ORDER BY
Lluis Martinez