views: 108

answers: 2

How many records are considered normal for a typical SQL Server database table? I mean, if some of the tables in a database contain something like three or four million records, should I consider replacing the hardware, partitioning tables, etc.?

I have a query which joins only two tables and has four conditions in its WHERE clause along with an ORDER BY. This query usually takes 3-4 seconds to execute, but once in every 10 or 20 executions it may take even longer (10 or 20 seconds). I don't think this is related to parameter sniffing, because I am recompiling the query every time.

How can I improve my query to execute in less than a second, and how can I know whether that can be achieved? How can I tell whether increasing the amount of RAM, adding a new hard drive, increasing CPU speed, or improving the indexes on the tables would boost performance? Any advice on this would be appreciated :)

A: 

Unless you're doing some heavy-weight joins, 3-4 million rows do not require any extraordinary hardware. I'd first investigate whether there are appropriate indexes, whether they are being used correctly, etc.
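
For example, something like this will show whether the query is scanning instead of seeking, and which indexes are actually used. This is a minimal sketch; the DMV below is standard SQL Server, but what counts as "appropriate" depends on your own schema:

    -- Turn on I/O and timing output for the session, then run the slow
    -- query and look for large logical-read counts (a sign of scans):
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    -- See how often each index in the current database is actually used:
    SELECT OBJECT_NAME(s.object_id) AS table_name,
           i.name                   AS index_name,
           s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
    FROM   sys.dm_db_index_usage_stats AS s
    JOIN   sys.indexes AS i
           ON  i.object_id = s.object_id
           AND i.index_id  = s.index_id
    WHERE  s.database_id = DB_ID()
    ORDER BY s.user_seeks + s.user_scans DESC;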

Anton Gogolev
+1  A: 

4 million records is not a lot. Even Microsoft Access might manage that.

Even 3-4 seconds for a query is a long time. 95% of the time, performance issues like this come down to one of the following:

  • Lack of appropriate indexes;
  • Poorly written query;
  • A data model that doesn't lend itself to writing performant queries;
  • Unparameterized queries thrashing the query cache;
  • MVCC disabled and you have long-running transactions that are blocking SELECTs (out of the box this is how SQL Server acts; a sketch of the fix follows this list). See Better concurrency in Oracle than SQL Server? for more information on this.
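
On the last two points, a minimal sketch of the fixes (the database, table, and column names here are placeholders, not taken from the question):

    -- Parameterize with sp_executesql so one cached plan is reused,
    -- instead of compiling a fresh plan for every literal value:
    EXEC sys.sp_executesql
        N'SELECT * FROM dbo.Orders WHERE CustomerId = @id ORDER BY OrderDate',
        N'@id INT',
        @id = 42;

    -- Enable row versioning so writers no longer block readers (the
    -- closest SQL Server gets to Oracle-style MVCC); switching it on
    -- needs exclusive access to the database:
    ALTER DATABASE YourDb
        SET READ_COMMITTED_SNAPSHOT ON
        WITH ROLLBACK IMMEDIATE;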

None of which has anything to do with hardware.

Unless the records are enormous or the throughput is extremely high then hardware is unlikely to be the cause or solution to your problem.
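
On the indexing point: for a query shaped like the one described (a two-table join, four WHERE conditions, an ORDER BY), the usual first move is a covering index. A hypothetical sketch, since the real schema isn't shown:

    -- Hypothetical table: equality predicates first in the key, the
    -- ORDER BY column last, and INCLUDE the selected columns so the
    -- query can be answered from the index alone:
    CREATE NONCLUSTERED INDEX IX_Orders_Customer_Status_Date
        ON dbo.Orders (CustomerId, Status, OrderDate)
        INCLUDE (Total, ShippedDate);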

cletus
Thank you very much for your answer! I will post my query in a separate question to discuss its performance. By the way, how many records are considered enormous in SQL Server?
Maysam
It depends on the size of the records and the database throughput. There's no fixed number. I don't think I'd really be concerned until it got to 10s or 100s of millions though.
cletus
The definition of a Very Large Database (VLDB) is subjective/debatable. Wikipedia lists a common benchmark as 1 TB or several billion rows; given that SQL Server has now loaded 1 TB of data in under 30 minutes (http://blogs.msdn.com/sqlperf/archive/2009/03/03/an-etl-world-record-revealed-finally.aspx), the definition looks slightly outdated.
Andrew
cletus, I have posted my query here: http://stackoverflow.com/questions/2086368/please-help-me-with-this-query-sql-server-2008. Could you please take a look at it?
Maysam