I have a read-only database (product) that resides on its own SQL Server 2008 instance.

I have already optimized queries by looking at the most expensive ones in the Activity Monitor report, ordered by CPU cost. I am now at roughly 50 queries/second, and no query takes longer than 300 ms.

CPU time is fine (around 30%), and only 20% of memory is used (out of 64 GB).

There is one issue: disk time sits at a steady 100% (I looked at the idle-time performance counter and used Idera's SQL Diagnostic Manager). I can see that the product DB behaves differently from my order DB, which is on a different machine and has smaller tables: in a Profiler trace, queries against the product DB show values in the "Reads" column higher than 50,000, while in the order DB these values never exceed 1,000. The queries in the product DB use a lot of common table expressions and work on large tables (some have around 5 million rows).

I am not sure whether I should invest time in optimizing queries for I/O performance or just add a server. While optimizing for query duration I already added the missing indexes. Is optimizing for I/O something that is usually done?

+4  A: 

"There's always a next bottleneck," as they say.

Now that you've tuned CPU usage, it's only natural that I/O load emerges as the dominant cost. Is your performance already acceptable? If yes, stop. If not, estimate how many hours you would have to invest in further tuning, and whether buying another server or more hard disks might be cheaper.

Regarding I/O tuning, first see what you can achieve with easy measures. Sometimes you can trade CPU for I/O and vice versa; compression is an example of this. You then tune whichever component is your current bottleneck.
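For instance, SQL Server 2008 (Enterprise edition) supports row and page compression, which spends extra CPU to cut the number of pages read. A hedged sketch, assuming a hypothetical `dbo.ProductDetails` table (substitute one of your large product tables):

```sql
-- Estimate the savings before committing to anything:
EXEC sp_estimate_data_compression_savings
    @schema_name      = 'dbo',
    @object_name      = 'ProductDetails',
    @index_id         = NULL,
    @partition_number = NULL,
    @data_compression = 'PAGE';

-- If the estimate looks worthwhile, rebuild with page compression
-- (trades CPU on access for fewer pages of I/O):
ALTER TABLE dbo.ProductDetails
REBUILD WITH (DATA_COMPRESSION = PAGE);
```

Since your CPU is only at 30% while the disks are pegged, this trade may be exactly the right direction.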

Before you seek to make the I/O faster, try to reduce the I/O that is generated.

Peter G.
+1  A: 

Look for obvious I/O performance improvements in your queries, but more importantly, look at how you can improve I/O performance at the server level.

If your other resources (CPU and memory) aren't overloaded, you probably don't need a new server. Consider adding an SSD for logs and temp files, and/or consider whether you can affordably fit your whole DB onto an array of SSDs.
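As a sketch of the tempdb part of that idea: tempdb file locations can be changed with `ALTER DATABASE ... MODIFY FILE`, and the move takes effect after the next service restart. The drive letter and paths below are placeholders for your SSD volume:

```sql
-- Point tempdb's data and log files at the SSD
-- (logical names tempdev/templog are the defaults; paths are hypothetical):
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'S:\SQLTempDB\tempdb.mdf');

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'S:\SQLTempDB\templog.ldf');
```

Heavy CTE use can spill work into tempdb, so moving it off the contended spindles may help even before you touch the data files.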

Of course, clearing out your disk I/O bottleneck is likely to raise CPU usage, but if your performance is close to acceptable, this will probably improve things to the point that you can stop optimizing for now.

Paul McMillan
+5  A: 

In short, yes: optimize for both CPU and I/O.

Queries with high CPU tend to be doing unnecessary in-memory sorts, (sometimes inefficient) hash joins, or complex logic.

Queries with high IO (Page Reads) tend to be doing full table scans or working in other inefficient ways.

Nine times out of ten, the same queries will be near the top of both lists; but if you've worked through the high-CPU queries and are still unhappy with performance, then by all means work on the high-I/O procs next.
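To build that high-I/O list without running a Profiler trace, you can rank cached statements by logical reads using the `sys.dm_exec_query_stats` DMV (available on SQL Server 2008; counters reset on restart):

```sql
-- Top 10 cached statements by total logical reads since the last restart:
SELECT TOP 10
    qs.total_logical_reads,
    qs.execution_count,
    qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_logical_reads DESC;
```

Sorting by `avg_logical_reads` instead highlights the per-execution hogs rather than the frequently-run ones.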

BradC
A: 

Unless you are using SSDs or a DB-optimized SAN, I/O is almost always the limiting factor in database applications.

So yes, optimize to get rid of it as much as possible.

Table indexes are the first thing to do.
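Since the asker says the "missing" indexes are already added: the optimizer's own missing-index DMVs are a quick way to double-check that nothing obvious remains. A sketch (the impact figure is a rough heuristic, not gospel):

```sql
-- Missing-index suggestions recorded by the optimizer since the last restart:
SELECT TOP 10
    migs.avg_total_user_cost * migs.avg_user_impact
        * (migs.user_seeks + migs.user_scans) AS estimated_improvement,
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns
FROM sys.dm_db_missing_index_group_stats AS migs
JOIN sys.dm_db_missing_index_groups AS mig
    ON migs.group_handle = mig.index_group_handle
JOIN sys.dm_db_missing_index_details AS mid
    ON mig.index_handle = mid.index_handle
ORDER BY estimated_improvement DESC;
```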

Then add as much RAM as you possibly can, up to the total size of your DB files, so that the buffer cache can hold the entire working set.

Then partition your large tables (if that is a reasonable thing to do) so that any necessary table or index scans touch only one or two partitions.
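A minimal sketch of that partitioning step, assuming a hypothetical `dbo.Products` table with a `CreatedDate` column (note that table partitioning is an Enterprise edition feature in SQL Server 2008):

```sql
-- Partition by year so date-bounded scans touch only one partition:
CREATE PARTITION FUNCTION pf_ByYear (datetime)
AS RANGE RIGHT FOR VALUES ('2008-01-01', '2009-01-01', '2010-01-01');

-- Map all partitions to one filegroup here; spread them across
-- filegroups on separate spindles for additional I/O parallelism:
CREATE PARTITION SCHEME ps_ByYear
AS PARTITION pf_ByYear ALL TO ([PRIMARY]);

-- Place the table on the scheme via its clustered index
-- (all names here are placeholders):
CREATE CLUSTERED INDEX cix_Products_CreatedDate
ON dbo.Products (CreatedDate)
ON ps_ByYear (CreatedDate);
```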

Beyond that, you either buy bigger machines with even more RAM, or buy SSDs, a SAN, or a SAN with SSDs.

Alternatively, you rebuild your entire database application on something like NoSQL or database sharding, and implement all your relations, joins, constraints, etc. in a middle interface layer.

Zan Lynx