How would you determine a max value for an alarm on the I/O activity of an Ubuntu/Linux server hosting up to 4 sites running Apache, MySQL, and up to 4 Tomcats? Or, in general (if there is such a thing), what is a suitable operational range for I/O reads/writes per second on a production server with a 7.2K SATA HDD?

What I'm trying to understand is the impact on the disks when reads/writes per second peak. What should I look for in the disk/drive specifications to determine this?

As you can see, I'm "clearly" confused. Any help/direction would be appreciated...

A:

How would you determine a max value for an alarm on the I/O activity of an Ubuntu/Linux server hosting up to 4 sites running Apache, MySQL, and up to 4 Tomcats?

Set it at the value where the expected cost of the problems you're alarming about exceeds the cost of having to pay attention to the alarm.

What number is that? That depends on a lot of things, including:

Which problems are you trying to avoid?

Do you worry about performance? If so, do you worry more about latency or throughput? How's the tradeoff between interactive and batch-job performance?

Do you worry about wear-and-tear and the lifespan of the media? Do you worry about how often you have to restore backups?

Do you worry about the price of the disks? How much value would better disks bring to your operation?

How much can writes be deferred? How much reading is preventable through caching? How lax can you be with respect to isolation (the I in ACID)?

If you really want the best disk for your situation, these are some of the questions you should probably ask yourself. If I were in your situation, I'd pick an inexpensive disk from the low to mid price range and see how it works out. You'll then have experience to learn from, so you'll know what (if anything) to do differently next time, and it won't have cost you much.
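For what it's worth, once you've settled on a number, wiring up the alarm itself is straightforward. Here's a minimal Python sketch that polls /proc/diskstats and complains when reads+writes per second exceed a threshold. The device name ("sda") and the threshold are assumptions you'd tune to your own box; a 7.2K SATA disk tops out at roughly 75-100 random IOPS, so that's a plausible starting point:

```python
#!/usr/bin/env python3
"""Minimal I/O-rate alarm sketch: polls /proc/diskstats and warns when
reads+writes per second on a device exceed a threshold."""
import time

DEVICE = "sda"     # assumption: adjust to your disk
THRESHOLD = 100    # IOPS; ballpark saturation point for a 7.2K SATA HDD
INTERVAL = 10      # seconds between samples

def io_counts(device):
    """Return (reads_completed, writes_completed) from /proc/diskstats."""
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            # fields: major minor name reads_completed ... writes_completed ...
            if fields[2] == device:
                return int(fields[3]), int(fields[7])
    raise ValueError("device %s not found" % device)

r0, w0 = io_counts(DEVICE)
while True:
    time.sleep(INTERVAL)
    r1, w1 = io_counts(DEVICE)
    iops = ((r1 - r0) + (w1 - w0)) / float(INTERVAL)
    if iops > THRESHOLD:
        print("ALARM: %s averaged %.1f IOPS over the last %ds"
              % (DEVICE, iops, INTERVAL))
    r0, w0 = r1, w1
```

In practice you'd hook the alarm into whatever monitoring you already run (Nagios, cron + mail, etc.) rather than printing to stdout.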

Jonas Kölker
A: 

Beware: Linux caches data in memory (which is nice because it's fast, but it will skew your benchmarks).
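If you want to benchmark the disk anyway, you can tell the kernel to drop its caches first, so you time the disk rather than RAM. A rough Python sketch (needs root; the test-file path is just a hypothetical example):

```python
#!/usr/bin/env python3
"""Sketch: defeat the page cache before timing a sequential read."""
import os, time

PATH = "/var/tmp/testfile"   # hypothetical test file, create it beforehand

# Flush dirty pages, then ask the kernel to drop clean caches
# (pagecache, dentries and inodes) via the documented
# /proc/sys/vm/drop_caches interface. Requires root.
os.sync()
with open("/proc/sys/vm/drop_caches", "w") as f:
    f.write("3\n")

start = time.time()
with open(PATH, "rb") as f:
    while f.read(1024 * 1024):   # read in 1 MiB chunks until EOF
        pass
elapsed = time.time() - start
size_mb = os.path.getsize(PATH) / (1024.0 * 1024.0)
print("read %.0f MiB in %.2fs -> %.1f MiB/s" % (size_mb, elapsed, size_mb / elapsed))
```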

IMHO you shouldn't test raw HDD throughput anyway: if the data happens to be laid out contiguously on the physical disk you'll measure more throughput than you'll see in practice, and fragmentation skews the result too... Use some other metric.

In top, the "%wa" statistic tells you how much time the processor spends waiting for data. If it gets high, you are in trouble (you could build a RAID array to increase throughput). top gets this number from /proc/stat, so you can sample it yourself, as in the sketch below.
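A minimal Python sketch: the aggregate "cpu" line in /proc/stat lists jiffies spent in user, nice, system, idle, iowait, and so on, and %wa is just iowait's share of the total over a sampling window (5 seconds is an assumption here):

```python
#!/usr/bin/env python3
"""Compute the iowait percentage the way top does, from /proc/stat."""
import time

def cpu_times():
    with open("/proc/stat") as f:
        # first line: "cpu  user nice system idle iowait irq softirq ..."
        fields = f.readline().split()
    return [int(x) for x in fields[1:]]

t0 = cpu_times()
time.sleep(5)                 # sampling window
t1 = cpu_times()
deltas = [b - a for a, b in zip(t0, t1)]
iowait = deltas[4]            # 5th field of the cpu line is iowait
print("%.1f%%wa over the last 5s" % (100.0 * iowait / sum(deltas)))
```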

Reef