This question is about a whole class of similar problems, but I'll ask it as a concrete example.

I have a server with a file system whose contents fluctuate. I need to monitor the available space on this file system to ensure that it doesn't fill up. For the sake of argument, let's suppose that if it fills up, the server goes down.

It doesn't really matter what it is -- it might, for example, be a queue of "work".

During "normal" operation, the available space varies within "normal" limits, but there may be pathologies:

  • Some other (possibly external) component that adds work may run out of control
  • Some component that removes work seizes up, but remains undetected

The statistical characteristics of the process are basically unknown.

What I'm looking for is an algorithm that takes timed periodic measurements of the available space as input (alternative suggestions for input are welcome) and produces an alarm as output when things are "abnormal" and the file system is "likely to fill up". It is obviously important to avoid false negatives, but almost as important to avoid false positives, to avoid numbing the brain of the sysadmin who gets the alarm.
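To make the shape of the thing concrete, here is a rough sketch (Python) of the interface I have in mind; the names are purely illustrative, and the decision rule is exactly the blank I'm asking to be filled in:

    # Sketch of the desired detector: periodic (timestamp, free_bytes)
    # samples in, "abnormal" alarm out. Names are illustrative only.
    class SpaceMonitor:
        def __init__(self, capacity_bytes):
            self.capacity = capacity_bytes
            self.samples = []                # (timestamp, free_bytes)

        def observe(self, timestamp, free_bytes):
            """Record one periodic measurement."""
            self.samples.append((timestamp, free_bytes))

        def abnormal(self):
            """True when the file system is 'likely to fill up'.
            This decision rule is the open question of this post."""
            raise NotImplementedError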

I appreciate that there are alternative solutions like throwing more storage space at the underlying problem, but I have actually experienced instances where 1000 times wasn't enough.

Algorithms which consider stored historical measurements are fine, although on-the-fly algorithms which minimise the amount of historic data are preferred.
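To give a flavour of what I mean by "on-the-fly": something with O(1) state, such as Holt's double exponential smoothing, would qualify. This is only a sketch (the alpha and beta values are guesses), and whether its projection can tell a spurt from a disaster is exactly the question:

    # O(1)-state trend tracker using Holt's double exponential smoothing.
    # alpha/beta are guesses and would need tuning against real traces.
    class HoltTrend:
        def __init__(self, alpha=0.3, beta=0.1):
            self.alpha, self.beta = alpha, beta
            self.level = None      # smoothed free space
            self.trend = 0.0       # smoothed change per sample interval

        def update(self, free_bytes):
            if self.level is None:
                self.level = float(free_bytes)
                return
            prev = self.level
            self.level = (self.alpha * free_bytes
                          + (1 - self.alpha) * (self.level + self.trend))
            self.trend = (self.beta * (self.level - prev)
                          + (1 - self.beta) * self.trend)

        def samples_until_full(self):
            """Projected sampling intervals until free space reaches
            zero, or None if the smoothed trend is flat or rising."""
            if self.level is None or self.trend >= 0:
                return None
            return self.level / -self.trend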


I have accepted Frank's answer, and am now going back to the drawing-board to study his references in depth.

There are, I think, three cases of interest (in no particular order):

  1. The "Harrods' Sale has just started" scenario: a peak of activity that at one-second resolution is "off the dial", but doesn't represent a real danger of resource depletion;
  2. The "Global Warming" scenario: needing to plan for (relatively) stable growth; and
  3. The "Google is sending me an unsolicited copy of The Index" scenario: this will deplete all my resources in relatively short order unless I do something to stop it.

It's the last one that is, I think, the most interesting and challenging from a sysadmin's point of view.

+1  A: 

If it is actually related to a queue of work, then queueing theory may be the best route to an answer.
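For instance, in the textbook M/M/1 model the expected backlog blows up as utilisation approaches 1, which is exactly your "fills up" failure mode. A toy illustration in Python (the rates are made up):

    # M/M/1: expected number of items in the system is rho / (1 - rho),
    # where rho = arrival_rate / service_rate (utilisation).
    def mm1_expected_backlog(arrival_rate, service_rate):
        rho = arrival_rate / service_rate
        if rho >= 1:
            return float('inf')      # queue grows without bound
        return rho / (1 - rho)

    for lam in (5.0, 9.0, 9.9):      # made-up arrival rates, service = 10
        print(lam, mm1_expected_backlog(lam, 10.0))
    # 5.0 -> 1.0 items, 9.0 -> 9.0 items, 9.9 -> 99.0 items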

For the general case you could perhaps attempt a (multiple?) linear regression on the historical data, to detect whether there is a statistically significant rising trend in resource usage that is likely to lead to problems if it continues. You may also be able to predict how long that would take: set a threshold for 'problem' and use the slope of the trend to estimate the time to reach it. You would have to play around with this, and with the variables you collect, to see whether there is any statistically significant relationship you can discover in the first place.
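A minimal sketch of that idea, using ordinary least squares over the stored history (scipy's linregress; the p-value cut-off, units, and example numbers are arbitrary choices for illustration):

    import numpy as np
    from scipy import stats

    def hours_until_threshold(times_h, free_gb, threshold_gb=0.0, p_max=0.01):
        """Fit a line to (time, free space) history; if there is a
        statistically significant falling trend, extrapolate to the
        'problem' threshold. Returns hours remaining, or None."""
        fit = stats.linregress(times_h, free_gb)
        if fit.slope >= 0 or fit.pvalue > p_max:
            return None                  # no significant falling trend
        current = fit.intercept + fit.slope * times_h[-1]
        return (current - threshold_gb) / -fit.slope

    # Made-up example: losing ~2 GB/hour starting from 100 GB free.
    t = np.arange(24.0)
    free = 100.0 - 2.0 * t + np.random.normal(0, 1.0, t.size)
    print(hours_until_threshold(t, free, threshold_gb=10.0))  # ~22 hours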

Although it covers a completely different topic (global warming), I've found tamino's blog (tamino.wordpress.com) to be a very good resource on statistical analysis of data that is full of knowns and unknowns. For example, see this post.

edit: As per my comment, I think the problem is somewhat analogous to the GW problem: you have short-term bursts of activity which average out to zero, with the long-term trends you are actually interested in superimposed on them. There is probably also more than one long-term trend, and it changes from time to time.

Tamino describes a technique which may be suitable for this, but unfortunately I cannot find the post I'm thinking of. It involves sliding regressions along the data (imagine multiple lines fitted to noisy data) and letting the data pick the inflection points. If you could do this, you could perhaps identify a significant change in the trend. Unfortunately it may only be identifiable after the fact, as you may need to accumulate a lot of data before the change becomes significant. But it might still be in time to head off resource depletion, and at the least it may give you a robust way to determine what kind of safety margin and reserve resources you need in future.
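I can't find the post, but the rough shape of the technique as I remember it can be sketched as sliding a regression window along the series and flagging points where the fitted slope jumps. Everything below (window size, jump threshold) is guesswork:

    import numpy as np

    def sliding_slopes(values, window=60):
        """Least-squares slope inside each window position; abrupt jumps
        in this series suggest a change in the underlying trend (the
        inflection points are 'picked by the data')."""
        x = np.arange(window)
        return np.array([np.polyfit(x, values[i:i + window], 1)[0]
                         for i in range(len(values) - window + 1)])

    def trend_breaks(values, window=60, jump=3.0):
        """Indices where the windowed slope moves more than `jump` robust
        standard deviations from its median: a crude changepoint flag."""
        s = sliding_slopes(values, window)
        mad = np.median(np.abs(s - np.median(s))) or 1e-12
        return np.where(np.abs(s - np.median(s)) > jump * 1.4826 * mad)[0]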

frankodwyer
+1 Frank, thanks for your thoughtful reaction. I've explored queueing theory, statistical smoothing, Kalman filtering, and so forth. But the big question that remains is how to distinguish between a sudden spurt of activity and a pending disaster.
Brent.Longborough
BTW, super link to the article on Tamino's blog.
Brent.Longborough
I think the problem is analogous to the problems that tamino deals with. A sudden spurt in activity is like 'changes in weather' (short-term noise), and what you want to identify is like 'changes in climate' (a longer-term trend). So there may be techniques you can carry over from the GW problem.
frankodwyer
tamino also describes a technique that might be relevant, but unfortunately I cannot find the post I'm thinking of. The comment length limit is a little too short to describe it, so I will add it to my answer.
frankodwyer