A: 

I don't sweat it; it's a very small part of the application. I tell the user what's going on and let them go do something else.

altCognito
+2  A: 

Your question is a good one. If the problem can be broken up into discrete units, computing an accurate estimate often works best. Unfortunately this may not be the case: even if you are installing 50 components, each one might be 2%, but one of them can be massive. One thing I have had moderate success with is clocking the CPU and disk and giving a decent estimate based on observational data. Knowing that certain checkpoints are really point x allows you some opportunity to correct for environmental factors (network, disk activity, CPU load). However, this solution is not general in nature due to its reliance on observational data. Using ancillary data such as RPM file size helped me make my progress bars more accurate, but they are never bulletproof.

ojblass
+2  A: 

In certain instances, when you need to perform the same task on a regular basis, it might be a good idea to use past completion times to average against.

For example, I have an application that loads the iTunes library via its COM interface. The size of a given iTunes library generally does not increase dramatically from launch to launch in terms of the number of items, so in this example it might be possible to track the last three load times and load rates, average against them, and compute your current ETA.
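A minimal sketch of that idea in Python (illustrative names; the history would be persisted between launches):

# history: durations (seconds) of the last three loads, loaded from storage
history = [42.0, 45.5, 44.1]

def estimate_remaining(fraction_done, history):
    """Scale the average past total duration by the work remaining."""
    if not history or fraction_done <= 0:
        return None  # no basis for an estimate yet
    expected_total = sum(history) / len(history)
    return expected_total * (1.0 - fraction_done)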

This would be hugely more accurate than an instantaneous measurement and probably more consistent as well.

However, this method depends upon the size of the task being relatively similar to the previous ones, so it would not work for a decompression routine or anything else where any given byte stream is the data to be crunched.

Just my $0.02

Eric Smith
+6  A: 

Original Answer

The company that created this site apparently makes a scheduling system that answers this question in the context of employees writing code. The way it works is with a Monte Carlo simulation of the future based on the past.

Appendix: Explanation of Monte Carlo

This is how this algorithm would work in your situation:

You model your task as a sequence of microtasks, say 1000 of them. Suppose an hour later you have completed 100 of them. Now you run the simulation for the remaining 900 steps by randomly selecting 90 completed microtasks, adding their times and multiplying by 10. This gives you one estimate; repeat N times and you have N estimates for the time remaining. Note that the average of these estimates will be about 9 hours -- no surprises here. But by presenting the resulting distribution to the user you'll honestly communicate the odds, e.g. 'with 90% probability this will take another 3 to 15 hours'.
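Sketched in Python (illustrative names; the variant above draws a tenth of the remaining steps and scales by 10, which is the same resampling idea):

import random

def remaining_time_range(completed_times, remaining_steps, n_sims=1000):
    """Resample observed microtask times to simulate the remaining work."""
    estimates = sorted(
        sum(random.choices(completed_times, k=remaining_steps))
        for _ in range(n_sims)
    )
    # Report e.g. the 5th and 95th percentiles as the likely range.
    return estimates[int(0.05 * n_sims)], estimates[int(0.95 * n_sims)]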

This algorithm, by definition, produces a correct result if the task in question can be modeled as a bunch of independent, random microtasks. You can gain a better answer only if you know how the task deviates from this model: for example, installers typically have a download/unpacking/installing task list, and the speed of one phase cannot predict the next.

Appendix: Simplifying Monte Carlo

I'm not a statistics guru, but I think if you look closer into the simulation in this method, it will always return a normal distribution, being a sum of a large number of independent random variables. Therefore, you don't need to perform it at all. In fact, you don't even need to store all the completed times, since you'll only need their sum and the sum of their squares.

In maybe not very standard notation,

n = elapsedSteps           // number of completed microtasks
sigma = sqrt ( sum_of_times_squared - sum_of_times^2 / n )
scaling = 900/100          // that is (totalSteps - elapsedSteps) / elapsedSteps
lowerBound = sum_of_times*scaling - 3*sigma*sqrt(scaling)
upperBound = sum_of_times*scaling + 3*sigma*sqrt(scaling)

With this, you can output the message saying that the thing will end between [lowerBound, upperBound] from now with some fixed probability (three sigmas is about 99.7%; drop to 2*sigma for roughly 95%).
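A sketch of that closed-form version in Python, keeping only the running sum and sum of squares (illustrative names; z=2.0 gives the roughly-95% band):

import math

class EtaEstimator:
    """Keep only the running sum and sum of squares of completed step times."""
    def __init__(self, total_steps):
        self.total = total_steps
        self.n = 0      # steps completed
        self.s = 0.0    # sum of step times
        self.ss = 0.0   # sum of squared step times

    def record(self, step_time):
        self.n += 1
        self.s += step_time
        self.ss += step_time * step_time

    def bounds(self, z=2.0):
        """(lower, upper) bound on remaining time, z sigmas wide."""
        if self.n == 0 or self.n == self.total:
            return None
        scaling = (self.total - self.n) / self.n
        sigma = math.sqrt(max(self.ss - self.s * self.s / self.n, 0.0))
        center = self.s * scaling
        half = z * sigma * math.sqrt(scaling)
        return max(center - half, 0.0), center + half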

ilya n.
+1 for saying Monte Carlo. (=
sybreon
Here's the link to the blog entry you referenced: http://www.joelonsoftware.com/items/2007/10/26.html
Soldier.moth
Also, I was just wondering why somebody marked the answer (before this expansion) down... No offense, just how can I make this post better?
ilya n.
plus one for going the extra mile and thinking about the problem.
ojblass
This is a bootstrap estimate.
piccolbo
+1  A: 

I always wish these things would tell me a range. If it said, "This task will most likely be done in between 8 and 30 minutes," then I have some idea of what kind of break to take. If it's bouncing all over the place, I'm tempted to watch it until it settles down, which is a big waste of time.

Nosredna
+2  A: 

I usually use an Exponential Moving Average to compute the speed of an operation with a smoothing factor of say 0.1 and use that to compute the remaining time. This way all the measured speeds have influence on the current speed, but recent measurements have much more effect than those in the distant past.

In code it would look something like this:

alpha = 0.1 # smoothing factor
...
speed = (speed * (1 - alpha)) + (currentSpeed * alpha)

If your tasks are uniform in size, currentSpeed would simply be the time it took to execute the last task. If the tasks have different sizes and you know that one task is supposed to be, e.g., twice as long as another, you can divide the time it took to execute the task by its relative size to get the current speed. Using speed you can compute the remaining time by multiplying it by the total size of the remaining tasks (or just by their number if the tasks are uniform).
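A fuller sketch of the above (illustrative names; note that "speed" here is really seconds per task, so the ETA is a multiplication):

ALPHA = 0.1  # smoothing factor: weight given to the newest measurement

def update_pace(ema_pace, last_task_seconds):
    """Exponentially weighted average of seconds-per-task (the 'speed' above)."""
    if ema_pace is None:
        return last_task_seconds  # seed with the first measurement
    return ema_pace * (1 - ALPHA) + last_task_seconds * ALPHA

def eta_seconds(ema_pace, tasks_remaining):
    return None if ema_pace is None else ema_pace * tasks_remaining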

Hopefully my explanation is clear enough, it's a bit late in the day.

gooli
Something along these lines is a good idea. But it likely could cause big problems if your update ticks are irregularly spaced. Perhaps make the "alpha" smoothing factor a function of the time since the last update, like alpha = 1 - exp(-C*TimeSinceLastUpdate)? And maybe C should vary itself based on percentage of completion?
SPWorley
+2  A: 

Here's what I've found works well! For the first 50% of the task, you assume the rate is constant and extrapolate. The time prediction is very stable and doesn't bounce much.

Once you pass 50%, you switch computation strategy. You take the fraction of the job left to do (1-p), then look back in a history of your own progress and find (by binary search and linear interpolation) how long it took you to do the last (1-p) of the work, and use that as your estimated time to completion.

So if you're now 71% done, you have 29% remaining. You look back in your history and find how long ago you were at (71-29=42%) completion. Report that time as your ETA.

This is naturally adaptive. If you have X amount of work to do, it looks only at the time it took to do the X amount of work. At the end when you're at 99% done, it's using only very fresh, very recent data for the estimate.

It's not perfect of course but it smoothly changes and is especially accurate at the very end when it's most useful.
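A minimal sketch of the lookback half of that strategy (hypothetical names; `history` is a list of (elapsed_seconds, fraction_done) samples recorded as the task runs, with fractions strictly increasing from (0.0, 0.0)):

import bisect

def eta_by_lookback(history, now_seconds, fraction_done):
    """Past 50%: report how long the previous (1 - p) of progress took."""
    target = fraction_done - (1.0 - fraction_done)   # e.g. 0.71 -> 0.42
    if target <= 0.0:
        return None  # not past 50% yet; extrapolate at a constant rate instead
    fractions = [f for _, f in history]
    i = max(bisect.bisect_left(fractions, target), 1)
    (t0, f0), (t1, f1) = history[i - 1], history[i]
    # Linear interpolation between the two samples straddling the target.
    t_at_target = t0 + (t1 - t0) * (target - f0) / (f1 - f0)
    return now_seconds - t_at_target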

SPWorley
A: 

First off, it helps to generate a running moving average. This weights more recent events more heavily.

To do this, keep a bunch of samples around (circular buffer or list), each a pair of progress and time. Keep the most recent N seconds of samples. Then generate a weighted average of the samples:

totalProgress += (curSample.progress - prevSample.progress) * scaleFactor
totalTime += (curSample.time - prevSample.time) * scaleFactor

where scaleFactor rises linearly from 0 to 1 as the samples get more recent (thus weighting recent samples more heavily). You can play around with this weighting, of course.

At the end, you can get the average rate of change:

 averageProgressRate = (totalProgress / totalTime);

You can use this to figure out the ETA by dividing the remaining progress by this number.
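A sketch of that buffer-and-weights scheme in Python (illustrative names; module-level state to keep it short):

from collections import deque

WINDOW_SECONDS = 10.0        # keep the most recent N seconds of samples
samples = deque()            # (time_seconds, progress) pairs, oldest first

def add_sample(t, progress):
    samples.append((t, progress))
    while samples and t - samples[0][0] > WINDOW_SECONDS:
        samples.popleft()    # evict samples older than the window

def average_progress_rate(now):
    """Weighted rate of change; newer sample pairs get weights closer to 1."""
    total_progress = total_time = 0.0
    pairs = list(samples)
    for prev, cur in zip(pairs, pairs[1:]):
        scale = max(0.0, 1.0 - (now - cur[0]) / WINDOW_SECONDS)
        total_progress += (cur[1] - prev[1]) * scale
        total_time += (cur[0] - prev[0]) * scale
    return total_progress / total_time if total_time > 0 else None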

However, while this gives you a good trending number, you have one other issue - jitter. If, due to natural variations, your rate of progress moves around a bit (it's noisy) - e.g. maybe you're using this to estimate file downloads - you'll notice that the noise can easily cause your ETA to jump around, especially if it's pretty far in the future (several minutes or more).

To avoid jitter from affecting your ETA too much, you want this average rate of change number to respond slowly to updates. One way to approach this is to keep around a cached value of averageProgressRate, and instead of instantly updating it to the trending number you've just calculated, you simulate it as a heavy physical object with mass, applying a simulated 'force' to slowly move it towards the trending number. With mass, it has a bit of inertia and is less likely to be affected by jitter.

Here's a rough sample:

// desiredAvgProgressRate is computed from the weighted average above
// m_averageProgressRate is a member variable, also in progress units/sec
// lastTimeElapsed = the time delta in seconds (since last simulation)
// m_averageSeekSpeed is a member variable in units/sec^2, used to hold
// the velocity of m_averageProgressRate


const float frictionCoeff = 0.75f;
const float mass = 4.0f;
const float maxSpeedCoeff = 0.25f;

// lose 25% of our speed per sec, simulating friction
m_averageSeekSpeed *= pow(frictionCoeff, lastTimeElapsed); 

float delta = desiredAvgProgressRate - m_averageProgressRate;

// update the velocity
float oldSpeed = m_averageSeekSpeed;
float accel = delta / mass;    
m_averageSeekSpeed += accel * lastTimeElapsed;  // v += at

// clamp the top speed to 25% of our current value
float sign = (m_averageSeekSpeed > 0.0f ? 1.0f : -1.0f);
float maxVal = m_averageProgressRate * maxSpeedCoeff;
if (fabs(m_averageSeekSpeed) > maxVal)
{
 m_averageSeekSpeed = sign * maxVal;
}

// make sure they have the same sign
if ((m_averageSeekSpeed > 0.0f) == (delta > 0.0f))
{
 float adjust = (oldSpeed + m_averageSeekSpeed) * 0.5f * lastTimeElapsed;

 // don't overshoot.
 if (fabs(adjust) > fabs(delta))
 {
    adjust = delta;
            // apply damping
    m_averageSeekSpeed *= 0.25f;
 }

 m_averageProgressRate += adjust;
}    
Scott S