I think this problem is essentially unsolvable in the general case, but you can produce reasonably accurate estimates with a bit more knowledge of the process that is executing. And where large unknowns remain, it is better to inform the user of them so they can be taken into account.
To take the simple example of downloading a batch of files, you have two known variables:
- The number of files
- The size of the files
For each file there is a constant overhead (the time it takes to establish a connection, plus the time it takes to open a file on the file system). There is also the obvious transfer time, proportional to the size of the files. Writing a function that expresses this as time remaining in terms of the current download speed is easy, and it is accurate provided the download speed doesn't fluctuate too much. But therein lies the problem.
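A minimal sketch of that calculation, in Python. The per-file overhead constant and all names here are illustrative assumptions, not measurements from any real system:

```python
PER_FILE_OVERHEAD_S = 0.5  # assumed cost of connecting + opening each file

def eta_seconds(files_remaining: int, bytes_remaining: int,
                speed_bytes_per_s: float) -> float:
    """Estimate time remaining: a constant overhead per file,
    plus transfer time at the currently observed speed."""
    if speed_bytes_per_s <= 0:
        return float("inf")  # no speed signal yet; can't estimate
    overhead = files_remaining * PER_FILE_OVERHEAD_S
    transfer = bytes_remaining / speed_bytes_per_s
    return overhead + transfer
```

The estimate is only as good as the speed you plug in, which is exactly the fluctuation problem described above.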
With an accurate model of the operation you are performing, it is easy to predict how long it will take provided there are no outside influences. And that is rarely the case.
However, you could go for a solution that attempts to understand and explain these outside influences. The user may find it helpful to be alerted when the speed changes dramatically, as they can adjust their plans to fit the new ETA. It may also be helpful to explain which factors are affecting the current operation, e.g.:
> Your download will complete in 6 minutes, if the download speed stays at 50k/s
This allows the user to make some educated guesses if they know that speeds are likely to change, and ultimately leads to less frustration.
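One way to implement both ideas is to smooth the observed speed (so the message doesn't flicker) while flagging genuinely dramatic changes. This is a sketch under assumed thresholds; the class name, the 0.3 smoothing weight, and the 2× alert ratio are all hypothetical choices:

```python
class SpeedTracker:
    """Smooths speed samples with an exponential moving average
    and flags dramatic changes worth alerting the user about."""

    def __init__(self, alpha: float = 0.3, alert_ratio: float = 2.0):
        self.alpha = alpha              # weight given to the newest sample
        self.alert_ratio = alert_ratio  # change factor considered "dramatic"
        self.smoothed = None

    def update(self, sample_bytes_per_s: float) -> bool:
        """Feed a new speed sample; return True if the change is dramatic."""
        if self.smoothed is None:
            self.smoothed = sample_bytes_per_s
            return False
        ratio = max(sample_bytes_per_s, 1e-9) / max(self.smoothed, 1e-9)
        dramatic = ratio > self.alert_ratio or ratio < 1 / self.alert_ratio
        self.smoothed = (self.alpha * sample_bytes_per_s
                         + (1 - self.alpha) * self.smoothed)
        return dramatic

def status_message(bytes_remaining: int, speed_bytes_per_s: float) -> str:
    """Format the hedged ETA message shown above (assumes a positive speed)."""
    minutes = bytes_remaining / speed_bytes_per_s / 60
    kbps = speed_bytes_per_s / 1024
    return (f"Your download will complete in {minutes:.0f} minutes, "
            f"if the download speed stays at {kbps:.0f}k/s")
```

Feeding `SpeedTracker.update()` a sample every second or so gives you a stable speed for the ETA, and the boolean tells you when to refresh the message prominently rather than quietly.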