A: 

I would leave the alpha value alone, and fill in the missing data.

Since you don't know what happens during the time when you can't sample, you can fill those samples with 0s, or hold the previous value stable and use those values for the EMA. Or use some backward interpolation: once you have a new sample, fill in the missing values and recompute the EMA.

What I am trying to get at is that you have an input x[n] which has holes. There is no way to get around the fact that you are missing data. So you can use a zero-order hold, set the missing samples to zero, or use some kind of interpolation between x[n] and x[n+M], where M is the number of missing samples and n the start of the gap. Possibly even using values before n.
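To make the options concrete, here is a minimal sketch (mine, not from the answer) assuming evenly spaced samples with the holes marked as NaN and the first sample present:

```python
import numpy as np

def ema(x, alpha):
    # Standard EMA: s[n] = alpha*x[n] + (1 - alpha)*s[n-1]
    s = np.empty_like(x, dtype=float)
    s[0] = x[0]
    for n in range(1, len(x)):
        s[n] = alpha * x[n] + (1 - alpha) * s[n - 1]
    return s

def fill_gaps(x):
    # Two of the fills discussed above; assumes x[0] is not NaN.
    x = np.asarray(x, dtype=float)
    idx = np.arange(len(x))
    known = ~np.isnan(x)
    # Zero-order hold: carry the last known value across the gap.
    zoh = x[known][np.searchsorted(idx[known], idx, side='right') - 1]
    # Backward-looking linear interpolation between x[n] and x[n+M].
    lin = np.interp(idx, idx[known], x[known])
    return zoh, lin

x = np.array([1.0, 2.0, np.nan, np.nan, np.nan, 5.0])
zoh, lin = fill_gaps(x)
print(ema(zoh, 0.3))  # EMA over the zero-order-hold fill
print(ema(lin, 0.3))  # EMA over the interpolated fill
```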

freespace
From spending an hour or so mucking about a bit with the math for this, I think that simply varying the alpha will actually give me the proper interpolation between the two points that you talk about, but in a much simpler way. Further, I think that varying the alpha will also properly deal with samples taken between the standard sampling intervals. In other words, I'm looking for what you described, but trying to use math to figure out the simple way to do it.
Curt Sampson
I don't think there is such a beast as "proper interpolation". You simply don't know what happened in the time you are not sampling. Good and bad interpolation implies some knowledge of what you missed, since you need to measure against that to judge whether an interpolation is good or bad. That said, you can place constraints, e.g. maximum acceleration, speed, etc. I think if you do know how to model the missing data, then you would just model the missing data, then apply the EMA algorithm with no change, rather than changing alpha. Just my 2c :)
freespace
This is exactly what I was getting at in my edit to the question 15 minutes ago: "You simply don't know what happened in the time you are not sampling," but that's true even if you sample at every designated interval. Thus my Nyquist contemplation: so long as you know the waveform doesn't change direction more than once every couple of samples, the actual sample interval shouldn't matter, and should be able to vary. The EMA equation seems to me to calculate exactly as if the waveform changed linearly from the last sample value to the current one.
Curt Sampson
I don't think that is quite true. Nyquist's theorem requires a minimum of 2 samples per period to be able to uniquely identify the signal. If you don't do that, you get aliasing. It would be the same as sampling at f_s1 for a time, then at f_s2, then back to f_s1, and you get aliasing in the data from the stretch sampled at f_s2 if f_s2 is below the Nyquist limit. I also must confess I do not understand what you mean by "waveform changes linearly from last sample to current one". Could you please explain? Cheers, Steve.
freespace
Right. Assume my nominal sample rate is, say, 250 samples per period, but it might go down as low as a dozen samples per period. That still leaves me with a plenty high sampling frequency, I reckon.
Curt Sampson
I've updated the question to discuss the "linear" behaviour of an EMA.
Curt Sampson
A: 

This is similar to an open problem on my todo list. I have one scheme worked out to some extent, but I do not yet have the mathematical working to back this suggestion.

Update & summary: I would like to keep the smoothing factor (alpha) independent of the compensation factor (which I refer to as beta here). Jason's excellent answer, already accepted here, works great for me.

First step.

  • If you can also measure the time since the last sample was taken (in rounded multiples of your constant sampling time -- so 7.8 ms since the last sample would be 8 units), you can apply the smoothing multiple times: apply the formula 8 times in this case (see the sketch after this list). You have effectively biased the smoothing more towards the current value.

Second step.

  • To get a better smoothing, we need to tweak the alpha while applying the formula 8 times as in the previous case.

What will this smoothing approximation miss?

  • It has already missed 7 samples in the example above
  • This was approximated in step 1 with a flattened re-application of the current value an additional 7 times
  • If we define an approximation factor beta that will be applied along with alpha (as alpha*beta instead of just alpha), we will be assuming that the 7 missed samples were changing smoothly between the previous and current sample values.
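As a sketch of the two steps (assuming the gap is a whole number of nominal periods; the beta default is a placeholder, since this answer deliberately leaves beta's exact form open):

```python
def ema_catchup(s_prev, y, alpha, k, beta=1.0):
    # Step 1: apply the EMA update k times with the same late sample y
    # (k = rounded multiple of the nominal period, e.g. 8 for 7.8 ms).
    # Step 2: scale alpha by the compensation factor beta; beta = 1.0
    # reproduces plain step 1.
    a = alpha * beta
    s = s_prev
    for _ in range(k):
        s = a * y + (1 - a) * s
    return s

print(ema_catchup(s_prev=1.0, y=2.0, alpha=0.3, k=8))
```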
nik
I did think about this, but a bit of mucking about with the math got me to the point where I believe that, rather than applying the formula eight times with the sample value, I can do a calculation of a new alpha that will allow me to apply the formula once, and give me the same result. Further, this would automatically deal with the issue of samples offset from exact sample times.
Curt Sampson
The single application is fine. What I am not sure about yet is how good the approximation of the 7 missing values is. If the continuous movement makes the value jitter a lot across the 8 milliseconds, the approximation may be quite far from reality. But then, if you are sampling at 1 ms (the highest resolution, excluding the delayed samples), you have already decided that jitter within 1 ms is not relevant. Does this reasoning work for you? (I am still trying to convince myself.)
nik
Oh, wait, are you saying that you can compute a new alpha constant that can be used always regardless of the delay in sampling? I feel that is unlikely.
nik
I'm saying that one can calculate a new alpha for any interval based on the reference alpha and the difference between the actual interval and the reference interval.
Curt Sampson
Right. That is the factor beta from my description. A beta factor would be computed based on the difference interval and the current and previous samples. The new alpha will be (alpha*beta), but it will be used only for that sample. While you seem to be 'moving' the alpha in the formula, I tend towards a constant alpha (smoothing factor) and an independently computed beta (a tuning factor) that compensates for the samples missed just now.
nik
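For what it's worth, here is a sketch of the single-application idea being debated in this exchange, under the assumption that the one-shot alpha must reproduce k repeated applications of the nominal update; the fractional exponent for off-grid samples is my extrapolation of the comments, not something either commenter wrote out:

```python
def effective_alpha(alpha, dt, t_ref):
    # k applications of s = a*y + (1-a)*s with the same y give
    # s = y + (1-a)**k * (s_old - y), so one application with
    # a_eff = 1 - (1-alpha)**k is equivalent. Letting k = dt/t_ref
    # be fractional also covers samples arriving off the nominal grid.
    return 1.0 - (1.0 - alpha) ** (dt / t_ref)

def ema_update(s_prev, y, alpha, dt, t_ref):
    a = effective_alpha(alpha, dt, t_ref)
    return a * y + (1 - a) * s_prev

# Sanity check: one update spanning 8*t_ref equals 8 nominal updates.
s = 1.0
for _ in range(8):
    s = 0.3 * 2.0 + 0.7 * s
print(s, ema_update(1.0, 2.0, alpha=0.3, dt=8.0, t_ref=1.0))
```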
+1  A: 
Curt Sampson
A: 
balpha
I would think I can interpolate my data: given that I'm sampling it at discrete intervals, I'm already doing so with a standard EMA! Anyway, assume that I need a "proof" that it works as well as a standard EMA, which will also produce an incorrect result if the values are not changing fairly smoothly between sample periods.
Curt Sampson
But that's what I'm saying: If you consider the EMA an interpolation of your values, you're done if you leave alpha as it is (because inserting the most recent average as Y doesn't change the average). If you say you need something that "works as well as a standard EMA" -- what's wrong with the original? Unless you have more information about the data you're measuring, any local adjustments to alpha will be at best arbitrary.
balpha
So you're saying that changing from, say, 1 to 2 over 1 second or 10 seconds should have the same effect on a 100 second moving average?
Curt Sampson
If you fill in the missing values with the value of the current moving average, that's exactly what happens, because S_new = alpha * Y + (1-alpha) * S_old = alpha * S_old + (1-alpha) * S_old = S_old.
balpha
Right, which is why I believe you don't want to do it that way. Intuitively, a moving average does not consider the signal to have been constant at the previous average from t(n) to t(n+1) with a sudden change to the new sample at t(n+1); if it did, it would have to change the average much less than it does, because the signal would have been at a level different from the previous average for only an infinitesimal length of time.
Curt Sampson
As an example, consider S0 = 1, Y0 = 2, alpha = 0.5. The new average after sample Y0, S1, is 1.5. That is a reasonable average if the signal moved steadily from 1 to 2 over the time period; it is not reasonable if the signal stayed at 1 until just before the time period finished, and then suddenly jumped to 2.
Curt Sampson
What you're describing is linear interpolation of the measured values. If you consider that appropriate, why don't you calculate the EMA at constant intervals, taking Y(t) = Y_n + (Y_{n+1} - Y_n) * (t - t_n) / (t_{n+1} - t_n), where n and n+1 are the closest measurements before and after the time t = i*d, where d is the interval and i a natural number?
balpha
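A sketch of this resampling approach, assuming the raw measurements arrive as (time, value) lists and the EMA runs at a constant interval d; the names are illustrative:

```python
def ema_resampled(times, values, alpha, d):
    # Resample the irregular measurements by linear interpolation at
    # t = i*d (i = 1, 2, ...), then run a standard constant-interval EMA.
    # Assumes times is sorted, starts at 0, and has at least two entries.
    s = values[0]
    n = 0
    i = 1
    while i * d <= times[-1]:
        t = i * d
        while times[n + 1] < t:  # advance to the pair bracketing t
            n += 1
        frac = (t - times[n]) / (times[n + 1] - times[n])
        y = values[n] + (values[n + 1] - values[n]) * frac  # Y(t)
        s = alpha * y + (1 - alpha) * s
        i += 1
    return s

print(ema_resampled([0.0, 1.0, 3.5, 4.0], [1.0, 2.0, 2.0, 5.0],
                    alpha=0.5, d=1.0))
```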
A: 

Let's say we would like to make an exponentially decaying average of a continuous function. However, we don't have all the values of that function, only a few samples. These formulas compute a weighted average of the samples that we do have, using the weights they would have in the continuous average.

Multiplier[n] = Alpha^(Time[n] - Time[n-1])

Sum[n] = Val[n] + Sum[n-1] * Multiplier[n]

Count[n] = 1 + Count[n-1] * Multiplier[n]

Avg[n] = Sum[n] / Count[n]
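A minimal streaming sketch of these recurrences (assuming Alpha is a per-unit-time decay factor strictly between 0 and 1; the class shape is illustrative):

```python
class DecayingAverage:
    # Exponentially decaying average over irregularly timed samples:
    # every existing contribution is discounted by Alpha**dt before the
    # new sample is added with weight 1.
    def __init__(self, alpha):
        self.alpha = alpha        # per-unit-time decay, 0 < alpha < 1
        self.sum = 0.0            # Sum[n]
        self.count = 0.0          # Count[n]
        self.last_time = None

    def add(self, time, value):
        dt = 0.0 if self.last_time is None else time - self.last_time
        multiplier = self.alpha ** dt   # Multiplier[n]
        self.sum = value + self.sum * multiplier
        self.count = 1.0 + self.count * multiplier
        self.last_time = time
        return self.sum / self.count    # Avg[n]

avg = DecayingAverage(alpha=0.5)
for t, v in [(0.0, 1.0), (1.0, 2.0), (3.0, 4.0)]:
    print(avg.add(t, v))
```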

yairchu
Check http://stackoverflow.com/editing-help, http://stackoverflow.com/questions/31657/what-html-tags-are-allowed-on-stack-overflow
nik
You can also have a look at the source code of one of the posts: http://stackoverflow.com/revisions/2552efdd-a0ea-44af-92c0-1889c7d34e97/view-source
sth
I use HTML `sup` and `sub` tags to do superscripts and subscripts, and use a `*` at the beginning of an equation, with a blank line above and below.
Curt Sampson
+1  A: 
sth
This looks like more or less the solution I had in mind. Unfortunately, I can't quite follow the proof just now, but I'll sit down and look at this more closely in the next day or two.
Curt Sampson
+7  A: 
Jason S
Yes, this exactly solves my problem, which was basically to introduce delta-t into the equation. I greatly appreciate the extra implementation hints too, as well as the concise alternative description, "single-pole low-pass filter."
Curt Sampson