Exponential Moving Average Sampled at Varying Times

cjs · Jun 21, 2009 · Viewed 17.1k times

I have a continuous value for which I'd like to calculate an exponential moving average. Normally I'd just use the standard formula for this:

  • S_n = α Y + (1-α) S_{n-1}

where S_n is the new average, α is the smoothing factor, Y is the new sample, and S_{n-1} is the previous average.
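
In code, that fixed-interval update might look something like this C sketch (the struct and names are just illustrative):

    /* Fixed-interval EMA sketch; struct and function names are illustrative. */
    typedef struct {
        double s;      /* current average S_n              */
        double alpha;  /* smoothing factor, 0 < alpha <= 1 */
    } ema;

    double ema_update(ema *e, double y)
    {
        e->s = e->alpha * y + (1.0 - e->alpha) * e->s;  /* S_n = alpha*Y + (1-alpha)*S_{n-1} */
        return e->s;
    }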

Unfortunately, due to various issues I don't have a consistent sample time. I may know I can sample at most, say, once per millisecond, but due to factors out of my control I may not be able to take a sample for several milliseconds at a time. A likely more common case, however, is that I simply sample a bit early or late: instead of sampling at 0, 1, and 2 ms, I sample at 0, 0.9, and 2.1 ms. I do anticipate that, regardless of delays, my sampling frequency will be far, far above the Nyquist limit, and thus I need not worry about aliasing.

I reckon that I can deal with this in a more-or-less reasonable way by varying the alpha appropriately, based on the length of time since the last sample.

Part of my reasoning that this will work is that the EMA "interpolates linearly" between the previous data point and the current one. Consider calculating an EMA of the following list of samples at interval t: [0, 1, 2, 3, 4]. We should get the same result if we use interval 2t, where the inputs become [0, 2, 4], right? If the EMA had assumed that, at t_2, the value had been 2 since t_0, that would be the same as the interval-t calculation operating on [0, 2, 2, 4, 4], which it's not doing. Or does that make sense at all?

Can someone tell me how to vary the alpha appropriately? "Please show your work." I.e., show me the math that proves that your method really is doing the right thing.

Answer

Jason S · Jun 22, 2009

This answer is based on my good understanding of low-pass filters (an "exponential moving average" is really just a single-pole low-pass filter), but my hazy understanding of what you're looking for. I think the following is what you want:

First, you can simplify your equation a little bit (looks more complicated but it's easier in code). I'm going to use "Y" for output and "X" for input (instead of S for output and Y for input, as you have done).

Y_n = α X + (1-α) Y_{n-1}   →   Y_n = Y_{n-1} + α (X - Y_{n-1})

which codes to:

    Y += alpha * (X - Y);

Second, the value of α here is "equal" to 1 - e^{-Δt/τ}, where Δt is the time between samples and τ is the time constant of the low-pass filter. I say "equal" in quotes because this works well when Δt/τ is small compared to 1, so that α = 1 - e^{-Δt/τ} ≈ Δt/τ. (But not too small: you'll run into quantization issues, and unless you resort to some exotic techniques you usually need an extra N bits of resolution in your state variable Y, where N = -log2(α).) For larger values of Δt/τ the filtering effect starts to disappear, until you get to the point where α is close to 1 and you're basically just assigning the input to the output.
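
In C, one such update step might look like the following sketch (function and parameter names are mine; dt and tau are assumed to be in the same units):

    #include <math.h>

    /* One EMA / low-pass step with an irregular sample interval.
       y  = previous output, x = new input sample,
       dt = time since the previous sample, tau = filter time constant.
       Returns the updated output.                                      */
    double ema_step(double y, double x, double dt, double tau)
    {
        double alpha = 1.0 - exp(-dt / tau);  /* alpha = 1 - e^(-dt/tau) */
        return y + alpha * (x - y);           /* Y += alpha * (X - Y)    */
    }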

This should work properly with varying values of Δt; the variation in Δt is not very important as long as α stays small, otherwise you will run into some rather weird Nyquist / aliasing issues. If you are working on a processor where multiplication is cheaper than division, or where fixed-point issues are important, precalculate ω = 1/τ and consider approximating the formula for α, as sketched below.
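
For example, a rough sketch of that approximation (again, my names; the clamp is just a guard against long gaps between samples):

    /* Small-dt/tau approximation: alpha ~= dt/tau = dt * omega, with
       omega = 1/tau precomputed so there is no per-sample division.   */
    double ema_step_approx(double y, double x, double dt, double omega)
    {
        double alpha = dt * omega;
        if (alpha > 1.0)          /* guard against a very long gap between samples */
            alpha = 1.0;
        return y + alpha * (x - y);
    }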

If you really want to know how to derive the formula

α = 1 - e^{-Δt/τ}

then consider its differential equation source:

Y + τ dY/dt = X

which, when X is a unit step function, has the solution Y = 1 - e^{-t/τ}. For small values of Δt, the derivative can be approximated by ΔY/Δt, yielding

Y + τ ΔY/Δt = X

ΔY/Δt = (X-Y)/τ

ΔY = (X-Y)(Δt/τ) = α(X-Y)

and the "extrapolation" of α = 1-e-Δt/τ comes from trying to match up the behavior with the unit step function case.