I'm reading data from a device which measures distance. My sample rate is high so that I can measure large changes in distance (i.e. velocity) but this means that, when the velocity is low, the device delivers a number of measurements which are identical (due to the granularity of the device). This results in a 'stepped' curve.

What I need to do is to smooth the curve in order to calculate the velocity. Following that I then need to calculate the acceleration.

How to best go about this?

(Sample rate up to 1000Hz, calculation rate of 10Hz would be ok. Using C# in VS2005)

+7  A: 

You need a smoothing filter, the simplest would be a "moving average": just calculate the average of the last n points.

The question here is how to determine n; can you tell us more about your application?

(There are other, more complicated filters; they vary in how well they preserve the input data. A good list is on Wikipedia.)

Edit: for a 10Hz output from the 1000Hz input, average the last 100 values.
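In C#, something along these lines would do it (just a rough sketch; BlockAverage and the array names are made up for illustration):

    // Rough sketch: average each block of 100 samples to turn the 1000Hz
    // stream into a 10Hz stream of smoothed values.
    static double[] BlockAverage(double[] samples, int blockSize)
    {
        int blocks = samples.Length / blockSize;
        double[] result = new double[blocks];
        for (int b = 0; b < blocks; b++)
        {
            double sum = 0.0;
            for (int i = 0; i < blockSize; i++)
                sum += samples[b * blockSize + i];
            result[b] = sum / blockSize;
        }
        return result;
    }

    // double[] smoothed = BlockAverage(rawDistances, 100);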

moogs
corrected. thanks!
moogs
Could you please explain why to use 100 values for 10Hz? Thanks.
Guido
@Guido - from my interpretation of what he said, he has 1000 values per second, but only really needs data 10 times a second. So, a simple way is to just treat 100 values as one (take the average).
moogs
I think the wording could be improved here. From your last comment, the idea seems to be: take 100 values, average them, take the next 100 values, average them, and so on, giving 10Hz averaged samples from the 1000Hz input.
workmad3
A: 

You could use a moving average to smooth out the data.

Lee
+13  A: 

The Wikipedia entry from moogs is a good starting point for smoothing the data, but it does not help you make a decision.

It all depends on your data, and the needed processing speed.

Moving average: will flatten the top values. If you are interested in the minimum and maximum values, don't use this. Also, I think a moving average will influence your measurement of the acceleration, since it flattens your data (a bit), so the acceleration will appear to be smaller. It all comes down to the needed accuracy.

Savitzky–Golay: a fast algorithm, as fast as the moving average, that preserves the heights of peaks. Somewhat harder to implement, and you need the correct coefficients. I would pick this one.

Kalman filters: if you know the noise distribution, this can give you good results (it is used in GPS navigation systems). Maybe somewhat harder to implement. I mention this because I have used them in the past, but they are probably not a good choice for a starter in this kind of stuff; a rough sketch follows anyway.
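For what it's worth, a minimal 1D constant-acceleration Kalman filter looks roughly like this in C# (a sketch only; the class name is made up, and the q/r noise values are placeholders you would have to tune for your device):

    // Sketch: Kalman filter with state [position, velocity, acceleration].
    class KalmanFilter1D
    {
        double[] x = new double[3];        // state estimate: pos, vel, acc
        double[,] P = new double[3, 3];    // estimate covariance
        readonly double r;                 // measurement noise variance (tune this)
        readonly double q;                 // process noise added per step (tune this)

        public KalmanFilter1D(double measurementNoise, double processNoise)
        {
            r = measurementNoise;
            q = processNoise;
            for (int i = 0; i < 3; i++) P[i, i] = 1.0;   // vague initial covariance
        }

        public void Update(double z, double dt)
        {
            // State transition for constant acceleration over dt.
            double[,] F = {
                { 1, dt, 0.5 * dt * dt },
                { 0, 1,  dt            },
                { 0, 0,  1             }
            };

            // Predict: x = F*x, P = F*P*F' + Q (Q approximated as q on the diagonal).
            double[] xp = new double[3];
            for (int i = 0; i < 3; i++)
                for (int k = 0; k < 3; k++)
                    xp[i] += F[i, k] * x[k];

            double[,] FP = new double[3, 3];
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                    for (int k = 0; k < 3; k++)
                        FP[i, j] += F[i, k] * P[k, j];

            double[,] Pp = new double[3, 3];
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                {
                    for (int k = 0; k < 3; k++)
                        Pp[i, j] += FP[i, k] * F[j, k];
                    if (i == j) Pp[i, j] += q;
                }

            // Measurement update: we only observe position, so H = [1 0 0].
            double s = Pp[0, 0] + r;
            double y = z - xp[0];
            double[] K = { Pp[0, 0] / s, Pp[1, 0] / s, Pp[2, 0] / s };

            for (int i = 0; i < 3; i++) x[i] = xp[i] + K[i] * y;
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 3; j++)
                    P[i, j] = Pp[i, j] - K[i] * Pp[0, j];
        }

        public double Velocity     { get { return x[1]; } }
        public double Acceleration { get { return x[2]; } }
    }

Call Update(distance, 0.001) for every 1000Hz sample and read Velocity and Acceleration off the state; how good the result is depends entirely on the noise values you feed it.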

The above will reduce noise on your signal.

The next thing to do is detect the start and end points of the "acceleration". You could do this by taking the derivative of the original signal. The points where the derivative crosses zero are probably the peaks in your signal, and might indicate the start and end of the acceleration.

You can then take the second derivative to get the minimum and maximum acceleration itself.
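Once the data is smoothed, a simple central difference is enough for those derivatives; a rough C# sketch (sampleInterval is assumed to be the spacing of the smoothed samples, e.g. 0.1 s at 10Hz):

    // Rough sketch: central differences give velocity from smoothed distance;
    // applying the same function again gives acceleration.
    static double[] Derivative(double[] values, double sampleInterval)
    {
        double[] d = new double[values.Length];
        for (int i = 1; i < values.Length - 1; i++)
            d[i] = (values[i + 1] - values[i - 1]) / (2.0 * sampleInterval);
        if (values.Length > 1)
        {
            // one-sided differences at the edges
            d[0] = (values[1] - values[0]) / sampleInterval;
            d[values.Length - 1] = (values[values.Length - 1] - values[values.Length - 2]) / sampleInterval;
        }
        return d;
    }

    // double[] velocity     = Derivative(smoothedDistance, 0.1);
    // double[] acceleration = Derivative(velocity, 0.1);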

GvS
If you model things the right way, the Kalman filter will give you velocity and acceleration directly.
Peter K.
a good code example for Savitzky-Golay can be found on http://www.procoders.net/?p=11
Wouter
SG is trivial to implement once you have the coefficients. Just use R's sgolay to find them.
Paul
A: 

In addition to the above articles, have a look at Catmull-Rom Splines.
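If you go that route, evaluating one Catmull-Rom segment is just a cubic in t; a rough C# sketch (p0 and p3 are the neighbouring samples, and t runs from 0 to 1 between p1 and p2):

    // Sketch: Catmull-Rom interpolation between p1 and p2.
    static double CatmullRom(double p0, double p1, double p2, double p3, double t)
    {
        double t2 = t * t;
        double t3 = t2 * t;
        return 0.5 * ((2.0 * p1)
                    + (-p0 + p2) * t
                    + (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t2
                    + (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t3);
    }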

James Fassett
Catmull-Roms are great curves for regularly spaced data but tend to highlight rather than smooth noise. I'd be more tempted to use cubic or quadratic splines, even though they don't pass through all the data points.
Shane MacLaughlin
A: 

In addition to GvS's excellent answer above, you could also consider smoothing / reducing the stepping effect of your averaged results using some general curve fitting such as cubic or quadratic splines.

Shane MacLaughlin
+1  A: 

Moving averages are generally terrible, but they work well for white noise. Both moving averages and Savitzky-Golay boil down to a correlation, and are therefore very fast and can be implemented in real time. If you need higher-order information like first and second derivatives, SG is a good choice. The magic of SG lies in the constant correlation coefficients needed for the filter: once you have decided the length and the degree of polynomial to fit locally, the coefficients need only be found once. You can compute them using R (sgolay) or Matlab.

You can also estimate a noisy signal's derivatives via the Savitzky-Golay best-fit polynomials; these are sometimes called Savitzky-Golay derivatives, and they typically give a good estimate of the first derivative.

Kalman filtering can be very effective, but it's heavier computationally - it's hard to beat a short convolution for speed!
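To illustrate the "short convolution" point, here is a rough C# sketch using the standard 5-point quadratic smoothing coefficients (-3, 12, 17, 12, -3)/35; for other window lengths you would compute the coefficient set with R's sgolay or Matlab:

    // Sketch: Savitzky-Golay smoothing is just a dot product per output sample.
    static readonly double[] SgCoeffs = { -3.0 / 35, 12.0 / 35, 17.0 / 35, 12.0 / 35, -3.0 / 35 };

    static double[] SavitzkyGolaySmooth(double[] samples)
    {
        int half = SgCoeffs.Length / 2;
        double[] result = new double[samples.Length];
        for (int i = half; i < samples.Length - half; i++)
        {
            double acc = 0.0;
            for (int k = 0; k < SgCoeffs.Length; k++)
                acc += SgCoeffs[k] * samples[i - half + k];
            result[i] = acc;
        }
        return result;   // the first and last 'half' samples are left as zero here
    }

A derivative filter works the same way, just with a different coefficient set.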

Paul
CenterSpace Software
