My goal is to recognize simple gestures from accelerometers mounted on a Sun SPOT. A gesture could be as simple as rotating the device or moving it through several different motions. The device currently only has accelerometers, but we are considering adding gyroscopes if that would make recognition easier or more accurate.

Does anyone have recommendations for how to do this? Any available libraries in Java? Sample projects you recommend I check out? Papers you recommend?

The Sun SPOT is a Java platform for quickly prototyping embedded systems. It is programmed in Java and can relay commands back to a base station attached to a computer. If you need me to explain the hardware in more detail, leave a comment.

+9  A: 

The accelerometers will be registering a constant acceleration due to gravity, plus any acceleration the device is subjected to by the user, plus noise.

You will need to low-pass filter the samples to get rid of as much irrelevant noise as you can. The worst of the noise will generally be at higher frequencies than any possible human-induced acceleration.
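
A minimal sketch of such a filter, as a single-pole exponential moving average (the smoothing factor alpha is an assumption you would tune against your sample rate; use one filter instance per axis):

    /** Simple exponential-moving-average low-pass filter for one axis. */
    public class LowPassFilter {
        private final double alpha; // 0 < alpha < 1; smaller = heavier smoothing
        private double state;
        private boolean initialized = false;

        public LowPassFilter(double alpha) {
            this.alpha = alpha;
        }

        /** Feed one raw sample, get the filtered value back. */
        public double filter(double sample) {
            if (!initialized) {
                state = sample;
                initialized = true;
            } else {
                state = alpha * sample + (1 - alpha) * state;
            }
            return state;
        }
    }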

Realise that when the device is not being accelerated by the user, the only force acting on it is gravity, and therefore you can deduce its attitude in space. Moreover, when the total acceleration varies greatly from 1g, it must be because the user is accelerating the device; by subtracting the last known estimate of gravity, you can roughly estimate in what direction and by how much the user is accelerating the device, and so obtain data you can begin to match against a list of known gestures.
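
A sketch of that idea, assuming filtered samples expressed in g units (the 0.1g rest tolerance is an illustrative assumption, not a recommended value):

    /** Tracks a gravity estimate and extracts user-induced acceleration. */
    public class GravityTracker {
        private static final double REST_TOLERANCE = 0.1; // in g; assumed threshold
        private double gx = 0, gy = 0, gz = 1.0; // initial guess: gravity along +z

        /** Returns the user-acceleration vector for this sample, updating the
         *  gravity estimate whenever the device looks unaccelerated. */
        public double[] userAcceleration(double ax, double ay, double az) {
            double magnitude = Math.sqrt(ax * ax + ay * ay + az * az);
            if (Math.abs(magnitude - 1.0) < REST_TOLERANCE) {
                // Total acceleration is ~1g: assume the device is at rest
                // and what we measure is gravity. Reset the estimate here.
                gx = ax; gy = ay; gz = az;
            }
            // Whatever remains after subtracting gravity is (roughly) the user.
            return new double[] { ax - gx, ay - gy, az - gz };
        }
    }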

With a single three-axis accelerometer you can detect the current pitch and roll, and also acceleration of the device in a straight line. Integrating acceleration minus gravity will give you an estimate of current velocity, but the estimate will rapidly drift away from reality due to noise. You will have to make assumptions about the user's behaviour before / between / during gestures, and guide them through your UI, so that there are points where the device is not being accelerated and you can reset your estimates and reliably estimate the direction of gravity. Integrating again to find position is unlikely to provide usable results over any useful length of time at all.
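
For example, pitch and roll can be read straight off the gravity vector (axis conventions vary between boards; this sketch assumes x/y lie in the board plane and z points through it). Note that yaw cannot be recovered this way, because rotating about the gravity vector does not change the measured gravity:

    /** Pitch and roll from a (filtered) gravity vector, in radians. */
    public final class Attitude {
        public static double pitch(double gx, double gy, double gz) {
            return Math.atan2(-gx, Math.sqrt(gy * gy + gz * gz));
        }

        public static double roll(double gx, double gy, double gz) {
            return Math.atan2(gy, gz);
        }
    }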

If you have two three-axis accelerometers some distance apart, or one and some gyros, you can also detect rotation of the device (by comparing the acceleration vectors, or from the gyros directly); integrating angular rate over a couple of seconds will give you an estimate of current yaw relative to when you started integrating, but again this will drift out of true rapidly.
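
A minimal sketch of that integration, which also makes the drift obvious: any constant bias b in the gyro reading accumulates as b*t of yaw error after t seconds.

    /** Naive yaw estimate by integrating a z-axis gyro; drifts with any bias. */
    public class YawIntegrator {
        private double yaw = 0.0; // radians, relative to where integration began

        /** rateZ in rad/s, dt in seconds since the previous sample. */
        public double update(double rateZ, double dt) {
            yaw += rateZ * dt;
            return yaw;
        }
    }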

moonshadow
Thanks a lot, this was really helpful.
smaclell
Do they really register a constant acceleration due to gravity? I mean logically they should, but it just seems counter-intuitive :-)
Orion Edwards
Yes, they do. Think of them as masses on springs, with the "acceleration" being reported actually the amount of stretching / compression; then it's quite intuitive.
moonshadow
And a simple calibration routine: sit the device on a table and measure, then invert it and measure again; do the same after rotating 90 degrees about x, y and z, and you're halfway calibrated.
Tim Williscroft
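
A sketch of what those measurements give you per axis, assuming one reading taken with the axis pointing straight up (+1g) and one pointing straight down (-1g); the class and names are illustrative:

    /** Per-axis bias and scale from an up (+1g) and a down (-1g) reading. */
    public final class AxisCalibration {
        public final double bias;  // sensor output at zero acceleration
        public final double scale; // sensor units per g

        public AxisCalibration(double readingUp, double readingDown) {
            bias  = (readingUp + readingDown) / 2.0;
            scale = (readingUp - readingDown) / 2.0;
        }

        /** Convert a raw reading on this axis to g. */
        public double toG(double raw) {
            return (raw - bias) / scale;
        }
    }
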
+1  A: 

Adding to moonshadow's point about having to reset your baseline for gravity and rotation...

Unless the device is expected to have stable moments of rest (where the only force acting on it is gravity) to reset its measurement baseline, your system will eventually develop the equivalent of vertigo.

Toybuilder
Due to sensor drift? Or just stale old values?
smaclell
Basically, you want the system to trim out sensor drift over temperature and time. But that trim-out can't happen if you don't have a quiet time.
Toybuilder
+2  A: 

What hasn't been mentioned yet is the actual gesture recognition. This is the hard part. After you have cleaned up your data (low-pass filtered, normalized, etc.) you still have most of the work to do.

Have a look at Hidden Markov Models. This seems to be the most popular approach, but using them isn't trivial. There is usually a preprocessing step: first compute an STFT over the signal and cluster the resulting feature vectors into a dictionary (vector quantization), then feed the resulting symbol sequence into an HMM. Have a look at jahmm on Google Code for a Java library.
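
The clustering step turns each feature vector into a discrete symbol that a discrete-observation HMM library such as jahmm can then be trained on. A minimal sketch of that quantization step (the codebook here is assumed to come from, say, k-means over your training data):

    /** Vector quantizer: maps each feature vector to the index of its nearest
     *  codebook centroid, producing the symbol sequence an HMM consumes. */
    public class VectorQuantizer {
        private final double[][] codebook; // centroids, e.g. learned via k-means

        public VectorQuantizer(double[][] codebook) {
            this.codebook = codebook;
        }

        public int quantize(double[] feature) {
            int best = 0;
            double bestDist = Double.MAX_VALUE;
            for (int i = 0; i < codebook.length; i++) {
                double d = 0;
                for (int j = 0; j < feature.length; j++) {
                    double diff = feature[j] - codebook[i][j];
                    d += diff * diff;
                }
                if (d < bestDist) { bestDist = d; best = i; }
            }
            return best;
        }
    }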

Zaph0d42
Thanks for taking a stab. The project was for school and went fairly well. For the actual gesture recognition we ended up using a variant of the $1 Recognizer that did not care about rotation and had an extra dimension. It is a template-based method that does not perform any real training on the data at all. To simplify things we did not do any segmentation of gestures, and instead used a "switch" to indicate when a gesture started/stopped. Our method had very good accuracy and performance: with around 5 templates per gesture it achieved over 90% accuracy in the field, with sub-millisecond compute time.
smaclell
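
Not their actual code, but a rough sketch of a template matcher in that spirit, assuming gestures have already been resampled to a fixed number of 3D points and normalized for scale:

    /** Template matcher in the spirit of the $1 Recognizer, extended to 3D.
     *  Assumes every gesture path has the same number of points. */
    public class TemplateMatcher {
        /** Average point-to-point distance between two equal-length 3D paths. */
        static double pathDistance(double[][] a, double[][] b) {
            double sum = 0;
            for (int i = 0; i < a.length; i++) {
                double dx = a[i][0] - b[i][0];
                double dy = a[i][1] - b[i][1];
                double dz = a[i][2] - b[i][2];
                sum += Math.sqrt(dx * dx + dy * dy + dz * dz);
            }
            return sum / a.length;
        }

        /** Returns the index of the best-matching template. */
        static int recognize(double[][] candidate, double[][][] templates) {
            int best = 0;
            double bestScore = Double.MAX_VALUE;
            for (int i = 0; i < templates.length; i++) {
                double d = pathDistance(candidate, templates[i]);
                if (d < bestScore) { bestScore = d; best = i; }
            }
            return best;
        }
    }
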
A: 

Thanks a lot, I learned a lot.

asa