views: 1004

answers: 3

Using my iPhone, I would like to measure human flatulence in order to measure, quantify, and provide a statistical report based on various properties of the overall event quality. Outrageous, maybe. Fun, definitely. If I'm going to "release" an iPhone app, I want to do it in style. That's right, I want to measure farts/stinkers/toots.

So that brings me to my question:

In order to provide an extremely accurate analysis, at a very minimum, I would need to be able to analyze a propagating wave packet: specifically, to measure the envelope of a burst, the distance between adjacent peaks, momentum, and velocity.

[Image: a propagating wave packet]
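To make that a bit more concrete, the rough kind of calculation I picture running over the raw samples looks something like this (plain C, purely illustrative; mean_peak_spacing is a name I made up, not something from an existing library):

    /* Purely illustrative; mean_peak_spacing is a made-up name, not a library call.
       Computes a moving-RMS envelope of the burst and returns the average time
       (in seconds) between adjacent envelope peaks. 'window' is the number of
       samples per RMS frame, e.g. ~10 ms of audio. */
    #include <math.h>
    #include <stdlib.h>

    double mean_peak_spacing(const short *samples, int n, int window, double sample_rate)
    {
        int frames = n / window;
        double *env = malloc(frames * sizeof(double));
        if (!env || frames < 3) { free(env); return 0.0; }

        /* 1. Envelope: RMS level of each window. */
        for (int f = 0; f < frames; f++) {
            double acc = 0.0;
            for (int i = 0; i < window; i++) {
                double s = samples[f * window + i];
                acc += s * s;
            }
            env[f] = sqrt(acc / window);
        }

        /* 2. Average spacing between adjacent local maxima of the envelope. */
        int last_peak = -1, gaps = 0;
        double spacing_sum = 0.0;
        for (int f = 1; f + 1 < frames; f++) {
            if (env[f] > env[f - 1] && env[f] > env[f + 1]) {
                if (last_peak >= 0) {
                    spacing_sum += (double)(f - last_peak) * window / sample_rate;
                    gaps++;
                }
                last_peak = f;
            }
        }
        free(env);
        return gaps ? spacing_sum / gaps : 0.0;
    }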

As I am no expert in sound analysis, I was wondering if there are development libraries available for sound analysis that would provide a robust set of tools meeting the requirements I mentioned above?

For those scope-creepers out there, your ideas are welcome; however, please leave them as comments only, as I am seriously looking for an answer to my question.

Note I am aware of other flatulence-measuring applications, which measure only the volume and length of the event, but none that would provide the level of quality I am looking for in this ground/wind-breaking application.

Note 2 I'm absolutely, 100%, serious.

+6  A: 

I think there is some merit in this idea. There are already apps that measure wind speed using the iPhone microphone, and they are apparently quite accurate.

You could also incorporate face-recognition and human expression-analysis capabilities into this program in order to add an environmental-impact assessment to your overall metric.

In a future release, you could measure the speed and acceleration with which other iPhone users are departing from the event instance, in order to determine an effective blast radius and strength.

Edit:

Since this is a project I think many people could get behind, I did a little more digging around. There is source code available for an iPhone application called aurioTouch, which seems to have most of what you want.

> The code uses the AU Remote IO audio unit to get the audio input and copy it to the output. The UI presents:
> - Oscilloscope view of the audio (time domain / frequency domain)
> - Scrolling sonogram of the audio
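If you want to wire up the same plumbing yourself, a minimal, untested sketch of configuring the Remote IO unit for microphone capture looks roughly like this (error checking and the audio session setup the iPhone also requires are omitted):

    // Minimal, untested sketch of the Remote IO plumbing aurioTouch is built on:
    // capture microphone input and hand each buffer of samples to your analysis.
    #include <AudioToolbox/AudioToolbox.h>

    static AudioUnit gRioUnit;

    // Core Audio calls this whenever a buffer of microphone samples is ready.
    static OSStatus MicCallback(void *inRefCon,
                                AudioUnitRenderActionFlags *ioActionFlags,
                                const AudioTimeStamp *inTimeStamp,
                                UInt32 inBusNumber,
                                UInt32 inNumberFrames,
                                AudioBufferList *ioData)
    {
        SInt16 samples[4096];
        if (inNumberFrames > 4096) return -1;   // keep the sketch simple

        // Pull the captured frames out of the input element (bus 1).
        AudioBufferList bufs;
        bufs.mNumberBuffers = 1;
        bufs.mBuffers[0].mNumberChannels = 1;
        bufs.mBuffers[0].mDataByteSize   = inNumberFrames * sizeof(SInt16);
        bufs.mBuffers[0].mData           = samples;
        OSStatus err = AudioUnitRender(gRioUnit, ioActionFlags, inTimeStamp,
                                       1, inNumberFrames, &bufs);
        if (err == noErr) {
            // 'samples' now holds inNumberFrames mono 16-bit values:
            // run your envelope / peak / FFT analysis here.
        }
        return err;
    }

    void StartMicCapture(void)
    {
        // Find and instantiate the Remote IO audio unit.
        AudioComponentDescription desc = {0};
        desc.componentType         = kAudioUnitType_Output;
        desc.componentSubType      = kAudioUnitSubType_RemoteIO;
        desc.componentManufacturer = kAudioUnitManufacturer_Apple;
        AudioComponent comp = AudioComponentFindNext(NULL, &desc);
        AudioComponentInstanceNew(comp, &gRioUnit);

        // Enable input on bus 1 (the microphone side of the unit).
        UInt32 one = 1;
        AudioUnitSetProperty(gRioUnit, kAudioOutputUnitProperty_EnableIO,
                             kAudioUnitScope_Input, 1, &one, sizeof(one));

        // Ask for mono 16-bit PCM at 44.1 kHz from the input element.
        AudioStreamBasicDescription fmt = {0};
        fmt.mSampleRate       = 44100.0;
        fmt.mFormatID         = kAudioFormatLinearPCM;
        fmt.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
        fmt.mChannelsPerFrame = 1;
        fmt.mBitsPerChannel   = 16;
        fmt.mBytesPerFrame    = 2;
        fmt.mFramesPerPacket  = 1;
        fmt.mBytesPerPacket   = 2;
        AudioUnitSetProperty(gRioUnit, kAudioUnitProperty_StreamFormat,
                             kAudioUnitScope_Output, 1, &fmt, sizeof(fmt));

        // Register the callback that fires when microphone data arrives.
        AURenderCallbackStruct cb = { MicCallback, NULL };
        AudioUnitSetProperty(gRioUnit, kAudioOutputUnitProperty_SetInputCallback,
                             kAudioUnitScope_Global, 1, &cb, sizeof(cb));

        AudioUnitInitialize(gRioUnit);
        AudioOutputUnitStart(gRioUnit);
    }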

There is some additional info on StackOverflow related to getting this working here:

http://stackoverflow.com/questions/312089/auriotouch-sample-apps-audio-playback-thru-not-working
http://stackoverflow.com/questions/1447059/auriotouch-fft-for-an-instrument-tuner

Good luck, and may the wind be at your back!

RedFilter
*blink* I just had a vision of the movie "The Dark Knight"... made reality.
Trevoke
@Trevoke: Really? I was more reminded of The Spleen from "Mystery Men".
David Thornley
"In a future release" – that's about the app, right?
Perspx
+1 for the environmental-impact assessment through face recognition.
Pekka
See updated answer, added some info re aurioTouch
RedFilter
"a project people can get behind"...how utterly droll ;)
AJ
+9  A: 

Take a look at FMOD and OpenAL:

  • fmod.org
  • en.wikipedia.org/wiki/FMOD
  • connect.creativelabs.com/openal/default.aspx
  • en.wikipedia.org/wiki/OpenAL

Being written in C/C++, both of these libraries can easily be linked against standard iPhone code and compiled for the ARM architecture of the iPhone.

They are both capable of extracting the information you require from the audio stream of the iPhone's microphone via the APIs provided by Apple.
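As a rough sketch, pulling raw samples through OpenAL's capture API looks something like the following; whether the iPhone's OpenAL implementation actually exposes capture is something you would need to verify, and if it does not you would feed the same analysis from Core Audio input instead:

    /* Rough sketch of OpenAL's capture API. Whether capture is available on
       the iPhone's OpenAL implementation needs to be verified. */
    #include <OpenAL/al.h>    /* <AL/al.h> on non-Apple platforms */
    #include <OpenAL/alc.h>

    #define SAMPLE_RATE  22050
    #define RING_SAMPLES 4096

    /* Fills 'out' with up to max_samples mono 16-bit samples from the default
       capture device; returns how many were captured, or -1 on failure. */
    int capture_samples(short *out, int max_samples)
    {
        ALCdevice *dev = alcCaptureOpenDevice(NULL, SAMPLE_RATE,
                                              AL_FORMAT_MONO16, RING_SAMPLES);
        if (!dev) return -1;

        alcCaptureStart(dev);

        int total = 0;
        while (total < max_samples) {
            /* How many samples are waiting in the capture ring buffer? */
            ALCint avail = 0;
            alcGetIntegerv(dev, ALC_CAPTURE_SAMPLES, 1, &avail);
            if (avail > max_samples - total) avail = max_samples - total;
            if (avail > 0) {
                alcCaptureSamples(dev, out + total, avail);
                total += avail;
            }
        }

        alcCaptureStop(dev);
        alcCaptureCloseDevice(dev);
        return total;
    }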

pyrotechnick
Noted. I'll definitely check that out.
George
A: 

You could implement the FFT algorithm to find the "pitch" of the emission.
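For example, a naive DFT that picks out the dominant frequency of a buffer of samples might look like this (a real app would use a proper FFT, e.g. the vDSP routines in Apple's Accelerate framework, for speed):

    /* Toy example: find the dominant frequency of a buffer of mono samples
       with a naive DFT. A real app would use a proper FFT for speed. */
    #include <math.h>

    double dominant_frequency(const short *samples, int n, double sample_rate)
    {
        double best_mag = 0.0;
        int best_bin = 0;

        /* Only bins below n/2 are meaningful for real-valued input (Nyquist). */
        for (int k = 1; k < n / 2; k++) {
            double re = 0.0, im = 0.0;
            for (int t = 0; t < n; t++) {
                double angle = 2.0 * M_PI * (double)k * t / n;
                re += samples[t] * cos(angle);
                im -= samples[t] * sin(angle);
            }
            double mag = re * re + im * im;
            if (mag > best_mag) {
                best_mag = mag;
                best_bin = k;
            }
        }
        /* Convert the winning bin index back to Hz. */
        return best_bin * sample_rate / n;
    }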

PeanutPower