I'm wondering what the recommended audio library to use would be.

I'm attempting to make a small program that will aid in tuning instruments (piano, guitar, etc.). I've read about the ALSA and Marsyas audio libraries.

I'm thinking the idea is to sample data from the microphone, do analysis on chunks of 5-10 ms (from what I've read), and then perform an FFT to figure out which frequency contains the largest peak.
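
Something like this rough sketch is what I have in mind for the analysis step (assuming the samples are already captured into a buffer; FFTW is used here purely for illustration, I haven't settled on anything):

  // Rough sketch only: FFT one chunk of samples and report the frequency of
  // the strongest bin. Microphone capture and windowing are left out.
  #include <cmath>
  #include <vector>
  #include <fftw3.h>

  double peak_frequency(const std::vector<double>& samples, double sample_rate)
  {
    int n = samples.size();
    std::vector<double> in(samples);  // FFTW wants a non-const buffer
    fftw_complex* spectrum =
      (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * (n / 2 + 1));

    fftw_plan plan = fftw_plan_dft_r2c_1d(n, &in[0], spectrum, FFTW_ESTIMATE);
    fftw_execute(plan);
    fftw_destroy_plan(plan);

    // Pick the bin with the largest magnitude and convert it to Hz.
    int peak_bin = 1;
    double peak_mag = 0.0;
    for (int k = 1; k <= n / 2; ++k) {
      double mag = std::sqrt(spectrum[k][0] * spectrum[k][0] +
                             spectrum[k][1] * spectrum[k][1]);
      if (mag > peak_mag) { peak_mag = mag; peak_bin = k; }
    }
    fftw_free(spectrum);
    return peak_bin * sample_rate / n;
  }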

+2  A: 

ALSA is sort of the default standard for Linux now, by virtue of its drivers being included in the kernel and OSS being deprecated. However, there are alternatives to the ALSA userspace API, like JACK, which seems to be aimed at low-latency, professional-type applications. Its API seems nicer, although I've not used it; my brief exposure to the ALSA API would make me think that almost anything would be better.
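
For comparison, a minimal capture loop in plain ALSA looks roughly like this (a sketch only, with most error handling omitted):

  #include <alsa/asoundlib.h>
  #include <vector>

  int main()
  {
    snd_pcm_t* pcm = NULL;
    if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_CAPTURE, 0) < 0)
      return 1;

    // Mono, 16-bit, 44.1 kHz, interleaved, ~0.5 s of internal latency.
    snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                       SND_PCM_ACCESS_RW_INTERLEAVED, 1, 44100, 1, 500000);

    std::vector<short> chunk(441);  // roughly 10 ms of audio at 44.1 kHz
    for (int i = 0; i < 1000; ++i) {
      snd_pcm_sframes_t got = snd_pcm_readi(pcm, &chunk[0], chunk.size());
      if (got < 0)
        snd_pcm_recover(pcm, (int)got, 0);  // try to recover from an overrun
      // ... hand the chunk to the FFT / pitch-detection code here ...
    }

    snd_pcm_close(pcm);
    return 0;
  }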

Steve Baker
+3  A: 

This guide should help. Don't use ALSA for your application. Use a higher level API. If you decide you'd like to use JACK, http://jackaudio.org/applications has three instrument tuners you can use as example code.
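
To give a rough idea of what a JACK client involves, the skeleton is something like this (a sketch only; the tuners linked above are the real examples to study):

  #include <jack/jack.h>
  #include <unistd.h>

  jack_port_t* input_port;

  // JACK calls this once per block of audio; do the pitch analysis here.
  int process(jack_nframes_t nframes, void* arg)
  {
    jack_default_audio_sample_t* in = (jack_default_audio_sample_t*)
      jack_port_get_buffer(input_port, nframes);
    // ... feed the nframes float samples in `in` to the tuner's analysis ...
    return 0;
  }

  int main()
  {
    jack_client_t* client = jack_client_open("tuner", JackNullOption, NULL);
    if (!client)
      return 1;

    jack_set_process_callback(client, process, NULL);
    input_port = jack_port_register(client, "input", JACK_DEFAULT_AUDIO_TYPE,
                                    JackPortIsInput, 0);
    jack_activate(client);

    // All the real work happens in process(); just keep the client alive.
    while (1)
      sleep(1);
  }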

joeforker
A: 

Audacity includes a frequency plot feature and has built-in FFT filters.

rkb
+2  A: 

Marsyas would be a great choice for doing this; it's built for exactly this kind of task.

For tuning an instrument, what you need is an algorithm that estimates the fundamental frequency (F0) of a sound. There are a number of algorithms to do this; one of the newest and best is the YIN algorithm, developed by Alain de Cheveigné. I recently added the YIN algorithm to Marsyas, and using it is dead simple.
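
To give a feel for what YIN does under the hood, here is a stripped-down sketch of the core idea, not the actual Marsyas/aubio implementation (which adds parabolic interpolation, a smarter minimum search, and a much faster difference computation):

  #include <vector>

  double yin_f0(const std::vector<float>& x, double sample_rate,
                double threshold = 0.15)
  {
    int n = x.size() / 2;  // search lags up to half the analysis window
    std::vector<double> d(n, 0.0), dnorm(n, 0.0);

    // Squared-difference function d(tau) = sum_j (x[j] - x[j+tau])^2
    for (int tau = 1; tau < n; ++tau)
      for (int j = 0; j < n; ++j) {
        double diff = x[j] - x[j + tau];
        d[tau] += diff * diff;
      }

    // Cumulative-mean-normalized difference d'(tau)
    dnorm[0] = 1.0;
    double running_sum = 0.0;
    for (int tau = 1; tau < n; ++tau) {
      running_sum += d[tau];
      dnorm[tau] = (running_sum > 0.0) ? d[tau] * tau / running_sum : 1.0;
    }

    // The first lag that dips below the threshold is taken as the period.
    for (int tau = 2; tau < n; ++tau)
      if (dnorm[tau] < threshold)
        return sample_rate / tau;  // F0 in Hz

    return 0.0;  // no clear pitch found
  }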

Here's the basic code that you would use in Marsyas:

  // Minimal standalone program: read an audio file and print one F0
  // estimate per processed chunk (include paths depend on your install).
  #include "MarSystemManager.h"
  #include <iostream>

  using namespace std;
  using namespace Marsyas;

  int main(int argc, char** argv)
  {
    mrs_string inAudioFileName = argv[1];
    MarSystemManager mng;

    // A Series network to contain everything
    MarSystem* net = mng.create("Series", "series");

    // Process the data from the SoundFileSource with AubioYin
    net->addMarSystem(mng.create("SoundFileSource", "src"));
    net->addMarSystem(mng.create("ShiftInput", "si"));
    net->addMarSystem(mng.create("AubioYin", "yin"));

    net->updctrl("SoundFileSource/src/mrs_string/filename", inAudioFileName);

    // Tick the network until the file is exhausted, printing one F0
    // estimate per chunk
    while (net->getctrl("SoundFileSource/src/mrs_bool/notEmpty")->to<mrs_bool>()) {
      net->tick();
      realvec r = net->getctrl("mrs_realvec/processedData")->to<mrs_realvec>();
      cout << r(0,0) << endl;
    }

    delete net;
    return 0;
  }

This code first creates a Series object to which we will add components. In a Series, each component receives the output of the previous MarSystem in turn. We then add a SoundFileSource, into which you can feed a .wav or .mp3 file. Next comes the ShiftInput object, which outputs overlapping chunks of audio; these are fed into the AubioYin object, which estimates the fundamental frequency of each chunk.

We then tell the SoundFileSource that we want to read the file inAudioFileName.

The while statement then loops until the SoundFileSource runs out of data. Inside the while loop, we take the data that the network has processed and output the (0,0) element, which is the fundamental frequency estimate.

This is even easier when you use the Python bindings for Marsyas.

sness
A: 

CLAM (http://clam-project.org/) is a full-fledged software framework for research and application development in the audio and music domain. It offers a conceptual model as well as tools for the analysis, synthesis, and processing of audio signals.

They have a great API, a nice GUI, and a few finished apps where you can see how everything fits together.

dnigmig