I have an encoder which encodes a speech file (.wav) that I give as input. Now what I want to do is write a program so that I can speak into the mic and the encoder can process it at the same time. Basically I want to record and process a speech signal in real time (a small delay can be tolerated). To do this I was thinking of making a loop: first record the speech for, say, 1 second into a file, speech.in; then copy this file to temp and pass temp to the encoder. In the meantime the recorder should overwrite speech.in with the next second of data, and the loop continues...

The problem I am having is that I can't write a program to control the recorder to do what I want. Is there a recorder which can be easily controlled, or any code to do it?

This is the only way I could think of to implement this. Any other (hopefully better) solution is also welcome.

*edit: I am working on Ubuntu 10.04, but I have used the same program on Windows as well, so suggestions for either platform are welcome.
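As a starting point, one pipe-based alternative to the file-swapping loop could look like the sketch below: instead of writing speech.in and copying it, read raw samples straight from the recorder's stdout in fixed-size chunks and hand each chunk to the encoder as it arrives. (This is only a sketch; `arecord` is ALSA's command-line recorder that ships with Ubuntu, the `encode_chunk` callback is a hypothetical stand-in for the actual encoder, and the sample rate/format are assumptions.)

```python
import subprocess

def stream_chunks(stream, chunk_bytes):
    """Yield fixed-size chunks from a byte stream until EOF."""
    while True:
        chunk = stream.read(chunk_bytes)
        if not chunk:
            break
        yield chunk

def record_and_encode(record_cmd, encode_chunk, seconds=1, rate=16000, sample_bytes=2):
    """Spawn a recorder process and feed each `seconds`-long chunk of raw
    audio from its stdout to `encode_chunk`, with no intermediate files."""
    chunk_bytes = rate * sample_bytes * seconds
    rec = subprocess.Popen(record_cmd, stdout=subprocess.PIPE)
    try:
        for chunk in stream_chunks(rec.stdout, chunk_bytes):
            encode_chunk(chunk)  # called roughly once per second of audio
    finally:
        rec.stdout.close()
        rec.wait()

# Hypothetical invocation on Ubuntu (flags assumed, not tested here):
# record_and_encode(["arecord", "-t", "raw", "-f", "S16_LE", "-r", "16000"],
#                   my_encoder_callback)
```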

A: 

Sounds like this would be best served by threading.

Here is an MSDN link

Darknight
If the OS in question supports it, and if the hardware doesn't have special provisions for exactly this task...
dmckee
I am working on Ubuntu 10.04
Mancunia
Threads may help to do the recording and the encoding at the same time, but my main problem is still how to pass a file that is still being written by the recorder to the encoder.
Mancunia
Why do you need to store the file at all? Can you not stream directly in memory and thus avoid I/O bottlenecks? Why the downvote?
Darknight
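The in-memory streaming idea from the comments above can be sketched with two threads and a queue: one thread records chunks into a bounded queue while another encodes them, so no file ever hits the disk. (This is only an illustration of the control flow; the `recorder` here consumes an iterable of chunks, whereas a real one would read from the sound card.)

```python
import queue
import threading

SENTINEL = None  # signals end of stream

def recorder(chunks, q):
    """Producer: push each captured chunk onto the queue."""
    for chunk in chunks:
        q.put(chunk)
    q.put(SENTINEL)

def encoder(q, encode_chunk):
    """Consumer: encode chunks as they arrive, entirely in memory."""
    while True:
        chunk = q.get()
        if chunk is SENTINEL:
            break
        encode_chunk(chunk)

def run_pipeline(chunks, encode_chunk):
    # Bounded queue, so the recorder cannot race arbitrarily far
    # ahead of a slow encoder.
    q = queue.Queue(maxsize=8)
    t = threading.Thread(target=recorder, args=(chunks, q))
    t.start()
    encoder(q, encode_chunk)
    t.join()
```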
+1  A: 

Your proposed way is not the way to go. At least, this is not how it's done on Windows or Mac. (I don't know how Linux-flavoured machines do it, but I'm guessing the methodology is the same.)

You'll have to open the audio device and allocate a set of (say, 4) internal memory buffers. 100 ms of sound per buffer would suffice, but you'll have to experiment with how small you can make them: the smaller, the lower the latency, but the greater the chance of audio glitches. You attach these buffers to the audio device and ask for a callback whenever one of them is filled. When you get the first callback, make sure you encode that buffer quickly enough, before the audio device cycles back to it and overwrites it with new data.
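The buffer scheme described above can be shown as a minimal, non-audio simulation of the control flow. (A real program would use an audio API such as ALSA or PortAudio for the device side; here the "device" is just an iterable of fake 100 ms buffers, and the rate and buffer size are assumptions matching the numbers in the answer.)

```python
RATE = 16000           # samples/s (assumed)
BUFFER_SAMPLES = 1600  # ~100 ms at 16 kHz, as suggested above
NUM_BUFFERS = 4        # small pool the device cycles through

def run_capture(device_reads, on_buffer_filled):
    """Cycle through a fixed pool of buffers, firing the callback as each fills.

    The callback must return before its slot comes around again, or the
    device would overwrite not-yet-encoded data (an audio glitch)."""
    buffers = [bytearray(BUFFER_SAMPLES) for _ in range(NUM_BUFFERS)]
    for i, data in enumerate(device_reads):
        slot = i % NUM_BUFFERS        # this slot is reused every 4th read
        buffers[slot][:] = data
        on_buffer_filled(slot, bytes(buffers[slot]))
```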

You could simultaneously output the encoded sound to the audio device again. The latency would be similar to the length of one of the buffers.
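As a back-of-the-envelope check of that latency claim (values assumed, matching the 100 ms buffers above):

```python
RATE = 16000           # samples/s (assumed)
BUFFER_SAMPLES = 1600  # one ~100 ms buffer

# Roughly one buffer of delay between capturing a sample and hearing
# it played back encoded.
latency_s = BUFFER_SAMPLES / RATE  # 0.1 s
```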

Toad
Well, I could try to do it on Windows, but I don't understand what you mean by allocating buffers. Isn't there a fixed default buffer in the audio device? Is there any command to change it?
Mancunia
I'm not saying you should use Windows; I was just explaining how it works on the platform that is familiar to me. I don't know Linux platforms well enough.
Toad