I'm starting to create a proof of concept for an idea I have, and at this point, I need some guidance as to how I should begin.
I need to sample the microphone input, and process that signal in real-time (think Auto-Tune, but working live), as opposed to "recording" for a while.
What I'm doing is "kind of" a "mic input to MIDI converter", so it needs to respond quite fast.
I've looked around online, and apparently the way to go is either DirectSound or the waveIn* API functions. From what I've read, the waveIn APIs let me fill a buffer of a certain size, which is fine for recording and post-processing, but I'm wondering: how do I do real-time processing?
Do I use 10 ms capture buffers and maintain a circular 50 ms or 100 ms array myself, with a callback that triggers the analysis every 10 ms? (So each analysis pass sees the latest 100 ms of input, of which only 10 ms are new.)
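To make the question concrete, here is roughly what I have in mind, sketched in plain C. All the names and sizes here are mine (nothing from a real audio API); it just models the "small chunks feeding a sliding window" idea:

```c
#include <string.h>

/* Hypothetical sketch: a sliding analysis window fed by fixed-size
   capture chunks (10 ms chunks, 100 ms window), mono 16-bit samples
   at 44.1 kHz. */

#define CHUNK_SAMPLES  441          /* 10 ms at 44.1 kHz */
#define WINDOW_CHUNKS  10           /* keep the latest 100 ms */
#define WINDOW_SAMPLES (CHUNK_SAMPLES * WINDOW_CHUNKS)

typedef struct {
    short samples[WINDOW_SAMPLES];  /* oldest sample first */
    int   filled;                   /* valid samples so far (until full) */
} SlidingWindow;

/* Push one freshly captured chunk; the oldest 10 ms falls off the front.
   In a real capture callback you would copy the device buffer, re-queue
   it, and hand the copy to code like this on another thread. */
static void window_push(SlidingWindow *w, const short *chunk)
{
    memmove(w->samples, w->samples + CHUNK_SAMPLES,
            (WINDOW_SAMPLES - CHUNK_SAMPLES) * sizeof(short));
    memcpy(w->samples + WINDOW_SAMPLES - CHUNK_SAMPLES,
           chunk, CHUNK_SAMPLES * sizeof(short));
    if (w->filled < WINDOW_SAMPLES)
        w->filled += CHUNK_SAMPLES;
}
```

My (possibly wrong) assumption is that with the waveIn route I would prepare several small WAVEHDR buffers, and each time the callback hands one back, copy it out, re-queue it with waveInAddBuffer, and run something like `window_push` plus the pitch analysis on a separate thread rather than inside the callback itself.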
Am I missing something here?
Also, how is this done with DirectSound? Does it offer any capabilities beyond the plain waveIn APIs?