
Ok, I tried all sorts of titles and they all failed (so if someone comes up with a better title, feel free to edit it :P)

I have the following problem: I am using an API to access hardware, which I didn't write. To add libraries to that API I need to inherit from the API interface, and the API does everything else.

I plugged a music generator library into that API. The problem is that the API only calls the music library when the buffer is empty, and asks for a hardcoded amount of data (exactly 1024*16 samples... dunno why).

This means that the music generator library cannot use the CPU's full potential while playing music. Even when the music library is not keeping up, CPU usage stays low (around 3%), so in parts of the music where there is too much complex stuff, the buffer underruns (i.e. the soundcard plays the empty area of the buffer, because the music library function hasn't returned yet).

Tweaking the hardcoded number would only make the software work on some machines and not on others, depending on several factors...

So I came up with two possible solutions. The first: hack the API with some new buffer logic, but I haven't figured out anything in that area.

The second, for which I actually worked out the logic: give the music library its own thread, with its own separate buffer that it fills all the time. When the API calls the music library for more data, instead of generating it on the spot, the library simply copies data from that separate buffer to the soundcard buffer, and then resumes generating music.

My problem is that although I have several years of programming experience, I have always avoided multi-threading, and I don't even know where to start...

The question is: can someone suggest another solution, OR point me to a place that will give me info on how to implement my threaded solution?

EDIT:

I am not READING files, I am GENERATING, or CALCULATING, the music, got it? This is NOT a .wav or .ogg library. This is why I mentioned CPU time: if I could use 100% of the CPU, I would never get an underrun. But I can only use the CPU in the short window between the program realizing that the buffer is reaching its end and the actual end of the buffer, and that window is sometimes shorter than the time the program takes to calculate the music.

+2  A: 

I believe that a solution with a separate thread that prepares data for the library, so that it is ready when requested, is the best way to reduce latency and solve this problem. One thread generates music data and stores it in a buffer, and the API's thread takes data from that buffer whenever it needs it. In this case you need to synchronize access to the buffer, whether you are reading or writing, and make sure the buffer doesn't grow too large when the API is too slow to consume it. To implement this, you need a thread, a mutex and a condition primitive from a threading library, plus two flags: one to indicate that a stop has been requested, and another to ask the thread to pause filling the buffer when the API cannot keep up and the buffer is getting too big. I'd recommend using the Boost.Thread library for C++.
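
A minimal sketch of that producer/consumer arrangement with Boost.Thread follows; generate_block() and the class name are placeholders, standing in for whatever call the music library really exposes:

    #include <cstddef>
    #include <deque>
    #include <vector>
    #include <boost/thread.hpp>   // boost::thread, mutex, condition_variable, unique_lock

    // Hypothetical stand-in for the music library's real generation call;
    // here it just returns silence so the sketch is self-contained.
    std::vector<short> generate_block(std::size_t samples) {
        return std::vector<short>(samples, 0);
    }

    class SampleQueue {
    public:
        explicit SampleQueue(std::size_t max_blocks) : max_blocks_(max_blocks), stop_(false) {}

        // Runs in the generator thread: keep the queue topped up, pause while it is full.
        void producer_loop(std::size_t block_size) {
            for (;;) {
                // Do the expensive work without holding the lock.
                std::vector<short> block = generate_block(block_size);

                boost::unique_lock<boost::mutex> lock(mutex_);
                while (!stop_ && blocks_.size() >= max_blocks_)
                    not_full_.wait(lock);          // pause: the API is not consuming fast enough
                if (stop_)
                    return;
                blocks_.push_back(block);
            }
        }

        // Called from the API's callback: copy out one pre-generated block.
        // Returns false if nothing is ready yet (the caller could output silence).
        bool pop(std::vector<short>& out) {
            boost::unique_lock<boost::mutex> lock(mutex_);
            if (blocks_.empty())
                return false;
            out.swap(blocks_.front());
            blocks_.pop_front();
            not_full_.notify_one();                // wake the generator if it was paused
            return true;
        }

        // Ask the generator thread to exit (join it afterwards).
        void stop() {
            boost::unique_lock<boost::mutex> lock(mutex_);
            stop_ = true;
            not_full_.notify_all();
        }

    private:
        std::deque<std::vector<short> > blocks_;
        std::size_t max_blocks_;
        bool stop_;
        boost::mutex mutex_;
        boost::condition_variable not_full_;
    };

    // Usage sketch:
    //   SampleQueue queue(8);   // keep at most 8 blocks generated ahead
    //   boost::thread generator(boost::bind(&SampleQueue::producer_loop, &queue, 1024 * 16));
    //   ... from the API callback: queue.pop(block), then copy it to the soundcard buffer ...
    //   queue.stop();
    //   generator.join();

Bounding the queue (max_blocks) is what keeps the buffer from growing without limit when the API is slow, and the pop() call made from the API callback only copies already-generated data, so it stays cheap.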

Vlad Lazarenko
Unfortunately your response came too late; I had already implemented a pthread solution... But it is a good reply anyway.
speeder
A: 

You don't necessarily need a new thread to solve this problem. Your operating system may provide an asynchronous read operation; for example, on Windows, you would open the file with the FILE_FLAG_OVERLAPPED flag to make any operations on it asynchronous.

If your operating system supports this functionality, you could make a large buffer that holds a few calls' worth of data. When the application starts, you fill the buffer; once it's filled, you pass off the first section of the buffer to the API. When the API returns, you read in more data to overwrite the section of the buffer that your last API call consumed. Because the read is asynchronous, it fills the buffer while the API is playing music.

The implementation could be more complex than this, e.g. using a circular buffer, or waiting until a few of the sections have been consumed and then reading in multiple sections at once instead of one section at a time.
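
A bare-bones sketch of one overlapped read on Windows, assuming the data did come from a file (the file name is hypothetical and error handling is trimmed):

    #include <windows.h>

    int main() {
        // Open the (hypothetical) data file for asynchronous (overlapped) I/O.
        HANDLE file = CreateFileA("music.dat", GENERIC_READ, FILE_SHARE_READ, NULL,
                                  OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
        if (file == INVALID_HANDLE_VALUE)
            return 1;

        char section[1024 * 16 * sizeof(short)];            // one API call's worth of data
        OVERLAPPED ov = {};
        ov.Offset = 0;                                       // file offset to read from
        ov.hEvent = CreateEventA(NULL, TRUE, FALSE, NULL);   // signaled when the read completes

        // Start the read; with FILE_FLAG_OVERLAPPED it returns immediately
        // and the transfer completes in the background.
        if (!ReadFile(file, section, sizeof(section), NULL, &ov) &&
            GetLastError() != ERROR_IO_PENDING)
            return 1;                                        // real failure, not just "still pending"

        // ... hand previously filled sections to the API while the read proceeds ...

        // Later, block until this particular read has finished and find out how much arrived.
        DWORD bytesRead = 0;
        GetOverlappedResult(file, &ov, &bytesRead, TRUE);

        CloseHandle(ov.hEvent);
        CloseHandle(file);
        return 0;
    }

A circular-buffer variant would simply cycle ov.Offset and the destination section through the large buffer instead of always reading into the same place.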

dauphic
@dauphic Nice idea, if only this API supported asynchronous I/O, which doesn't seem to be the case.
Vlad Lazarenko