I'm trying to sort out the difference between Sound.readData and Sound.lock in the FMOD library (I'm programming in C#/C++, but I'll take an answer in any language!). The end goal is to render a view of the waveform, which I understand cannot be done (easily) with Channel.getWaveData.

I have been able to get both the Sound.readData and Sound.lock approaches to return the same data with createStream and createSound, respectively (I'm not sure yet whether it's valid - e.g. decoded - data). I'd like to use the stream approach if possible to minimize the memory footprint, but I'm not really sure what it is that I'm reading now, and the documentation isn't entirely clear.

A: 

After more research I'm fairly sure there's no significant difference between the two. I'll probably end up using readData, as it seems a little easier and more flexible. Also, lock is a confusing name for this method :).

Jeff
+1  A: 

Essentially the difference between the two is what you are accessing.

With Sound::lock you are locking the sound's sample buffer: when you load with createSound, FMOD decompresses the file to PCM and puts it in the sample buffer. You use this function to directly access that buffer (you lock the portion of it you want). On a console, that data may be in a native compressed format. As a side note, the idea of "locking" a sound comes from the DirectSound API, where you would "lock" a buffer to prevent access to it while you read from or write to it; when you are done, you unlock it, giving access back to the audio system.
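A minimal sketch of the sample-buffer approach, assuming the FMOD Ex C++ API (the file name and the decision to lock the whole buffer are just for illustration):

```cpp
// Sketch: reading the whole decoded PCM buffer of a sample via Sound::lock.
FMOD::System *system = 0;
FMOD::System_Create(&system);
system->init(32, FMOD_INIT_NORMAL, 0);

FMOD::Sound *sound = 0;
// createSound with FMOD_CREATESAMPLE decodes the entire file to PCM up front.
system->createSound("track.mp3", FMOD_SOFTWARE | FMOD_CREATESAMPLE, 0, &sound);

unsigned int lengthBytes = 0;
sound->getLength(&lengthBytes, FMOD_TIMEUNIT_PCMBYTES);

void *ptr1 = 0, *ptr2 = 0;
unsigned int len1 = 0, len2 = 0;
// Lock the portion you want (here, the whole buffer).
sound->lock(0, lengthBytes, &ptr1, &ptr2, &len1, &len2);
// ... read len1 bytes of PCM from ptr1 (ptr2/len2 cover any wrapped region) ...
sound->unlock(ptr1, ptr2, len1, len2);
```

Note the two pointer/length pairs: as in DirectSound, the lock can come back in two pieces, so always check len2 before assuming the data is contiguous.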

Sound::readData is a more gradual way to pull (stream) PCM data out of the sound: here you are actually decoding the compressed data to PCM with each readData call. You do this in smaller blocks, and you always get the final decoded PCM data. This approach is more flexible and memory efficient.

For example, you could load a 10MB MP3 as a stream and then decode it to PCM in chunks using Sound::readData. Otherwise you would need to load it as a sample (which decodes it all to PCM at createSound time) and then lock a massive buffer to get the PCM.
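The streaming approach above might look like this, again assuming the FMOD Ex C++ API and an already-initialized FMOD::System named system (the file name and chunk size are illustrative):

```cpp
// Sketch: decoding PCM out of a stream in small chunks via Sound::readData.
FMOD::Sound *sound = 0;
system->createStream("track.mp3", FMOD_SOFTWARE | FMOD_CREATESTREAM, 0, &sound);

char chunk[16384];
unsigned int bytesRead = 0;
FMOD_RESULT result = FMOD_OK;
do {
    result = sound->readData(chunk, sizeof(chunk), &bytesRead);
    // ... process bytesRead bytes of decoded PCM (e.g. accumulate waveform peaks) ...
} while (result == FMOD_OK && bytesRead == sizeof(chunk));
```

Sound::seekData(0) rewinds the read cursor if you need a second pass, and only one chunk's worth of PCM is ever resident at a time.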

getWaveData is used for displaying the waveform at the current point of playback; it should not be used to decode the complete waveform of the file. Depending on how frequently you call getWaveData, you may get the same block of data multiple times, as it's a single snapshot in time.
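Whichever way you get the PCM, drawing a waveform view usually means reducing many samples to one min/max pair per horizontal pixel. A self-contained sketch (plain C++, no FMOD involved; the buildPeaks name and the 16-bit mono assumption are mine):

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <cstddef>
#include <utility>
#include <vector>

// Reduce 16-bit mono PCM samples to per-pixel (min, max) pairs for drawing.
std::vector<std::pair<int16_t, int16_t>>
buildPeaks(const std::vector<int16_t> &samples, std::size_t pixels)
{
    std::vector<std::pair<int16_t, int16_t>> peaks(pixels, {0, 0});
    if (samples.empty() || pixels == 0)
        return peaks;

    // How many source samples collapse into one pixel column.
    const std::size_t perPixel =
        std::max<std::size_t>(1, samples.size() / pixels);

    for (std::size_t p = 0; p < pixels; ++p) {
        std::size_t begin = p * perPixel;
        if (begin >= samples.size())
            break;
        std::size_t end = std::min(samples.size(), begin + perPixel);

        int16_t lo = samples[begin], hi = samples[begin];
        for (std::size_t i = begin; i < end; ++i) {
            lo = std::min(lo, samples[i]);
            hi = std::max(hi, samples[i]);
        }
        peaks[p] = {lo, hi};
    }
    return peaks;
}
```

You would call this once per chunk (or once over the whole locked buffer) and then draw a vertical line from min to max in each pixel column.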

Mathew Block