Hi,
I've got a C# application that plays simple WAV files through DirectSound. With the test data I had, the code worked fine. However, when I used real-world data, it produced a very unhelpful error on creation of the secondary buffer: "ArgumentException: Value does not fall within the expected range."
The test WAVs had a 512 kbps bit rate, 16-bit audio sample size, and 32 kHz audio sample rate. The new WAVs are 1152 kbps, 24-bit, and 48 kHz respectively. How can I get DirectSound to cope with these larger values, or failing that, how can I programmatically detect these values before attempting to play the file?
I'm using Managed DirectX v9.00.1126, and I've included some sample code below:
using DS = Microsoft.DirectX.DirectSound;
...
// Create the DirectSound device and attach it to this form.
DS.Device device = new DS.Device();
device.SetCooperativeLevel(this, CooperativeLevel.Normal);
...
// Plain buffer description; no effects control needed.
BufferDescription bufferDesc = new BufferDescription();
bufferDesc.ControlEffects = false;
...
try
{
    // This constructor is where the ArgumentException is thrown for the new files.
    SecondaryBuffer sound = new SecondaryBuffer(path, bufferDesc, device);
    sound.Play(0, BufferPlayFlags.Default);
}
...
...
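For the detection half of my question, this is the direction I've been sketching: read the fmt fields with a plain BinaryReader before handing the file to DirectSound. This is a minimal sketch that assumes the "fmt " chunk immediately follows the RIFF header, and PrintWavFormat is my own name, nothing to do with DirectSound:

using System;
using System.IO;

static void PrintWavFormat(string path)
{
    using (BinaryReader reader = new BinaryReader(File.OpenRead(path)))
    {
        reader.ReadBytes(12);                     // "RIFF", riff size, "WAVE"
        reader.ReadBytes(8);                      // "fmt " id and chunk size (assumed to come first)
        short formatTag     = reader.ReadInt16(); // 1 = PCM
        short channels      = reader.ReadInt16();
        int samplesPerSec   = reader.ReadInt32();
        int avgBytesPerSec  = reader.ReadInt32();
        reader.ReadInt16();                       // block align
        short bitsPerSample = reader.ReadInt16();

        Console.WriteLine("tag={0}, channels={1}, rate={2} Hz, bits={3}, {4} kbps",
            formatTag, channels, samplesPerSec, bitsPerSample, avgBytesPerSec * 8 / 1000);
    }
}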
Additional info: the real-world WAV files won't play in Windows Media Player either; it tells me a codec is needed to play the file, while they play fine in Winamp.
Additional info 2: Comparing the bytes of the working test data and the bad real-world data, I can see that past the RIFF chunk, the bad data has a "bext" chunk, which the internet informs me is metadata associated with the broadcast audio extension, while the test data goes straight into a "fmt " chunk. There /is/ a "fmt " chunk in the bad data, so I don't know whether the files are badly formed or whether the loaders should be looking further for the fmt data. I'll see if I can get some information on this rogue bext chunk from the people supplying me the data; if they can remove it, my code may still work.
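Following on from that, the naive reader I sketched above is exactly what would break on these files, because it assumes "fmt " comes first. A chunk-walking version ought to skip the bext chunk (or any other chunk) until it finds "fmt ". Again just a sketch, with FindFmtChunk being my own name:

using System;
using System.IO;
using System.Text;

static void FindFmtChunk(string path)
{
    using (BinaryReader reader = new BinaryReader(File.OpenRead(path)))
    {
        reader.ReadBytes(12); // "RIFF", riff size, "WAVE"

        while (reader.BaseStream.Position < reader.BaseStream.Length)
        {
            string chunkId = Encoding.ASCII.GetString(reader.ReadBytes(4));
            int chunkSize = reader.ReadInt32();

            if (chunkId == "fmt ")
            {
                short formatTag     = reader.ReadInt16(); // 1 = PCM, 0xFFFE = WAVE_FORMAT_EXTENSIBLE
                short channels      = reader.ReadInt16();
                int samplesPerSec   = reader.ReadInt32();
                reader.ReadBytes(6);                      // avg bytes/sec + block align
                short bitsPerSample = reader.ReadInt16();
                Console.WriteLine("tag={0}, channels={1}, rate={2} Hz, bits={3}",
                    formatTag, channels, samplesPerSec, bitsPerSample);
                return;
            }

            // Not the chunk we want (e.g. "bext"): skip its data.
            // Chunks are word-aligned, so odd sizes get a pad byte.
            reader.BaseStream.Seek(chunkSize + (chunkSize & 1), SeekOrigin.Current);
        }
    }
}

If the format tag for the 24-bit files turns out not to be plain PCM (1), that might also explain Windows Media Player asking for a codec, though I haven't confirmed that yet.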