Currently, SDL_Mixer has two types of sound resources: chunks and music.
Apart from the API and supported-format limitations, are there any reasons not to load and play music as a Mix_Chunk on a channel? (memory, speed, etc.)
The API is the real issue. The "music" APIs are designed to deal with streaming compressed music, while the "sound" APIs aren't. Then again, if you manage to make it work in your app, then it works.
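To make the contrast concrete, here is a minimal sketch of the two code paths side by side, assuming SDL2 with SDL_mixer 2.x; the file names "explosion.wav" and "theme.ogg" are just placeholders:

    #include <SDL.h>
    #include <SDL_mixer.h>

    int main(int argc, char *argv[])
    {
        SDL_Init(SDL_INIT_AUDIO);
        Mix_OpenAudio(44100, MIX_DEFAULT_FORMAT, 2, 1024);

        /* "Sound" API: the whole sample is decoded into RAM up front. */
        Mix_Chunk *sfx = Mix_LoadWAV("explosion.wav");
        Mix_PlayChannel(-1, sfx, 0);     /* any free channel, play once */

        /* "Music" API: the file is decoded incrementally while it plays. */
        Mix_Music *song = Mix_LoadMUS("theme.ogg");
        Mix_PlayMusic(song, -1);         /* loop until stopped */

        SDL_Delay(5000);                 /* let both play for a bit */

        Mix_FreeChunk(sfx);
        Mix_FreeMusic(song);
        Mix_CloseAudio();
        SDL_Quit();
        return 0;
    }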
I haven't looked at the SDL code, but my guess would be that "chunks" are intended for smaller sound samples and are cached in memory, fully decoded, while "music" is streamed: not cached in memory in its entirety, but decoded and buffered as needed, on the assumption that it will mostly be played from the beginning and continuously from that point, with maybe an occasional reset back to the start.
So the reason is memory. You don't want to decode, say, 4 minutes of a 16-bit stereo song into memory, as it will eat 44100 Hz * 2 bytes * 2 channels * 4 minutes * 60 sec/min = 42,336,000 bytes (about 40 MB), when you can decode and buffer smaller pieces of it.
OTOH, if you have the ~10 MB of RAM per minute of music to waste and you need the CPU that would be consumed by on-the-fly decoding... you could probably use chunks.
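For completeness, a sketch of that "trade RAM for CPU" route. This assumes your SDL_mixer build's chunk loader can actually decode the song's container format (recent SDL_mixer 2.x builds can load OGG/FLAC through Mix_LoadWAV, depending on compile-time options); "song.ogg" is a placeholder file name:

    /* Pre-decode an entire song into a chunk, accepting roughly
     * 10 MB of RAM per minute of 44.1 kHz 16-bit stereo audio. */
    Mix_Chunk *song = Mix_LoadWAV("song.ogg");   /* fully decoded here */
    if (song == NULL) {
        SDL_Log("Could not decode as a chunk: %s", Mix_GetError());
    } else {
        /* 44100 Hz * 2 bytes * 2 channels = 176,400 bytes per second. */
        SDL_Log("Decoded size: %u bytes", song->alen);
        Mix_PlayChannel(-1, song, 0);            /* plays like any other chunk */
    }

If the load succeeds, the song behaves exactly like a sound effect: it occupies a mixing channel, can overlap other chunks, and costs no decoding CPU during playback.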