Can someone tell me why big-endian / little-endian byte order can affect the waveform representation of an audio signal?
An audio signal can be thought of as a stream of multi-byte samples. If you write the stream in big-endian order and read it back as little-endian, every sample's bytes come out swapped and you'll get garbage. A rough sketch of what that looks like follows below.
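Here is a minimal sketch in Python (the ramp of sample values is made up purely for illustration): swapping each 16-bit sample's bytes, which is exactly what reading with the wrong byte order does, turns a smooth signal into noise.

    import array

    # A tiny "audio stream" of 16-bit samples (a quiet rising ramp).
    samples = array.array('h', [0, 1000, 2000, 3000, 4000])

    # Reading the same bytes with the opposite byte order is equivalent
    # to swapping each sample's two bytes:
    garbled = array.array('h', samples)   # copy
    garbled.byteswap()

    print(list(samples))   # [0, 1000, 2000, 3000, 4000]
    print(list(garbled))   # [0, -6141, -12281, -18421, -24561] -- noise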
If, for example, you're using 16-bit samples in your audio data, big-endian and little-endian processors store them differently in memory (and likewise when reading from or writing to an audio file).
e.g. the sample represented by the hex value 0x1234 would be stored as the bytes 0x12 0x34 on a big-endian architecture, but as 0x34 0x12 on a little-endian one.
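A minimal sketch with Python's struct module (the value 0x1234 is just the sample from the example above) shows both byte orders, and the bogus value you get if you read with the wrong one:

    import struct

    sample = 0x1234  # the 16-bit sample from the example above

    big    = struct.pack('>H', sample)   # big-endian bytes:    12 34
    little = struct.pack('<H', sample)   # little-endian bytes: 34 12
    print(big.hex(' '), '|', little.hex(' '))

    # Reading big-endian bytes as if they were little-endian:
    wrong = struct.unpack('<H', big)[0]
    print(hex(wrong))                    # 0x3412 -- not the value that was written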
Analogy:
If a stranger on the Internet gives you the date "10/11", you can't be sure whether they mean the 10th of November or the 11th of October, so you need to know which format they're using to get the correct date.
It's the same with binary data. Some computers/libraries/modules insist that the two bytes 0 and 1 (in that order) represent the 16-bit value 256, while others read them as the value 1. So when you're talking to someone (a microphone, a file of audio data, an internet stream), you need to know how they represent values in order to convert them into the representation your computer uses.
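To make the analogy concrete, here are those same two bytes interpreted both ways (Python's struct again; '>h' is big-endian signed 16-bit, '<h' is little-endian):

    import struct

    data = bytes([0, 1])  # the two bytes 0 and 1, in that order

    print(struct.unpack('>h', data)[0])  # 1   (big-endian reading)
    print(struct.unpack('<h', data)[0])  # 256 (little-endian reading)

This is why audio formats pin the byte order down: WAV (RIFF) stores sample data little-endian, while AIFF stores it big-endian, so a reader knows which conversion to apply.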