views: 66
answers: 2
I know the WAV file format uses signed integers for 16-bit samples. It also stores them in little-endian order, meaning the lowest 8 bits come first, then the next 8, and so on. Is the sign bit in the first byte, or is it always on the most significant bit (the highest-value bit)?

Meaning:
Which one is the sign bit in the WAV format?

++---+---+---+---+---+---+---+---++---+---+---+---+---+---+---+---++
|| a | b | c | d | e | f | g | h || i | j | k | l | m | n | o | p ||
++---+---+---+---+---+---+---+---++---+---+---+---+---+---+---+---++
--------------------------- here -> ^ -------------- or here? -> ^

i or p?

+2  A: 

The sign bit is the most significant bit on any two's-complement machine (such as the x86), and so in a little-endian format it will be in the last byte.

Just 'cause I didn't want to be the one not including ASCII art... :)

/---------------first byte ------------\ /--------------second byte-------------\
+-------------------------------------------------------------------------------+
|  0 |  1 |  2 |  3 |  4 |  5 |  6 |  7 |  8 |  9 | 10 | 11 | 12 | 13 | 14 | 15 |
+-------------------------------------------------------------------------------+
                                                               sign bit -----^

Bits are basically "backwards" from how most people think about them, which is why the high byte is last. But it's all consistent, and "bit 15" comes after "bit 0" just like other addresses work. You don't have to do any bit twiddling, because the hardware talks in terms of bytes at all but the lowest levels -- so when you read a byte, it looks just like you'd expect. Just look at the most significant bit of your word, or the last byte of it (if you're reading a byte at a time), and there's your sign bit.

Note, though, that two's complement doesn't exactly designate a particular bit as the "sign bit". That's just a very convenient side effect of how the numbers are represented. For 16-bit numbers, -x is equal to 65536-x rather than 32768+x (which would be the case if the upper bit were strictly the sign).

cHao
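The claims above are easy to check with a few lines of Python (used here purely for illustration; `struct` is the standard library's byte-packing module, and `'<h'`/`'<H'` mean little-endian signed/unsigned 16-bit):

```python
import struct

# Pack -2 as a signed little-endian 16-bit integer ('<h').
# The two's complement of 2 in 16 bits is 0xFFFE, stored on disk as FE FF.
raw = struct.pack('<h', -2)
print(raw.hex())                      # feff -> low byte comes first

# The sign bit is the most significant bit of the LAST byte:
print((raw[1] >> 7) & 1)              # 1 -> negative

# And the raw (unsigned) encoding of -x is 65536 - x, as noted above:
print(struct.unpack('<H', raw)[0])    # 65534 == 65536 - 2
```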
If you're reading actual bytes rather than an ASCII representation, then yes -- the lower 8 bits (and thus, the LSB) will *always* be the first byte, and the upper 8 (MSB included) will *always* be the last -- that's what "little endian" means. But the bits within the byte are stored however the hardware wants to store them. You don't have to know or care how that works unless you're building a hard drive.
cHao
@cHao I'm talking about how it is stored in the WAV file format, not in hardware. I read that the samples are stored as little endian signed 16-bit integers.
Leo Izen
cHao
Don't get too caught up in whether the bits go from "left to right" or "right to left" or whatever. Bytes are bytes, and they'll have the same value no matter how they're drawn. Only the hardware needs to know about the order of bits in a byte. The big thing you have to remember is that the least significant *byte* comes first, and the most significant *byte* is last. The most significant bit of the whole value will always be the most significant bit of the most significant byte.
cHao
A: 

signed int, little endian:

byte 1 (lsb)      byte 2 (msb)
----------------------------------
7|6|5|4|3|2|1|0 | 7|6|5|4|3|2|1|0|
----------------------------------
                  ^
                  |
                 sign bit

You only need to concern yourself with byte order when reading or writing a short int to some external medium. Within your program, the sign bit is the most significant bit of the short, regardless of whether you're on a big- or little-endian platform.

nos
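As a sketch of that reading step, here is a small Python example (the function name `read_s16le` is my own, not from any library) that assembles a signed 16-bit sample from two bytes read off external media:

```python
def read_s16le(b0, b1):
    """Assemble a signed 16-bit sample from two little-endian bytes.

    b0 is the first (least significant) byte; b1 is the second
    (most significant) byte, which carries the sign bit.
    """
    value = b0 | (b1 << 8)    # unsigned value, 0..65535
    if value & 0x8000:        # sign bit set -> negative sample
        value -= 0x10000      # undo the two's complement: -x == 65536 - x
    return value

print(read_s16le(0xFE, 0xFF))   # -2
print(read_s16le(0xFF, 0x7F))   # 32767, the largest positive sample
```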