I am studying an audio conversion algorithm that receives an array of signed shorts.
At one point the algorithm converts the samples from 16 bits to 14 bits, and it does it like this:
int16_t sample = (old_sample + 2) >> 2;
It's clear to me that the shift is needed, since we want to drop the two least significant bits, but what is the + 2 for?