The binary format shouldn't change - changing it would certainly be a breaking change to existing specifications. It's defined to be IEEE 754 / IEC 60559:1989 format, as Jimmy said (C# 3.0 language spec section 1.3; ECMA-335 section 8.2.2). The code in DoubleConverter should be fine and robust.
For the sake of future reference, the relevant bit of the code is:
// Translate the double into sign, exponent and mantissa.
long bits = BitConverter.DoubleToInt64Bits(d);
// The sign bit is bit 63, so a negative double shows up
// as a negative 64-bit integer.
bool negative = (bits < 0);
int exponent = (int) ((bits >> 52) & 0x7ffL);
long mantissa = bits & 0xfffffffffffffL;

if (exponent == 0)
{
    // Subnormal numbers: the exponent is effectively one higher,
    // but there's no extra normalisation bit in the mantissa.
    exponent++;
}
else
{
    // Normal numbers: leave the exponent as it is, but add the
    // implicit leading bit to the front of the mantissa.
    mantissa = mantissa | (1L << 52);
}

// Bias the exponent. It's actually biased by 1023, but we're
// treating the mantissa as m.0 rather than 0.m, so we need
// to subtract another 52 from it.
exponent -= 1075;

if (mantissa == 0)
{
    return "0";
}

// Normalize: while the mantissa is even, halve it and
// compensate by incrementing the exponent.
while ((mantissa & 1) == 0)
{
    mantissa >>= 1;
    exponent++;
}
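To make the decomposition concrete, here's a quick worked example of my own (not part of DoubleConverter) tracing 1.5 through that logic:

long bits = BitConverter.DoubleToInt64Bits(1.5); // 0x3FF8000000000000
int rawExponent = (int) ((bits >> 52) & 0x7ffL); // 1023, i.e. unbiased 0
long rawMantissa = bits & 0xfffffffffffffL;      // 0x8000000000000, the ".5"
// After adding the implicit bit, unbiasing and normalizing as above,
// you end up with mantissa = 3 and exponent = -1: 1.5 = 3 * 2^-1.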
The comments in DoubleConverter made sense to me at the time, but I'm sure I'd have to think for a while about them now. After the very first part you've got the "raw" exponent and mantissa - the rest of the code just adjusts them into a form that's simpler to work with.
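If you want to experiment with the raw and adjusted values yourself, here's a minimal self-contained sketch along the same lines (the Decompose method and the demo in Main are mine for illustration - the real DoubleConverter goes on to build the exact decimal expansion):

using System;

class Program
{
    // Sketch only: decompose a double into (negative, mantissa, exponent)
    // such that the value is (negative ? -1 : 1) * mantissa * 2^exponent.
    static void Decompose(double d, out bool negative,
                          out long mantissa, out int exponent)
    {
        long bits = BitConverter.DoubleToInt64Bits(d);
        negative = (bits < 0);
        exponent = (int) ((bits >> 52) & 0x7ffL);
        mantissa = bits & 0xfffffffffffffL;

        if (exponent == 0)
        {
            exponent++;              // Subnormal: no implicit leading bit
        }
        else
        {
            mantissa |= (1L << 52);  // Normal: restore the implicit bit
        }
        exponent -= 1075;            // Remove the 1023 bias and the 52 fraction bits

        // Normalize so the mantissa is odd (or zero).
        while (mantissa != 0 && (mantissa & 1) == 0)
        {
            mantissa >>= 1;
            exponent++;
        }
    }

    static void Main()
    {
        Decompose(0.1, out bool negative, out long mantissa, out int exponent);
        // Prints: 0.1 = 3602879701896397 * 2^-55
        Console.WriteLine("0.1 = {0}{1} * 2^{2}",
                          negative ? "-" : "", mantissa, exponent);
    }
}

Running it for 0.1 shows exactly why 0.1 isn't representable as a binary fraction: the nearest double is 3602879701896397 / 2^55.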