In follow-up to this question, it
appears that some numbers cannot be
represented exactly in floating point at all,
and are instead approximated.
Correct.
How are floating point numbers stored?
Is there a common standard for the different sizes?
As the other posters already mentioned, almost exclusively IEEE754 and its revision
IEEE754R. Googling either term turns up thousands of explanations, complete with bit patterns and what each field means.
If you ever run into anything else: two other FP formats are still in common use, IBM and DEC-VAX. And some esoteric machines and compilers (BlitzBasic, TurboPascal) have their own
odd formats.
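To make the IEEE754 layout concrete, here is a small sketch (using Python's standard `struct` module) that pulls a 64-bit double apart into its sign, exponent, and fraction fields; the helper name `decompose` is mine, not a library function:

```python
import struct

def decompose(x):
    # Reinterpret the double's 8 bytes as a 64-bit unsigned integer.
    bits = struct.unpack('>Q', struct.pack('>d', x))[0]
    sign = bits >> 63                    # 1 sign bit
    exponent = (bits >> 52) & 0x7FF      # 11-bit biased exponent (bias 1023)
    fraction = bits & ((1 << 52) - 1)    # 52-bit fraction ("mantissa")
    return sign, exponent, fraction

# 1.0 = (-1)^0 * 1.0 * 2^(1023 - 1023): sign 0, biased exponent 1023, fraction 0
print(decompose(1.0))   # → (0, 1023, 0)
```

The same three fields exist in 32-bit floats, just with 8 exponent bits (bias 127) and a 23-bit fraction.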
What kind of gotchas do I need to watch out for if I use floating point?
Are they cross-language compatible (ie, what conversions do I need to deal with to
send a floating point number from a python program to a C program over TCP/IP)?
Practically none; floating point numbers are cross-language compatible.
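For the Python-to-C case specifically, a minimal sketch of the sending side: pack the value as an IEEE754 double in an explicit byte order, and have the C side reassemble the same 8 bytes (the value `3.141592653589793` is just an example):

```python
import struct

value = 3.141592653589793

# Pack as an IEEE754 double in network (big-endian) byte order.
# A C receiver would read 8 bytes from the socket and reassemble the
# double, byte-swapping on little-endian hosts (such as x86) before use.
payload = struct.pack('!d', value)
print(len(payload))        # 8 bytes on the wire

# Simulate the receiving side: unpack with the same format string.
(received,) = struct.unpack('!d', payload)
print(received == value)   # True: the bit pattern round-trips exactly
```

The important part is agreeing on a byte order explicitly (`!` means network order) rather than relying on each machine's native order.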
Some rarely occurring quirks:
IEEE754 defines sNaNs (signalling NaNs) and qNaNs (quiet NaNs). The former cause a trap which forces the processor to call a handler routine, if one is loaded; the latter don't. Because language designers hated the possibility of sNaNs interrupting their workflow, and supporting them would force support for handler routines, sNaNs are almost always silently converted into qNaNs.
So don't rely on a 1:1 raw conversion. But again: this is very rare and only matters if NaNs
are present.
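The two NaN kinds differ only in one bit: with the exponent all ones, the top fraction bit (the "quiet" bit) is set for a qNaN and clear for an sNaN. A sketch, again using `struct` (the helper names are mine; whether the quiet bit survives a round trip is exactly the platform-dependent part described above, so the code only prints it rather than asserting it):

```python
import math
import struct

def double_bits(x):
    return struct.unpack('>Q', struct.pack('>d', x))[0]

def bits_to_double(b):
    return struct.unpack('>d', struct.pack('>Q', b))[0]

QUIET_BIT = 1 << 51   # top bit of the 52-bit fraction

# qNaN: exponent all ones, quiet bit set.
qnan = bits_to_double(0x7FF8000000000000)
print(math.isnan(qnan))   # True

# sNaN pattern: exponent all ones, quiet bit clear, nonzero payload.
snan = bits_to_double(0x7FF0000000000001)
print(math.isnan(snan))   # True

# Platform-dependent: many runtimes silently set the quiet bit here.
print(hex(double_bits(snan)), bool(double_bits(snan) & QUIET_BIT))
```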
You can have problems with endianness (the bytes are in the wrong order) if binary files are shared between machines with different byte orders. It is usually easy to detect, because many byte-swapped values decode to NaNs or absurd magnitudes.
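A quick sketch of what such a mixup looks like: the sender writes big-endian, the receiver wrongly decodes little-endian, and the value comes out as garbage (for 1.0 it happens to be a tiny subnormal; other bit patterns decode to NaN):

```python
import struct

x = 1.0
wire = struct.pack('>d', x)       # sender writes big-endian

# Receiver mistakenly decodes as little-endian: the bytes land in the
# wrong order, and 1.0 comes back as a tiny subnormal instead.
(wrong,) = struct.unpack('<d', wire)
print(wrong)          # not 1.0 -- an obviously absurd value
print(wrong == x)     # False
```

This is why a fixed, agreed-upon byte order (network order via `!` in Python's `struct`) is the safe default for any cross-machine exchange.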