> To me this essentially means that we just (comparing to the initial RGB image) take a chunk of color information and dispose of it while applying the RGB -> YCbCr transformation.
No information is discarded by the transformation itself. Mathematically the transformation is reversible: if you convert a color to YCbCr and transform the result back to RGB, you get the same color back. In a perfect world, at least.
In practice there is a loss of information. Assume you start with three bytes in RGB. If you convert to YCbCr you get three values of which two, namely Cb and Cr, no longer fit into 8 bits exactly and have to be rounded. Technically speaking, the two representations RGB and YCbCr have different gamuts (http://en.wikipedia.org/wiki/Gamut).
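You can see both effects in a quick sketch. The coefficients below are the standard full-range BT.601 ones used by JFIF jpeg; the script itself is just illustrative:

```python
def rgb_to_ycbcr(r, g, b):
    # Full-range BT.601 conversion as used in JFIF jpeg.
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def ycbcr_to_rgb(y, cb, cr):
    r = y + 1.402    * (cr - 128)
    g = y - 0.344136 * (cb - 128) - 0.714136 * (cr - 128)
    b = y + 1.772    * (cb - 128)
    return r, g, b

# Round trip in floating point: exact, apart from tiny rounding noise.
print(ycbcr_to_rgb(*rgb_to_ycbcr(200, 50, 30)))
# -> roughly (200.0, 50.0, 30.0)

# Round trip through 8-bit integers: small errors creep in.
y, cb, cr = (round(v) for v in rgb_to_ycbcr(200, 50, 30))
print([round(v) for v in ycbcr_to_rgb(y, cb, cr)])
# -> [201, 50, 31], off by one in R and B
```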
Fortunately this information loss is rarely visible. Important side note: this gamut issue is an unwanted side effect and has nothing to do with the choice of YCbCr in the first place.
The point of using YCbCr is that the data stored in Y is the most important: it is the brightness, or gray-scale value. Cb and Cr carry the color information with the brightness subtracted, so to speak; they are essentially scaled versions of the differences B - Y and R - Y.
Now our eyes aren't that good at picking up subtle differences in color, but they are sensitive to shades of intensity. jpeg makes use of this by storing only a low-resolution image of Cb and Cr while keeping Y at full resolution. There are different ways to do this; the most common one leaves out every other pixel of Cb and Cr in both x and y, which reduces the space required for Cb and Cr by a factor of four (see the sketch below).
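A minimal sketch of that subsampling step, with plain Python lists standing in for a chroma plane (the function name is mine; real encoders work on larger planes, but the decimation is the same):

```python
def subsample_420(plane):
    # Keep every other sample in both x and y: the plane shrinks to
    # half width and half height, i.e. a quarter of the samples.
    return [row[::2] for row in plane[::2]]

cb = [
    [10, 11, 12, 13],
    [14, 15, 16, 17],
    [18, 19, 20, 21],
    [22, 23, 24, 25],
]
print(subsample_420(cb))  # -> [[10, 12], [18, 20]]
```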
> Where does the disposed part of color information come from, or how is it handled?
It does not magically come back; the information is lost forever. However, since it wasn't that important to begin with, we don't see many artifacts.
In jpeg, the left-out pixels of the Cb and Cr planes are approximated by upscaling the planes again. Some decoders simply replicate the missing pixels by copying a neighbour; others do linear interpolation. Both strategies are sketched below.
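Here is what both look like on a single row of chroma samples (1D for brevity; real decoders upscale in two dimensions, and the exact filter varies between implementations):

```python
def upsample_replicate(row):
    # Nearest-neighbour: each chroma sample is simply doubled.
    out = []
    for v in row:
        out += [v, v]
    return out

def upsample_linear(row):
    # Linear interpolation: each inserted sample is the average of
    # its two neighbours (the last sample is replicated).
    out = []
    for i, v in enumerate(row):
        out.append(v)
        nxt = row[i + 1] if i + 1 < len(row) else v
        out.append((v + nxt) // 2)
    return out

row = [10, 20, 40]
print(upsample_replicate(row))  # -> [10, 10, 20, 20, 40, 40]
print(upsample_linear(row))     # -> [10, 15, 20, 30, 40, 40]
```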