To extract the color information you have to work with bitwise operators. Colors are stored as 0xAARRGGBB, so you have to pull each byte out with a shift and a mask. This little snippet should do the job:
var hex: uint = 0x00ff00ff; // 0xAARRGGBB: alpha 0x00, red 0xff, green 0x00, blue 0xff
var a: int = (hex >>> 0x18) & 0xff; // shift alpha down 0x18 = 24 bits, then mask off one byte
var r: int = (hex >>> 0x10) & 0xff; // red: 0x10 = 16 bits
var g: int = (hex >>> 0x08) & 0xff; // green: 0x08 = 8 bits
var b: int = hex & 0xff;            // blue is already the lowest byte
trace(a, r, g, b); // 0 255 0 255
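The reverse operation works the same way: shift each channel into place and OR them together. Here is a sketch (written in TypeScript for illustration; the ActionScript version is analogous, and the `packARGB` name is just mine):

```typescript
// Pack four 8-bit channels back into a single 0xAARRGGBB value.
function packARGB(a: number, r: number, g: number, b: number): number {
  // ">>> 0" forces an unsigned 32-bit result; without it, an alpha
  // above 0x7f shifted left by 24 would come out negative.
  return ((a << 24) | (r << 16) | (g << 8) | b) >>> 0;
}

console.log(packARGB(0x00, 0xff, 0x00, 0xff).toString(16)); // "ff00ff"
```

Extracting and repacking is lossless, so the two snippets round-trip any 32-bit color.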
Depending on how you specify RGBA values the answer might change. If each channel is specified by one byte, you can express exactly as many colors as with a hex value: your "hex value" is an unsigned integer consisting of 4 bytes. Flash also has the Number type, which is 64-bit, so each channel would take 8 bytes instead of one. You can imagine that this gives you much more precision. However, your screen is probably set to "True Color (32-bit)" and will not be able to show that many colors. You could even go beyond double precision (64-bit) and use decimal precision, which is mostly used for financial calculations: each value is 128-bit, which makes 16 bytes for each channel. That is 16 times more than with your hex representation. Of course that is much more precision, but your monitor will not be able to handle it.
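To make the precision point concrete, here is a small sketch (TypeScript for illustration; `toByte` is a name I made up) showing how a high-precision floating-point channel collapses when quantized to one byte:

```typescript
// Quantize a floating-point channel value in [0, 1] to an 8-bit level.
function toByte(channel: number): number {
  return Math.round(channel * 255);
}

// Two clearly distinct 64-bit float values...
const x = 0.5;
const y = 0.500001;
// ...land on the same 8-bit level, so the extra precision is simply lost
// once the color reaches an 8-bit-per-channel display.
console.log(toByte(x), toByte(y)); // 128 128
```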
A side note: your image size is also width*height*channels*(bytes per channel). So if you use one byte per channel, which is 8 bits, you already get enough colors. For a big image the difference is quite large if you go from 8 bits to 32 bits per channel.
A 1024x768 RGBA image with 8-bit precision is 3072 KB in memory, whereas the same image at 32 bits per channel is 12288 KB. That same image would take up 24576 KB using Flash's Number type. So you are probably better off staying with 8-bit precision, which is enough in most cases.
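Those numbers fall straight out of the formula above; a quick sketch to verify them (TypeScript for illustration, `imageBytes` is a hypothetical helper):

```typescript
// bytes = width * height * channels * bytesPerChannel
function imageBytes(width: number, height: number,
                    channels: number, bytesPerChannel: number): number {
  return width * height * channels * bytesPerChannel;
}

const kb = (bytes: number) => bytes / 1024;

console.log(kb(imageBytes(1024, 768, 4, 1))); // 3072  -> 8 bits per channel
console.log(kb(imageBytes(1024, 768, 4, 4))); // 12288 -> 32 bits per channel
console.log(kb(imageBytes(1024, 768, 4, 8))); // 24576 -> 64-bit Number per channel
```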
I hope that helps and that you now have a better understanding of images and their structure.