Simple: A double has 52 explicit mantissa bits (assuming IEEE 754). So generate a 52-bit (or larger) unsigned random integer (for example by reading bytes from /dev/urandom), convert it into a double and divide it by 2^(the number of bits it was).
This gives a numerically uniform distribution (in that the probability of the result falling in a given range is proportional to the size of that range) down to the 52nd binary digit.
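Here's a minimal sketch of that in C, assuming /dev/urandom as the byte source (the function name and the crude error handling are just for illustration):

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Simple method: 52 random bits, divided by 2^52. */
double random_double_simple(void)
{
    uint64_t bits;
    FILE *f = fopen("/dev/urandom", "rb");
    if (!f || fread(&bits, sizeof bits, 1, f) != 1)
        abort();                      /* real code should handle this */
    fclose(f);
    bits >>= 12;                      /* keep 52 of the 64 bits */
    return ldexp((double)bits, -52);  /* exact: divide by 2^52 */
}
```

Every output is an exact multiple of 2^-52 in [0, 1).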
Complicated: However, there are a lot of double values in the range [0,1) which the above cannot generate. To be specific, half the values in the range [0,0.5) (the ones that have their least significant bit set) can't occur. Three quarters of the values in the range [0,0.25) (the ones that have either of their least 2 bits set) can't occur, and so on, all the way down to only one positive value less than 2^-51 being possible, despite a double being capable of representing squillions of such values. So it can't be said to be truly uniform across the specified range at full precision.
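For one concrete example (a quick check, not a proof of anything): the representable double immediately above 0.25 has its least significant bit set, so it isn't a multiple of 2^-52 and the simple method can never produce it:

```c
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* The representable double immediately above 0.25. */
    double x = nextafter(0.25, 1.0);
    /* The simple method only outputs multiples of 2^-52, i.e. values
       for which x * 2^52 is an integer. This one isn't. */
    double scaled = x * 0x1p52;
    printf("x = %a, x * 2^52 = %a, integer? %s\n",
           x, scaled, scaled == floor(scaled) ? "yes" : "no");
    return 0;
}
```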
Of course we don't want to make each of those missing doubles equally probable, because then the result would on average be too small. We still want the probability of the result landing in a given range to be proportional to that range, just holding for much finer-grained ranges.
I think the following works. I haven't particularly studied or tested this algorithm, and personally I wouldn't use it without finding proper references indicating it's valid (a sketch in C follows the list, with the same caveat). But here goes:
- Start the exponent off at 52 and choose a 52-bit random unsigned integer (assuming 52 bits of mantissa).
- If the most significant bit of the integer is 0, increase the exponent by one, shift the integer left by one, and fill the least significant bit in with a new random bit.
- Repeat until either you hit a 1 in the most significant place, or else the exponent gets too big for your double (with this bookkeeping that means an exponent of 1073, beyond which the leading bit would sit below 2^-1022, the smallest positive normal double).
- If you found a 1, divide your value by 2^exponent. If you got all zeroes, return 0 (I know, that's not actually a special case, but it bears emphasis how very unlikely a 0 return is). [Edit: actually it might be a special case, depending on whether or not you want to generate denorms. If not, then once you have enough 0s in a row you discard anything left and return 0. But in practice this is so unlikely as to be negligible, unless the random source isn't random.]
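As promised, a sketch of the above in C. Same caveat as before: I wouldn't trust it without proper references, and random_bit() is just a stand-in for whatever unbiased bit source you have (again /dev/urandom here, buffered 64 bits at a time):

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in bit source: one unbiased random bit per call,
   buffered 64 bits at a time from /dev/urandom. */
static unsigned random_bit(void)
{
    static uint64_t buf;
    static int avail = 0;
    if (avail == 0) {
        FILE *f = fopen("/dev/urandom", "rb");
        if (!f || fread(&buf, sizeof buf, 1, f) != 1)
            abort();
        fclose(f);
        avail = 64;
    }
    unsigned bit = (unsigned)(buf & 1);
    buf >>= 1;
    avail--;
    return bit;
}

double random_double_full(void)
{
    int exponent = 52;
    uint64_t mantissa = 0;

    /* Start with a 52-bit random unsigned integer. */
    for (int i = 0; i < 52; i++)
        mantissa = (mantissa << 1) | random_bit();

    /* While the most significant (52nd) bit is 0: shift left, fill
       the vacated low bit with a fresh random bit, and bump the
       exponent. Bail out with 0 once the leading bit would fall
       below 2^-1022 (i.e. no denorms generated). */
    while (!(mantissa & ((uint64_t)1 << 51))) {
        if (exponent == 1073)
            return 0.0;
        exponent++;
        mantissa = (mantissa << 1) | random_bit();
    }

    /* Found a 1: divide the 52-bit value by 2^exponent. */
    return ldexp((double)mantissa, -exponent);
}
```

Note the cutoff: when the loop exits, the leading 1 sits at bit 51, so the result lies in [2^(51-exponent), 2^(52-exponent)), and 1073 is exactly the exponent beyond which that range would dip below the smallest normal double.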
I don't know whether there's actually any practical use for such a random double, mind you. Your definition of random should depend to an extent on what it's for. But if you can benefit from all 52 of its significant bits being random, this might actually be helpful.