views: 60402

answers: 23

I can't get my head around this: which is more random?

rand()

OR

rand() * rand()

I'm finding it a real brain teaser; could you help me out?

Thanks in advance!

EDIT:

Intuitively I know that the mathematical answer will be that they are equally random, but I can't help thinking that if you "run the random number algorithm" twice when you multiply the two together, you'll create something more random than just doing it once.

+13  A: 

"random" vs. "more random" is a little bit like asking which Zero is more zero'y.

In this case, rand is a PRNG, so not totally random. (in fact, quite predictable if the seed is known). Multiplying it by another value makes it no more or less random.

A true crypto-type RNG will actually be random. And running values through any sort of function cannot add more entropy to it, and may very likely remove entropy, making it no more random.
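
A quick illustration of "predictable if the seed is known" (a sketch using Python's `random` module as a stand-in for `rand()`; the seed value is arbitrary):

```python
import random

def sequence(seed, n):
    rng = random.Random(seed)  # a PRNG: fully determined by its seed
    return [rng.random() for _ in range(n)]

a = sequence(1234, 5)
b = sequence(1234, 5)
assert a == b  # same seed, same "random" numbers

# Multiplying pairs is just as deterministic: no entropy was added.
assert [x * y for x, y in zip(a, a[1:])] == [x * y for x, y in zip(b, b[1:])]
```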

abelenky
Note, this isn't squaring, since each call will return a different value. Everything else is accurate though.
Matthew Scharley
Ok I will have to google crypto-type RNG but thanks for the answer! :)
Trufa
"running values through any sort of function cannot add more entropy to it" - What about a text compression algorithm? Isn't that something designed to increase the entropy of the resulting string?
CurtainDog
Again: it's "more random" when values are distributed more evenly within a range, e.g. all values within the range get their fair chance of being chosen as the random value.
thenonhacker
@thenonhacker: You seem to be suggesting that sequentially cycling through a set of numbers (e.g. 1-10) is random. After all, each number comes up exactly 1 out of 10 times, which seems exceedingly evenly distributed and fair. But it is definitely not random. I don't know an "official" definition of random, but I believe when each bit has a 50/50 chance, unrelated to any other bit, and fully unpredictable, of being 1 or 0, then the resulting value will be random.
abelenky
@abelenky: Oh yes, it is. See the topmost answer to see my point. When all numbers get their fair chance of being displayed, there is no peaking or biasing over a large set of samples. I said a large set of samples: say you flip a coin 300 times; heads should have roughly the same chance of appearing as tails. If there is a bias towards heads over tails, then that is not random, and I can abuse that bias because I now know that heads has a higher probability of appearing than tails.
thenonhacker
@thenonhacker: By your own description, the sequence "1,2,3,4,5,6,7,8,9,10,1,2,3,4,5,6,7,8,9,10,1,2,3,4,5,6,7,8,9,10..." is random. It is evenly distributed, with all numbers getting a fair chance. There is no peaking or biasing. Do you really consider that sequence random??? You need to change your definition. Random is not about the output, random is about the *process* used to create the output.
abelenky
@abelenky: Yes, if you get lucky, one of the random seeders can give you "1,2,3,4,5,6,7,8,9,10,1,2,3,4,5,6,7,8,9,10,1,2,3,4,5,6,7,8,9,10". So in poker, if you get a pair of aces two times in a row even with a very well shuffled deck of cards, that's luck. You could argue the same for "10,9,8,7,6,5,4,3,2,1,10,9,8,7,6,5,4,3,2,1,10,9,8,7,6,5,4,3,2,1", another lucky random sequence.
thenonhacker
@abelenky: Here's exactly what I'm talking about: http://stackoverflow.com/questions/3956478/understanding-randomness/3956538#3956538
thenonhacker
@CurtainDog: Text-compression keeps the level of entropy the same while reducing the number of bits required to express that same amount of entropy.
Kennet Belenky
@thenonhacker, @abelenky: Even distributions are easy. What matters in a random number generator is the number of bits in the state of the random number generator. A zero-state random number generator (e.g. 4, 4, 4, 4, 4, ...) is completely predictable. A one-time-pad has as much state as the number of values it produces, thus making it impossible to predict. A convolution of two PRNGs will produce a PRNG with as many bits of entropy as they both contain, minus their covariance.
Kennet Belenky
@Kennet - Thanks, you've hugely cleared that up for me. @abelenky - cool, i get you now.
CurtainDog
+47  A: 

Neither is 'more random'.

rand() generates a predictable set of numbers based on a pseudo-random seed (usually based on the current time, which is always changing). Multiplying two consecutive numbers in the sequence generates a different, but equally predictable, sequence of numbers.

Addressing whether this will reduce collisions: the answer is no. It will actually increase collisions, due to the effect of multiplying two numbers between 0 and 1. The result will be a smaller fraction, causing a bias in the results towards the lower end of the spectrum.

Some further explanation follows. In what follows, 'unpredictable' and 'random' refer to how hard it is for someone to guess what the next number will be based on previous numbers.

Given seed x which generates the following list of values:

0.3, 0.6, 0.2, 0.4, 0.8, 0.1, 0.7, 0.3, ...

rand() will generate the above list, and rand() * rand() will generate:

0.18, 0.08, 0.08, 0.21, ...

Both methods will always produce the same list of numbers for the same seed, and hence are equally predictable. But if you look at the results of multiplying the two calls, you'll see they are all under 0.3, despite a decent distribution in the original sequence. The numbers are biased because of the effect of multiplying two fractions. The resulting number is always smaller, and therefore much more likely to be a collision, despite still being just as unpredictable.
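
The bias is easy to verify empirically. Here is a quick simulation sketch (Python standing in for `rand()`, with values in [0,1); the seed is arbitrary):

```python
import random

random.seed(0)  # fixed seed so the run is repeatable
N = 100_000

singles  = [random.random() for _ in range(N)]
products = [random.random() * random.random() for _ in range(N)]

# A single rand() is roughly uniform: about half the draws fall below 0.5.
frac_single_low  = sum(x < 0.5 for x in singles) / N
# The product is biased low: about 85% of draws fall below 0.5
# (analytically, 1/2 + ln(2)/2 ≈ 0.847).
frac_product_low = sum(x < 0.5 for x in products) / N

print(frac_single_low, frac_product_low)
```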

Matthew Scharley
+1 Note that on the other hand `rand()+rand()+rand()...` gets increasingly "less random" (if by random you mean uniformly distributed).
Thilo
@Thilo No, it doesn't... ? If a random variable is uniformly distributed in the range (0,1), and you sample the variable n times, and take the sum, it will just be uniformly distributed in the range (0,n).
OK Matthew, you seem to know what you're talking about, but guide me through this: you say it is equally predictable but with more chance of collision? I don't get it! Thank you!!!
Trufa
+1 Great answer. One question though- is it likely that each rand() is based on the same seed, or a different seed? I know it depends on the time, but is it calculated so fast that it uses the same seed value?
DMan
@Trufa See detly's link: http://thedailywtf.com/Comments/Random-Stupidity.aspx?pg=2#182537
@Dman: Each call to `rand()` uses a persistent seed. You can set the seed via `srand()`. In fact, you *should* only set the seed once, otherwise you will drastically alter the randomness of your results (likely resulting in many duplicates from getting the first number in the sequence with identical seeds).
Matthew Scharley
@Trufa see my expanded worded explanation, and have a look at belisarius's lovely graphs for a more graphical explanation.
Matthew Scharley
@Matthew I think I'm starting to get this. Great edit, really!! I am going through this now with pencil and paper :)
Trufa
So Matthew, one more thing (this might be a whole other question): when you want more randomness, what's the best way to go, better seeds or better algorithms for generating the numbers?
Trufa
@Trufa just trust `rand()` to actually be random, and don't try to 'enhance' its randomness. Don't set the seed multiple times. Any individual seed is perfectly fine, as long as it's semi-random itself. Lots of implementations I've seen use the UNIX epoch as the seed, which changes every second and is unique every time it changes.
Matthew Scharley
@Trufa: You ask if it is equally predictable, but with more chance of collision... I think graph 2 from belisarius's answer explains this. rand * rand will give many more results that are close to each other (many occurrences between 0.1 and 0.2, a little fewer between 0.2 and 0.3, etc.), so if you round the results to 1 decimal, you will have more occurrences of 0.1 than 0.2, and more of 0.2 than 0.3, etc. (although with more decimals they are unique).
awe
@Matthew well thank you very much then great answer!
Trufa
@user359996 rand()+rand() is not uniformly distributed. Add two dice, you are more likely to get 7 than 2.
Liam
Agreed with Liam. It's "more random" when values are distributed more evenly within a range, e.g. all values within the range have a similar chance of being chosen as the random value.
thenonhacker
@thenonhacker See my definition of randomness in my post. Just because values tend towards one end of the spectrum doesn't increase the predictability of the exact values produced, which is what I was referring to when I used the word random. I then went on to address the issue of the bias separately.
Matthew Scharley
@Matthew: I don't think you have the right to define randomness as you like. If values are biased towards one end of the spectrum then the predictability of the exact values does increase. Check almost any [card counting scheme](http://en.wikipedia.org/wiki/Card_counting) to see that. The special example when the predictability increases from 1 in 10^308 to say 2 in 10^308 in the example may not seem much, but it is a change.
Muhammad Alkarouri
@Muhammad: Math is all about definitions, and I have every right to define anything I like the way I like, as long as I give sound reasoning why. There are many definitions for randomness (or perhaps, many partial definitions), including the one I gave. Also, as counter intuitive as it sounds bias doesn't increase predictability. Just because I get 0.1's more often from a particular function doesn't mean that you can predict what the next number to come out of the function will be. Prediction in the sense of an oracle is entirely different to probability.
Matthew Scharley
@Matthew: I believe I know about math. You can't redefine probability or the Gaussian distribution for example. I would love to see your definition [citation needed]. If I know that I am more likely to get 0.1 then I can predict 0.1 as the next number with high probability; that is the whole assumption behind machine learning for example. I gave a specific example with card counting: you don't know the next value but you can guess it with higher probability. In the case of the question the variance of rand()*rand() is clearly lower than that of rand(). Is the variance irrelevant as well?
Muhammad Alkarouri
+750  A: 

Just a clarification

Although the previous answers are right whenever you try to spot the randomness of a pseudo-random variable or its multiplication, you should be aware that while Random() is usually uniformly distributed, Random() * Random() is not.

Example

Random() vs Random() * Random()

So, both are "random", but their distribution is very different.

Another example

2 * Random() vs Random() + Random()

Random() + Random() + Random() + Random()

Distributions of the sums of 1, 2, 6, 10, and 20 Random() calls
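
With the images unavailable, the same contrast can be seen numerically. A quick sketch (Python standing in for Random(); sample sizes and seed are illustrative):

```python
import random

random.seed(1)
N = 200_000

def quartiles(samples):
    """Return the 25%, 50% and 75% points of a sample list."""
    s = sorted(samples)
    return s[len(s) // 4], s[len(s) // 2], s[3 * len(s) // 4]

u_q = quartiles([random.random() for _ in range(N)])
p_q = quartiles([random.random() * random.random() for _ in range(N)])
s_q = quartiles([sum(random.random() for _ in range(4)) for _ in range(N)])

print("Random()          ", u_q)  # roughly (0.25, 0.50, 0.75): flat
print("Random()*Random() ", p_q)  # median far below 0.5: piled up near 0
print("sum of 4 Random() ", s_q)  # clustered around 2: bell-shaped
```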

belisarius
+1. Since the OP probably wanted uniform distribution, this should be the accepted answer. And if you did `rand()+rand()`, you would end up with a "2d6"-type distribution with a fat center.
Thilo
@Thilo I was just on that when you posted your comment :)
belisarius
@Trufa as others said, you can't get more "randomness" from an initial distribution; what you get are different distributions of "equivalent randomness"
belisarius
Grr seems that http://imgur.com/ is down ... no images?
belisarius
@belisarius, I´m starting to grasp it now. imgur.com is coming and going... OHh and thank you very much!
Trufa
@Trufa seems working now. See the last picture
belisarius
+1 for the visual examples
Colin O'Dell
This is very interesting, but it kills me on the inside how counter-intuitive it is. I will give it a more thorough look after I read a little more about distributions. Thank you very much!
Trufa
@strager Thanks for the edits. My English is awful!
belisarius
@Trufa: Maybe this will help with part of the intuition, at least for sums. Imagine taking the "average" of one rolled die. Now imagine taking the average of two dice. Now one hundred. What happens to the chance of getting a one or a six for the average as you add more dice?
John C
@John C. Nice one there! thanks
Trufa
What would the graph look like in C, with the return type being an integer and the multiplication potentially causing an overflow? Also, how would the infamous C stdlib rand implementation affect the outcome when using 2 consecutive results?
geon
Upvote for the nice visuals.
Wouter Lievens
@geon I think that yours is a very good question on its own, or at least a good experiment to do. I am using a fairly good PRNG here (See http://www.wolfram.com/learningcenter/tutorialcollection/RandomNumberGeneration/RandomNumberGeneration.pdf page 18 "ExtendedCA")
belisarius
In some implementations, (rand()+rand() % (RAND_MAX+1)), may yield better numbers than a single random() call; in others it will be worse. The distribution of (rand()*rand() % (RAND_MAX+1)) is more "interesting". If (RAND_MAX+1) is prime, the distribution will be uniform except for a peak at zero (every value can be achieved RAND_MAX ways except zero, which can be achieved 2*RAND_MAX+1 ways). If (RAND_MAX+1) is non-prime, its factors will get extra hits.
supercat
I love this answer. Helpful and illustrative. +1
MdaG
Awesome answer!
rein
Amazingly intuitive answer, thanks!
Hamy
Is anyone else reminded of Jeff Atwood's gravatar? http://imgur.com/RTIP5.png
russau
@belisarius, how did you make these charts?
matt b
@matt b The Charts are one-liners in Mathematica. The code is the text in bold that precedes each graph. Mathematica is an awesome language for doing Plots!
belisarius
+1 Beautiful answer!
Daniel Earwicker
@belisarius: 550+ votes in 24 hours, this has to be a new record on SO!
Matthieu M.
@Matthieu What is most interesting is that the question does not match **"What is the most "[a-z]+\s(joke|feature|book|whatever)" you ever seen"**
belisarius
A picture is worth a thousand words :)
Darius Kucinskas
Histograms: They kick the a**es of people claiming there's no such thing as "more random"
thenonhacker
@thenonhacker: The histograms actually say nothing about the randomness of the number. A non-uniform distribution is not necessarily non-random (discounting quantization errors). For any distribution f(x) with a parametrically defined shape, there exists an equalization function h(y) such that h(f(x)) = c.
Kennet Belenky
OMG, E = m * sqr(c)!!! Kidding aside, the histograms help measure the bias of the generated random numbers, and a uniformly-distributed histogram is what you want for a lottery, not histograms with peaks and biases. That's what the user was asking about here, and that's why he rewarded the answer above with the green checkmark.
thenonhacker
@thenonhacker: yes, the histograms do demonstrate bias, but they don't demonstrate non-randomness. Biased random numbers are not less random. As for the user's original question, the correct answer is "don't try to be clever, you'll just make things worse," and this answer does get that point across.
Kennet Belenky
@Kennet Yep. You are right.
belisarius
Great answer, very clear.
Pim Jager
wow. epic answer. thanks.
KennyCason
+16  A: 

Some things about "randomness" are counter-intuitive.

Assuming flat distribution of rand(), the following will get you non-flat distributions:

  • high bias: sqrt(rand(range^2))
  • bias peaking in the middle: (rand(range) + rand(range))/2
  • low bias: range - sqrt(rand(range^2))

There are lots of other ways to create specific bias curves. I did a quick test of rand() * rand() and it gets you a very non-linear distribution.
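
Rewritten for a rand() that returns values in [0,1), the direction of each bias is easy to confirm (a Python sketch; the means are just a crude summary of each shape):

```python
import math
import random

random.seed(2)
N = 100_000
mean = lambda xs: sum(xs) / len(xs)

high   = [math.sqrt(random.random()) for _ in range(N)]              # biased high
middle = [(random.random() + random.random()) / 2 for _ in range(N)] # peaks in the middle
low    = [1 - math.sqrt(random.random()) for _ in range(N)]          # biased low

print(mean(high), mean(middle), mean(low))  # ≈ 0.667, 0.500, 0.333
```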

staticsan
This is the 2nd best answer for this topic -- it deals with statistics just like the main answer, and not intuition.
thenonhacker
+18  A: 

Most rand() implementations have some period. That is, after some enormous number of calls, the sequence repeats. The sequence of outputs of rand() * rand() repeats in half the time, so it is "less random" in that sense.

Also, without careful construction, performing arithmetic on random values tends to cause less randomness. A poster above cited "rand() + rand() + rand() ..." (k times, say) which will in fact tend to k times the mean value of the range of values rand() returns. (It's a random walk with steps symmetric about that mean.)
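
A toy linear congruential generator makes the period argument concrete (the parameters are purely illustrative, chosen for a full period of 8; no real rand() is this small):

```python
def lcg(x):
    # Toy LCG: x -> (5x + 3) mod 8, which has the full period 8.
    return (5 * x + 3) % 8

xs, x = [], 0
for _ in range(16):
    x = lcg(x)
    xs.append(x)

assert xs[:8] == xs[8:]  # the raw sequence repeats every 8 calls

# Products of consecutive pairs repeat after only 4 values.
products = [xs[i] * xs[i + 1] for i in range(0, 16, 2)]
assert products[:4] == products[4:]
print(xs[:8], products[:4])  # [3, 2, 5, 4, 7, 6, 1, 0] and [6, 20, 42, 0]
```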

Assume for concreteness that your rand() function returns a uniformly distributed random real number in the range [0,1). (Yes, this example allows infinite precision. This won't change the outcome.) You didn't pick a particular language and different languages may do different things, but the following analysis holds with modifications for any non-perverse implementation of rand(). The product rand() * rand() is also in the range [0,1) but is no longer uniformly distributed. In fact, the product is more likely to be in the interval [0,1/4) than in the interval [1/4,1). More multiplication will skew the result even further toward zero. This makes the outcome more predictable. In broad strokes, more predictable == less random.

Pretty much any sequence of operations on uniformly random input will be nonuniformly random, leading to increased predictability. With care, one can overcome this property, but then it would have been easier to generate a uniformly distributed random number in the range you actually wanted rather than wasting time with arithmetic.
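
A Monte Carlo check of the [0,1/4) claim (a Python sketch; the closed form comes from the standard integral for the product of two independent uniforms):

```python
import math
import random

random.seed(3)
N = 200_000
below = sum(random.random() * random.random() < 0.25 for _ in range(N))

# For independent uniforms on [0,1), P(X*Y < 1/4) = 1/4 + ln(4)/4 ≈ 0.597,
# so the product lands in [0, 1/4) well over half the time.
print(below / N, 0.25 + math.log(4) / 4)
```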

Eric Towers
I had that thought too, that it would be going through the random generator period twice as fast.
Jared Updike
The sequence length will only be cut in half if it is even. If it's odd, you get r1*r2, r3*r4, ..., rn*r1, r2*r3, r4*r5, and the total length is the same.
Jander
+3  A: 

Floating-point randoms are based, in general, on an algorithm that produces an integer between zero and a certain range. As such, by using rand()*rand(), you are essentially saying int_rand()*int_rand()/rand_max^2, meaning you are excluding any prime number / rand_max^2 for primes above the integer range (a prime larger than rand_max cannot be written as a product of two factors that both fit in the range).

That changes the randomized distribution significantly.

rand() is uniformly distributed on most systems, and difficult to predict if properly seeded. Use that unless you have a particular reason to do math on it (i.e., shaping the distribution to a needed curve).

Fordi
@Fordi I think random(x)^2 excludes primes, but random()*random() does not.
belisarius
@belisarius : That's only the case if 1 is a possible outcome of the random process.
Joris Meys
@Joris Ahhh ok, I had a misunderstanding, tnx.
belisarius
A: 

Nothing in a computer is truly random; it is pseudo-random. For real randomness, flip a coin.

OK, just try:

R1 = random();
R2 = random();

OP = random(<set of operations that can change the distribution>);

OP(R1,R2);

I think it must be more random. Let me see your results.

Artiya4u
Could be problematic if I need to generate a million or a 100m or more numbers.
Richard
That and there are people who can manipulate a coin-toss to bias the odds in their favor.
Dan Bryant
Dice-O-Matic: http://gizmodo.com/5270195/automatic-dice-machine-records-13-million-rolls-a-day
Kevin
A portable non-pseudo-random generator http://xkcd.com/221/
belisarius
That isn't randomized either. A flipped coin is "no perceived pattern," which is an important difference.
Christopher W. Allen-Poole
A good computer implementation is probably more random than flipping a coin.
Bill K
Funny that the exact same joke drawn by a famous guy got upvoted. I'll never understand people and humor
Joris Meys
fwiw, US coins are not "fair" - pennies will come up heads between .1 and .3 percent of the time more than tails
warren
A: 

The answer would be: it depends. One would hope that rand()*rand() is more random than rand(), but consider that:

  • both answers depend on the bit size of your values
  • in most cases you are using a pseudo-random algorithm (which is mostly a number generator that depends on your computer clock, and not that random at all)
  • rand() alone keeps your code more readable (and doesn't invoke some random voodoo god of random with this kind of mantra)

If any of the above applies to you, I suggest you go for the simple rand(). Your code will be more readable (you won't ask yourself why you wrote this for... well... more than 2 seconds), and easier to maintain (if you want to replace your rand function with a super_rand).

If you want better randomness, I would recommend you stream it from any source that provides enough noise (radio static); a simple rand() should be enough otherwise.

dvhh
A: 

Here's a short no-BS answer.

(1) random()*random() is plain wrong. Don't do it. Your "intuition" was totally and completely wrong.

(2) the guy above has generously posted a number of graphs to show why this is wrong in this case.

(3) don't be confused by experts talking amongst themselves saying "it's random but the distribution is not even". what you're looking for is a "random" number like momma thinks of it, such as when you spin a roulette wheel. random*random is simply wrong, your intuition is totally incorrect.

(4) if you actually want good random numbers for an actual money-involved purpose, it's all about the seed you use. the random() part is no big deal.

(5) you would have to extensively look in to the nature of seeds, read a lot about the issue, and look in to things like physical devices (ie, plug-in cards for servers) that generate random entropy for you, so many bits per second.

(6) if you are doing something serious that involves encryption or money (gaming software or the like), simply walk away from the job, it's just not worth bothering with unless you really know what you're doing. you'll end up with massive financial liability and embarrassment to boot. Honest.

(7) if you just need a random number to make a character pop up in your next iphone game, then the short answer is "random*random is plain wrong, so forget that. get the best seed you can, read up on the latest on EZ-Seeding for your particular hardware/environment!"

Hope it helps!

Joe Blow
Not really short and not BS-free. It is *not* "all about the seed you use." You seed a PRNG once, after which it proceeds deterministically. A single good seed makes the start point unpredictable, but it doesn't make the sequence any more random. And that stuff about distributions is not "experts talking among themselves", it actually matters.
walkytalky
Distribution issues do not matter at all TO THE GUY ASKING THE QUESTION. What he wants is a uniform distribution. It's unlikely he understands distributions, and he should not get involved in the issue. The important issue for him - the only issue - is the quality of the seed he uses. (Which I'm guessing he knows nothing about, until it was mentioned here!)
Joe Blow
@Joe Blow, THE GUY ASKING THE QUESTION here: I thank you for your answer, but I did not like it because it is not really answering my question. I had heard about distributions before (not an expert by any means) but I'm interested in them now. @belisarius's was a magnificent answer; it was not only an actual answer but also a challenge and an invitation to learn more about this complex but very interesting subject. But again, I thank you for your answer and respect your right to give your point of view. Cheers!
Trufa
Hopefully you now understand that (1) random*random is totally and completely wrong and will result in you losing your job (2) you need to learn about and use seeds properly and (3) if you are interested you'll have to look in to how pseudo-random-number-generators work! Check out the mersenne twister. Good luck! Very few people understand randomness, and a lot of people think they do but don't.
Joe Blow
+8  A: 

When in doubt about what will happen to the combinations of your random numbers, you can use the lessons you learned in statistical theory.

In the OP's situation, he wants to know the outcome of X*X = X^2 where X is a random variable distributed Uniform[0,1]. We'll use the CDF technique since it's just a one-to-one mapping.

Since X ~ Uniform[0,1], its pdf is fX(x) = 1. We want the transformation Y <- X^2, thus y = x^2. Find the inverse x(y): sqrt(y) = x; this gives us x as a function of y. Next, find the derivative dx/dy: d/dy (sqrt(y)) = 1/(2 sqrt(y))

The distribution of Y is given as: fY(y) = fX(x(y)) |dx/dy| = 1/(2 sqrt(y))

We're not done yet; we have to get the domain of Y. Since 0 <= x < 1, 0 <= x^2 < 1, so Y is in the range [0, 1). If you want to check that the pdf of Y is indeed a pdf, integrate it over the domain: the integral of 1/(2 sqrt(y)) from 0 to 1 is indeed 1. Also, notice that the shape of said function looks like what belisarius posted.

As for things like X1 + X2 + ... + Xn, (where Xi ~ Uniform[0,1]) we can just appeal to the Central Limit Theorem which works for any distribution whose moments exist. This is why the Z-test exists actually.

Other techniques for determining the resulting pdf include the Jacobian transformation (which is the generalized version of the cdf technique) and MGF technique.

EDIT: As a clarification, do note that I'm talking about the distribution of the resulting transformation and not its randomness. That's actually a separate discussion. Also, what I actually derived was for (rand())^2. For rand() * rand() it's much more complicated, and in any case it won't result in a uniform distribution of any sort.
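
The derived density integrates to the CDF F_Y(y) = sqrt(y), which is easy to spot-check by simulation (a Python sketch of (rand())^2, matching the EDIT above; the check points are arbitrary):

```python
import math
import random

random.seed(4)
N = 200_000
ys = [random.random() ** 2 for _ in range(N)]

# F_Y(y) = sqrt(y) for Y = X^2 with X ~ Uniform[0,1).
checks = {}
for y in (0.04, 0.25, 0.81):
    checks[y] = sum(v <= y for v in ys) / N
    print(y, checks[y], math.sqrt(y))  # empirical CDF ≈ 0.2, 0.5, 0.9
```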

Wil
+1 yep. This is indeed the theory behind the plots.
belisarius
+32  A: 

Here's a simple answer. Consider Monopoly. You roll two six-sided dice (or 2d6 for those of you who prefer gaming notation) and take their sum. The most common result is 7 because there are 6 possible ways you can roll a 7 (1,6 2,5 3,4 4,3 5,2 and 6,1). Whereas a 2 can only be rolled on 1,1. It's easy to see that rolling 2d6 is different than rolling 1d12, even if the range is the same (ignoring that you can get a 1 on a 1d12; the point remains the same). Multiplying your results instead of adding them is going to skew them in a similar fashion, with most of your results coming up in the middle of the range. If you're trying to reduce outliers, this is a good method, but it won't help you get an even distribution.

(And oddly enough it will increase low rolls as well. Assuming your randomness starts at 0, you'll see a spike at 0 because it will turn whatever the other roll is into a 0. Consider two random numbers between 0 and 1 (inclusive) and multiplying. If either result is a 0, the whole thing becomes a 0 no matter the other result. The only way to get a 1 out of it is for both rolls to be a 1. In practice this probably wouldn't matter but it makes for a weird graph.)
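
The 36 outcomes are few enough to enumerate outright (a Python sketch):

```python
from collections import Counter
from itertools import product

# All 36 equally likely 2d6 outcomes, tallied by their sum.
sums = Counter(a + b for a, b in product(range(1, 7), repeat=2))

print(sums[7], sums[2])  # 7 can be rolled 6 ways; 2 only 1 way
assert sum(sums.values()) == 36
```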

valadil
"Whereas a 7 can only be rolled on 1,1" ... surely you meant "Whereas a 2..."
PatrickvL
Edited to "Whereas a 2..."
Liam
"Multiplying your results instead of adding them is going to skew them in a similar fashion, with most of your results coming up in the middle of the range." - check this assertion against the second graph in the answer from belisarius.
Daniel Earwicker
+1 Great explanation a dummy like me can understand.
FannyPack
+1 very clear and accessible explanation.
C-Mo
+1  A: 

Most of these distributions happen because you have to limit or normalize the random number.

We normalize it to be all positive, fit within a range, and even to fit within the constraints of the memory size for the assigned variable type.

In other words, because we have to limit the random call between 0 and X (X being the size limit of our variable) we will have a group of "random" numbers between 0 and X.

Now when you add the random number to another random number the sum will be somewhere between 0 and 2X...this skews the values away from the edge points (the probability of adding two small numbers together and two big numbers together is very small when you have two random numbers over a large range).

Think of the case where you have a number close to zero and you add it to another random number: it will certainly get bigger and move away from 0. (This will be true of large numbers as well, since it is unlikely that two large numbers (numbers close to X) will be returned by the Random function twice.)

Now if you were to set up the random method with negative and positive numbers (spanning equally across the zero axis), this would no longer be the case.

Say for instance RandomReal({-x, x}, 50000, .01): then you would get an even distribution of numbers on the negative and positive sides, and if you were to add the random numbers together they would maintain their "randomness".

Now I'm not sure what would happen with the Random() * Random() with the negative to positive span...that would be an interesting graph to see...but I have to get back to writing code now. :-P

+7  A: 

The concept you're looking for is "entropy," the "degree" of disorder of a string of bits. The idea is easiest to understand in terms of the concept of "maximum entropy".

An approximate definition of a string of bits with maximum entropy is that it cannot be expressed exactly in terms of a shorter string of bits (ie. using some algorithm to expand the smaller string back to the original string).

The relevance of maximum entropy to randomness stems from the fact that if you pick a number "at random", you will almost certainly pick a number whose bit string is close to having maximum entropy, that is, it can't be compressed. This is our best understanding of what characterizes a "random" number.

So, if you want to make a random number out of two random samples which is "twice" as random, you'd concatenate the two bit strings together. Practically, you'd just stuff the samples into the high and low halves of a double length word.

On a more practical note, if you find yourself saddled with a crappy rand(), it can sometimes help to xor a couple of samples together --- although, if it's truly broken, even that procedure won't help.
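
"Concatenate the two bit strings" is just a shift and an OR in code; the 16-bit halves below are an arbitrary illustration (a Python sketch):

```python
import random

random.seed(5)

# Two independent 16-bit samples stuffed into the high and low halves
# of a 32-bit word: the entropy of the two halves adds up.
lo = random.getrandbits(16)
hi = random.getrandbits(16)
combined = (hi << 16) | lo
assert combined >> 16 == hi and combined & 0xFFFF == lo

# The xor workaround for a crappy rand(): the result stays uniform
# as long as at least one input is uniform and they are independent.
mixed = random.getrandbits(16) ^ random.getrandbits(16)
assert 0 <= mixed < 2 ** 16
```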

PachydermPuncher
I had never thought about random number generations via xor, but I guess you can take the concept pretty far (http://en.wikipedia.org/wiki/Mersenne_twister)! Thanks for the answer.
Gabriel Mitchell
I'm really struggling to grok this answer... Isn't maximum entropy defeated by the answers given in http://stackoverflow.com/questions/3956478/understanding-randomness/3963165#3963165 and http://stackoverflow.com/questions/3956478/understanding-randomness/3963140#3963140. In these cases the number picked can't be compressed but you'd be hard pressed to call them random.
CurtainDog
+1 Beautiful as the accepted answer is, this is my favourite. When it comes to computers, always think in bits - much less confusing and more relevant than trying to think in terms of reals. (I wrote my answer and then noticed this one, so mine is nothing more than an expansion of this one - maybe with some entropy added).
Daniel Earwicker
@CurtainDog xkcd's random number `4`, or binary `0100`, can be compressed to zero bits. The decompression program would simply return '4'. It doesn't get less random than that. The problem with Dilbert is, we do not know if we can compress it to zero bits (decompressing by always returning 'nine'). It might return eight as well; then we could compress to 1 bit, decompressing by: 0->nine, 1->eight. We would have 1 random bit.
Ishtar
+16  A: 

It might help to think of this in more discrete numbers. Say you want to generate random numbers between 1 and 36, so you decide the easiest way is to throw two fair, 6-sided dice and multiply the results. You get this:

     1    2    3    4    5    6
  -----------------------------
1|   1    2    3    4    5    6
2|   2    4    6    8   10   12
3|   3    6    9   12   15   18
4|   4    8   12   16   20   24   
5|   5   10   15   20   25   30
6|   6   12   18   24   30   36

So we have 36 numbers, but not all of them are fairly represented, and some don't occur at all. Numbers near the center diagonal (bottom-left corner to top-right corner) will occur with the highest frequency.

The same principles which describe the unfair distribution between dice apply equally to floating point numbers between 0.0 and 1.0.
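
Tallying the table confirms both effects (a Python sketch): some values never occur at all, while others occur several ways.

```python
from collections import Counter
from itertools import product

# The 6x6 multiplication table above, tallied by product.
table = Counter(a * b for a, b in product(range(1, 7), repeat=2))

missing = [n for n in range(1, 37) if n not in table]
print(missing)                         # 7, 11, 13, ... can never be rolled
print(table[6], table[12], table[36])  # 6 and 12 occur 4 ways; 36 only 1
```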

Juliet
+1 for showing more concretely, the change in distribution when multiplying the random numbers. The matrix helped more than just the words or even a distribution graph.
Marjan Venema
+38  A: 

my answer to all random number questions is this [Dilbert strip: a tour guide introduces a troll as the random number generator; the troll repeats "nine nine nine nine nine nine", and the guide admits "that's the problem with randomness: you can never be sure"]

So I guess both methods are as random, although my gut feeling says that rand()*rand() is less random because it would yield more zeroes: as soon as one rand() is 0, the whole product becomes 0.

Janco
+1 for good humour I guess :)
Trufa
My answer to all answers using this strip is this: I like humour, but it **must** be CW!
Andreas Rejbrand
@Andreas Rejbrand: That's like saying, I like humour, but I don't want to pay for it
Andomar
@Andomar: No, it isn't. Not at all. Do you know what CW is?
Andreas Rejbrand
@Andreas Rejbrand: CW is a weapon that kills interesting questions by denying reputation to those that answer it. Looks like it got nerfed http://meta.stackoverflow.com/questions/392/should-the-community-wiki-police-be-shut-down (which is perhaps why this interesting question pops up!)
Andomar
@Andomar - Yes, CW kills interesting questions, but (from the [FAQ](http://stackoverflow.com/faq)) "Reputation is a rough measurement of how much the community trusts you." If you include a funny, [copyrighted](http://www.dilbert.com/terms/) image in your answer, it will make me think your answer is cool, and I will probably think _you_ are cool too, but it doesn't make you more trust-worthy - hence, ideally, no rep should be awarded. Whether that means CW, or whether it means one shouldn't vote the answer up is another issue.
LeguRi
the "random generator" troll in the cartoon might be just a savant reciting π, and just reaching the [Feynman point](http://en.wikipedia.org/wiki/Feynman_point). btw, **are π digits random?** :)
mykhal
@mykhal Always thought the "999999" was that. But your main question remains open (AFAIK).
belisarius
@mykhai: A less controversial formulation would be "Are the digits of pi normal?" (it *looks* that way)
Piskvor
+22  A: 

The obligatory xkcd ...
return 4; // chosen by fair dice roll, guaranteed to be random.

crowne
@crowne damn, this always ends up appearing when the word "random" appears :) I was waiting for it!!
Trufa
I like humour, but it **must** be CW.
Andreas Rejbrand
@Andreas Rejbrand - why should this "humor" answer be CW?
warren
If it is not CW, reputation will be awarded to the poster of the answer every time it is up-voted (160 rep so far). Now, reputation is like grades in school -- it should be a certificate of technical (in this case, programming) proficiency. Therefore, one should not be able to gain reputation by posting something that is easily upvoted but that needs no such proficiency. Furthermore, the reputation score also determines the privileges of the user. For instance, at 10,000 score, the user gets access to moderation tools at StackOverflow.
Andreas Rejbrand
+2  A: 

As a total aside that nobody will read on an already too-long thread: random numbers are funny.

When you average, say, a million random numbers (0-1), you ALWAYS end up with .5 +/- a very small amount based on how many numbers you are averaging.

It seems counter-intuitive, like if they were REALLY random they might average .7, but the fact is that over a sufficiently long period, random numbers WILL average very reliably.

Even averaging 100 randoms you will rarely (if ever) end up outside +/-.1

What people tend to forget is how staggering and powerful probabilities are. The average of a million randoms might be outside +/- .1, but it's probably many times more likely that you will die of a heart attack than see such a result, so it won't happen even though it absolutely "could".

Many people, (generally very smart ones), go straight for "your random number generator is generating a pattern" without recognizing the power of probabilities. It's this same lack of understanding that leads people to buy lottery tickets.
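The averaging claim is easy to check with a short sketch (Python here, with `random.random()` standing in for a generic uniform source; the function name is made up for illustration):

```python
import random

def average_of_randoms(n, seed=0):
    """Average n uniform draws from [0, 1)."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(n)) / n

# The average clamps down on 0.5 as n grows (law of large numbers):
for n in (100, 10_000, 1_000_000):
    print(n, average_of_randoms(n))
```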

Bill K
In other words do not confuse probability and possibility. Many events are possible, but when tempered with probability, it ain't gonna happen. That is exactly why I never buy lottery tickets.
NealB
+9  A: 

As others have said, the easy short answer is: No, it is not more random, but it does change the distribution.

Suppose you were playing a dice game. You have some completely fair, random dice. Would the die rolls be "more random" if before each die roll, you first put two dice in a bowl, shook it around, picked one of the dice at random, and then rolled that one? Clearly it would make no difference. If both dice give random numbers, then randomly choosing one of the two dice will make no difference. Either way you'll get a random number between 1 and 6 with even distribution over a sufficient number of rolls.

I suppose in real life such a procedure might be useful if you suspected that the dice might NOT be fair. If, say, the dice are slightly unbalanced so one tends to give 1 more often than 1/6 of the time, and another tends to give 6 unusually often, then randomly choosing between the two would tend to obscure the bias. (Though in this case, 1 and 6 would still come up more often than 2, 3, 4, and 5, depending on the exact nature of the imbalance.)

There are many definitions of randomness. One definition of a random series is that it is a series of numbers produced by a random process. By this definition, if I roll a fair die 5 times and get the numbers 2, 4, 3, 2, 5, that is a random series. If I then roll that same fair die 5 more times and get 1, 1, 1, 1, 1, then that is also a random series.

Several posters have pointed out that random functions on a computer are not truly random but rather pseudo-random, and that if you know the algorithm and the seed they are completely predictable. This is true, but most of the time completely irrelevant. If I shuffle a deck of cards and then turn them over one at a time, this should be a random series. If someone peeks at the cards, the result will be completely predictable, but by most definitions of randomness this will not make it less random. If the series passes statistical tests of randomness, the fact that I peeked at the cards will not change that fact. In practice, if we are gambling large sums of money on your ability to guess the next card, then the fact that you peeked at the cards is highly relevant. If we are using the series to simulate the menu picks of visitors to our web site in order to test the performance of the system, then the fact that you peeked will make no difference at all. (As long as you do not modify the program to take advantage of this knowledge.)

EDIT

I don't think I could fit my response to the Monty Hall problem into a comment, so I'll update my answer.

For those who didn't read Belisarius link, the gist of it is: A game show contestant is given a choice of 3 doors. Behind one is a valuable prize, behind the others something worthless. He picks door #1. Before revealing whether it is a winner or a loser, the host opens door #3 to reveal that it is a loser. He then gives the contestant the opportunity to switch to door #2. Should the contestant do this or not?

The answer, which offends many people's intuition, is that he should switch. The probability that his original pick was the winner is 1/3, that the other door is the winner is 2/3. My initial intuition, along with that of many other people, is that there would be no gain in switching, that the odds have just been changed to 50:50.

After all, suppose that someone switched on the TV just after the host opened the losing door. That person would see two remaining closed doors. Assuming he knows the nature of the game, he would say that there is a 1/2 chance of each door hiding the prize. How can the odds for the viewer be 1/2 : 1/2 while the odds for the contestant are 1/3 : 2/3 ?

I really had to think about this to beat my intuition into shape. To get a handle on it, understand that when we talk about probabilities in a problem like this, we mean, the probability you assign given the available information. To a member of the crew who put the prize behind, say, door #1, the probability that the prize is behind door #1 is 100% and the probability that it is behind either of the other two doors is zero.

The crew member's odds are different than the contestant's odds because he knows something the contestant doesn't, namely, which door he put the prize behind. Likewise, the contestant's odds are different than the viewer's odds because he knows something that the viewer doesn't, namely, which door he initially picked. This is not irrelevant, because the host's choice of which door to open is not random. He will not open the door the contestant picked, and he will not open the door that hides the prize. If these are the same door, that leaves him two choices. If they are different doors, that leaves only one.

So how do we come up with 1/3 and 2/3 ? When the contestant originally picked a door, he had a 1/3 chance of picking the winner. I think that much is obvious. That means there was a 2/3 chance that one of the other doors is the winner. If the host gave him the opportunity to switch without giving any additional information, there would be no gain. Again, this should be obvious. But one way to look at it is to say that there is a 2/3 chance that he would win by switching. But he has 2 alternatives. So each one has only 2/3 divided by 2 = 1/3 chance of being the winner, which is no better than his original pick. Of course we already knew the final result, this just calculates it a different way.

But now the host reveals that one of those two choices is not the winner. So of the 2/3 chance that a door he didn't pick is the winner, he now knows that 1 of the 2 alternatives isn't it. The other might or might not be. So he no longer has 2/3 divided by 2. He has zero for the open door and 2/3 for the closed door.
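If intuition still resists, a simulation settles it. This is a rough sketch in Python (the door numbering and function names are my own, added for illustration):

```python
import random

def monty_hall(trials=100_000, switch=True, seed=1):
    """Fraction of wins when the contestant always switches (or always stays)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        prize = rng.randrange(3)
        pick = rng.randrange(3)
        # The host opens a losing door that the contestant did not pick.
        opened = next(d for d in range(3) if d != pick and d != prize)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == prize)
    return wins / trials

# Switching wins about 2/3 of the time; staying wins about 1/3.
```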

Jay
Very good analogies! I guess this is a very good plain english explanation, and unlike many others, you actually answered my question :)
Trufa
@Trufa @Jay The confusion between pre-knowledge of the events and randomness is VERY common. Let me share with you this interesting story about a woman who solved a problem and cast a pile of shame on some of the better mathematicians in academia. They said many things to regret later (such as "You made a mistake, but look at the positive side. If all those Ph.D.'s were wrong, the country would be in some very serious trouble."). So here is the story, related to your considerations ... enjoy! http://www.marilynvossavant.com/articles/gameshow.html
belisarius
@belisarius yep. I say blackjack21 :) just kidding I get you point!
Trufa
@belisarius BTW never got that one I will give it another try now!
Trufa
@Trufa And here is an article showing the academic reaction to Marilyn's statement http://query.nytimes.com/gst/fullpage.html?res=9D0CEFDD1E3FF932A15754C0A967958260 (VERY VERY fun)
belisarius
@belisarius: That's a wonderful example of why conditional probability is really non-obvious to most folks.
Donal Fellows
@Donal Fellows Without shame I must confess that Marylin's argument took me a while to grasp ... :)
belisarius
@belisarius: I got it straight away the first time I encountered the problem, but then I guess I don't view things quite the same as others. 'Tis how it goes, I suppose…
Donal Fellows
@Donal: Congratulations on superior intuition. You win the cookie.
Jay
@Jay: Cookie! Om-nom-nom!
Donal Fellows
Another way to help people understand the "Monty Hall" choice is to use a deck of cards. With all the cards face down, ask them to pick the Ace of Spades. Once they have chosen a card, flip the rest of the deck over except one other card. Ask them if they want to stick with their original choice or switch cards. My bet is most people will quickly see the merit of changing cards (1 in 52 vs 1 in 2).
NealB
+7  A: 

Consider you have a simple coin flip problem where even is considered heads and odd is considered tails. The logical implementation is:

rand() mod 2

Over a large enough distribution, the number of even numbers should equal the number of odd numbers.

Now consider a slight tweak:

rand() * rand() mod 2

If one of the results is even, then the entire result should be even. Consider the 4 possible outcomes (even * even = even, even * odd = even, odd * even = even, odd * odd = odd). Now, over a large enough distribution, the answer should be even 75% of the time.

I'd bet heads if I were you.

This comment is really more of an explanation of why you shouldn't implement a custom random function based on your method than a discussion of the mathematical properties of randomness.
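You can confirm the 75%-even figure empirically. A rough Python sketch, using `randrange` as a stand-in for an integer `rand()` (all names are illustrative):

```python
import random

def parity_fractions(trials=100_000, seed=7):
    """Fraction of even results for rand() mod 2 vs (rand()*rand()) mod 2."""
    rng = random.Random(seed)
    # rng.randrange(1000) draws an integer in [0, 1000) with 500 even
    # and 500 odd values, mimicking an unbiased integer rand().
    single_even = sum(rng.randrange(1000) % 2 == 0 for _ in range(trials))
    product_even = sum((rng.randrange(1000) * rng.randrange(1000)) % 2 == 0
                       for _ in range(trials))
    return single_even / trials, product_even / trials

single_frac, product_frac = parity_fractions()
# single_frac lands near 0.5, while product_frac lands near 0.75,
# matching the even*even, even*odd, odd*even, odd*odd analysis above.
```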

+1 nice illustration
Joris Meys
Beware! `rand()%2` may be not very random; that really depends on the randomness of the low bit, and some PRNGs aren't very good that way. (Of course, in some languages you get a floating-point result out of `rand()` so you can't do it that way at all…)
Donal Fellows
+2  A: 

Multiplying the two numbers leaves you with a smaller range of likely results, depending on your computer's floating-point precision.

If your computer carries 16 digits, one rand() might be, say, 0.1234567890123; multiplied by a second rand() of 0.1234567890123 it would give roughly 0.0152415..., and you'd definitely find fewer large results if you repeated the experiment 10^14 times.

Huub
+3  A: 

It's not exactly obvious, but rand() is typically more random than rand()*rand(). What's important is that this isn't actually very important for most uses.

But firstly, they produce different distributions. This is not a problem if that is what you want, but it does matter. If you need a particular distribution, then ignore the whole “which is more random” question. So why is rand() more random?

The core of why rand() is more random (under the assumption that it is producing floating-point random numbers with the range [0..1], which is very common) is that when you multiply two FP numbers together with lots of information in the mantissa, you get some loss of information off the end; there just aren't enough bits in an IEEE double-precision float to hold all the information that was in two IEEE double-precision floats uniformly randomly selected from [0..1], and those extra bits of information are lost. Of course, it doesn't matter that much since you (probably) weren't going to use that information, but the loss is real. It also doesn't really matter which distribution you produce (i.e., which operation you use to do the combination). Each of those random numbers has (at best) 52 bits of random information – that's how much an IEEE double can hold – and if you combine two or more into one, you're still limited to having at most 52 bits of random information.

Most uses of random numbers don't use even close to as much randomness as is actually available in the random source. Get a good PRNG and don't worry too much about it. (The level of “goodness” depends on what you're doing with it; you have to be careful when doing Monte Carlo simulation or cryptography, but otherwise you can probably use the standard PRNG as that's usually much quicker.)
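A small illustration of the capacity limit (Python, where floats are IEEE doubles; this is a sketch of the argument, not a proof):

```python
import random
import sys

# An IEEE double holds a 53-bit significand, so a single float can carry
# at most that many bits of random information, however it was produced.
assert sys.float_info.mant_dig == 53

rng = random.Random(0)
x, y = rng.random(), rng.random()
product = x * y
# x and y together carried up to two doubles' worth of entropy, but the
# product is a single double again: the excess information is necessarily
# lost, whatever combining operation you choose.
assert 0.0 <= product < 1.0
```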

Donal Fellows
This answer really needs to be read in conjunction with belisarius's magnificent one; they cover different aspects of the problem.
Donal Fellows
+4  A: 

The accepted answer is quite lovely, but there's another way to answer your question. PachydermPuncher's answer already takes this alternative approach, and I'm just going to expand it out a little.

The easiest way to think about information theory is in terms of the smallest unit of information, a single bit.

In the C standard library, rand() returns an integer in the range 0 to RAND_MAX, a limit that may be defined differently depending on the platform. Suppose RAND_MAX happens to be defined as 2^n - 1 where n is some integer (this happens to be the case in Microsoft's implementation, where n is 15). Then we would say that a good implementation would return n bits of information.

Imagine that rand() constructs random numbers by flipping a coin to find the value of one bit, and then repeating until it has a batch of 15 bits. Then the bits are independent (the value of any one bit does not influence the likelihood of other bits in the same batch having a certain value). So each bit considered independently is like a random number between 0 and 1 inclusive, and is "evenly distributed" over that range (as likely to be 0 as 1).

The independence of the bits ensures that the numbers represented by batches of bits will also be evenly distributed over their range. This is intuitively obvious: if there are 15 bits, the allowed range is zero to 2^15 - 1 = 32767. Every number in that range is a unique pattern of bits, such as:

010110101110010

and if the bits are independent then no pattern is more likely to occur than any other pattern. So all possible numbers in the range are equally likely. And so the reverse is true: if rand() produces evenly distributed integers, then those numbers are made of independent bits.

So think of rand() as a production line for making bits, which just happens to serve them up in batches of arbitrary size. If you don't like the size, break the batches up into individual bits, and then put them back together in whatever quantities you like (though if you need a particular range that is not a power of 2, you need to shrink your numbers, and by far the easiest way to do that is to convert to floating point).

Returning to your original suggestion, suppose you want to go from batches of 15 to batches of 30, ask rand() for the first number, bit-shift it by 15 places, then add another rand() to it. That is a way to combine two calls to rand() without disturbing an even distribution. It works simply because there is no overlap between the locations where you place the bits of information.
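For concreteness, here is one way that 15-to-30-bit combination might look, sketched in Python under the assumption that each draw yields 15 independent bits (RAND_MAX = 2^15 - 1, as in the Microsoft implementation mentioned above; function names are my own):

```python
import random

def rand15(rng):
    """Stand-in for a C rand() whose RAND_MAX is 2**15 - 1."""
    return rng.randrange(1 << 15)

def rand30(rng):
    """Combine two independent 15-bit draws into one 30-bit value.

    The first draw is shifted left by 15 places, so its bits and the
    second draw's bits occupy disjoint positions: no overlap, and the
    result stays uniform over [0, 2**30).
    """
    return (rand15(rng) << 15) | rand15(rng)

value = rand30(random.Random(3))
assert 0 <= value < 1 << 30
```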

This is very different to "stretching" the range of rand() by multiplying by a constant. For example, if you wanted to double the range of rand() you could multiply by two - but now you'd only ever get even numbers, and never odd numbers! That's not exactly a smooth distribution and might be a serious problem depending on the application, e.g. a roulette-like game supposedly allowing odd/even bets. (By thinking in terms of bits, you'd avoid that mistake intuitively, because you'd realise that multiplying by two is the same as shifting the bits to the left (greater significance) by one place and filling in the gap with zero. So obviously the amount of information is the same - it just moved a little.)

Such gaps in number ranges can't be griped about in floating point number applications, because floating point ranges inherently have gaps in them that simply cannot be represented at all: an infinite number of missing real numbers exist in the gap between each two representable floating point numbers! So we just have to learn to live with gaps anyway.

As others have warned, intuition is risky in this area, especially because mathematicians can't resist the allure of real numbers, which are horribly confusing things full of gnarly infinities and apparent paradoxes.

But at least if you think in terms of bits, your intuition might get you a little further. Bits are really easy - even computers can understand them.

Daniel Earwicker
+1: Actually, there's more numbers missing between any two IEEE double precision floats than there are numbers in the whole of the (mathematical) integers.
Donal Fellows
+8  A: 

Oversimplification to illustrate a point.

Assume your random function only outputs 0 or 1.

random() IN 0|1, but random()*random() IN 0|0|0|1

Quite a difference.

Alin Purcaru
Simple but very effective :-)
DoctaJonez