views: 1082 · answers: 3
I am having trouble getting rand() to work in C++. rand() normally gives me a very large number. When I try to use the modulo operator (%) to give it a range, it always returns 0, no matter what. Seeding the random number generator at the beginning of the program doesn't help either.

+5  A: 

The following code:

#include <cstdlib>
#include <ctime>
#include <iostream>

int main()
{
  std::srand(static_cast<unsigned>(std::time(0)));
  std::cout << (std::rand() % 1000) << std::endl;
  return 0;
}

works just fine for me (emitting a random number in the range 0, inclusive, to 1000, exclusive, each time it's run). How does your code differ? Please post the smallest, most simplified version of your code that exhibits your problem (by editing your question), thanks!

Alex Martelli
Problem solved! I wasn't using #include <ctime> when I seeded the random number generator! Thanks a lot!
Zachary Hinchliffe
@Zachary: You probably want to turn up the warnings on your compiler. That's the sort of thing you can get the compiler to catch.
Michael Kohne
+3  A: 

This can happen if the random generator algorithm leaves a certain pattern of bits constant. (For example, if the low-order bits are always zero, the number mod any small power of two will always be zero.)

Maybe you should try something like:

const int desired_maximum = /* ... */;

/* Dividing by RAND_MAX + 1.0 keeps the result strictly below
   desired_maximum, even when rand() returns RAND_MAX itself. */
int r = (int)((rand() / (RAND_MAX + 1.0)) * desired_maximum);

For example, the rand(3) manpage says:

In Numerical Recipes in C: The Art of Scientific Computing (William H. Press, Brian P. Flannery, Saul A. Teukolsky, William T. Vetterling; New York: Cambridge University Press, 1992 (2nd ed., p. 277)), the following comments are made:

"If you want to generate a random integer between 1 and 10, you should always do it by using high-order bits, as in

j = 1 + (int) (10.0 * (rand() / (RAND_MAX + 1.0)));
and never by anything resembling
j = 1 + (rand() % 10);
(which uses lower-order bits)."
asveikau
It wouldn't be a very good PRNG if it always returned multiples of N :-)
paxdiablo
Please don't blindly recommend the high-order bits. This is entirely dependent on the PRNG used. Numerical Recipes is a fine book but especially the older editions are full of dangerous advice. This is one of those instances. Besides, even those C/C++ implementations that use an LCG *do* throw away the low-order bits. People have learned since the 70s.
Joey
@paxdiablo, if *N* is 1, then I'd have no problem with that ;-)
Joey
Well, in that case you don't really need an RNG; there just aren't that many choices of numbers between 1 and N when N==1 ;)
MSalters
@msa: well, it returns *multiples* of *N*, so 1 is actually the best you can get there :-)
Joey
+2  A: 

It appears that your immediate problem has been dealt with, but I think it's still worth mentioning one more point. Using the remainder to clamp outputs from rand to a specified range will usually cause bias in the results. To be specific, if the number of possible generator outputs (RAND_MAX + 1 in the case of C or C++) isn't a multiple of the range you're clamping to, some outputs will occur more often than others. For comparison, consider trying to divide 11 candies evenly among 3 children (without breaking any into pieces). The only way to do it is to NOT hand out some of the candy. Likewise, with a random number generator, the only way to get an even distribution in your output is to not use some of the inputs.

/* Return a random number between 0 and limit inclusive. */
int rand_lim(int limit) {
    int divisor = RAND_MAX / (limit + 1);
    int retval;

    do {
        retval = rand() / divisor;
    } while (retval > limit);

    return retval;
}

As asveikau pointed out in his post, you generally want to use the upper bits from a typical linear congruential generator, since the lower bits are generally more predictable. Dividing instead of taking a remainder retains those upper bits. Though this does have a while loop, it usually executes only once, and rarely more than twice, so its impact on performance is pretty minimal.

Whether this is worthwhile depends on the use you're making of the numbers you generate -- if you want dice or cards for a game that your kids will play a couple of times, using a remainder generally won't hurt a thing. If you're trying to do some sort of Monte Carlo simulation (for example) you probably want to be a bit more careful though...

Jerry Coffin
Java's `java.util.Random.nextInt(int)` also has a very interesting implementation of an unbiased choice of a random number between 0 and *n*. As for the high-order bits ... see my comment on asveikau's answer. I really haven't seen a libc in a while that handed out the real low-order bits of the state word. And there are generators that have weak high-order bits. C doesn't guarantee you an LCG for `rand()`.
Joey
@Johannes: C doesn't guarantee much of anything about rand() -- in fact, at one time it had a really poor generator as an example, and wording that apparently convinced at least a few implementers that a better generator wasn't allowed. Nonetheless, lower-order bits being worse than higher-order bits is sufficiently common that keeping the higher-order bits is a good rule of thumb. While some implementations of rand() throw out some lower-order bits, 1) it's often insufficient, and 2) a few still don't.
Jerry Coffin
And a few generators have weak high-order bits, in which case the results will be worse than what you began with. The best advice for C would probably be to avoid the built-in RNG completely, since it isn't specified anywhere and you could get anything from WELL to RANDU. Still, a lot of people don't understand anything about random numbers, and generalizations of the "Never do this" sort are harmful in those cases. In general, meddling with generators without a clue only makes matters worse, not better.
Joey