The catch is that fd_set is not really a "set" in the way you're thinking. The behind-the-scenes detail is that an fd_set is just a bitfield (on most systems, an array of integers treated as one long run of bits). In other words, executing
fd_set foo;
FD_ZERO(&foo);
FD_SET(3, &foo);
sets foo to the decimal value 8: it sets the fourth-least-significant bit to 1 (remember that 0 is a valid descriptor).
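You can check this yourself without depending on fd_set's internals (which are opaque and vary by platform) by rebuilding the bit pattern through FD_ISSET; a minimal sketch:

#include <stdio.h>
#include <sys/select.h>

int main(void) {
    fd_set foo;
    FD_ZERO(&foo);
    FD_SET(3, &foo);

    /* Rebuild the low bits through FD_ISSET rather than peeking at
       the struct's (implementation-specific) internals. */
    unsigned value = 0;
    for (int fd = 0; fd < 8; fd++)
        if (FD_ISSET(fd, &foo))
            value |= 1u << fd;
    printf("%u\n", value);  /* prints 8 */
    return 0;
}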
FD_SET(3, &foo);
is equivalent to
foo |= (1 << 3);
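If it helps to picture it, here is a toy sketch of what those macros boil down to. This is illustrative only: a real fd_set is an array of words sized to hold FD_SETSIZE bits, not a single integer, and the TOY_* names are made up for this example.

typedef unsigned long toy_fd_set;                     /* one word: toy only */

#define TOY_FD_ZERO(s)      (*(s) = 0UL)              /* clear every bit */
#define TOY_FD_SET(fd, s)   (*(s) |= 1UL << (fd))     /* turn bit fd on  */
#define TOY_FD_CLR(fd, s)   (*(s) &= ~(1UL << (fd)))  /* turn bit fd off */
#define TOY_FD_ISSET(fd, s) ((*(s) >> (fd)) & 1UL)    /* test bit fd     */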
So in order for select to work correctly, it needs to know how many bits of the fd_set you actually care about. Otherwise it would have no way to tell a zero bit that is "in" the set but cleared (don't monitor this fd) from a zero bit that is simply "not in" the set at all.
In your example, an fd_set with 4, 8, and 9 set and n = 10 is interpreted as "A set with 10 entries (fds 0-9). Entries 4, 8, and 9 are true (monitor them). Entries 0, 1, 2, 3, 5, 6, and 7 are false (don't monitor them). Any fd value greater than 9 is simply not in the set, period."
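Putting it together, a minimal sketch of the calling pattern (assuming descriptors 4, 8, and 9 are real, open descriptors in the process; with made-up numbers like these, select would fail with EBADF):

#include <stdio.h>
#include <sys/select.h>

int main(void) {
    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(4, &readfds);
    FD_SET(8, &readfds);
    FD_SET(9, &readfds);

    /* nfds is the highest descriptor you care about plus one; select
       only examines bits 0 through nfds-1, so fd 10 and up are
       "not in the set" no matter what their bits hold. */
    int nfds = 10;

    if (select(nfds, &readfds, NULL, NULL, NULL) < 0) {
        perror("select");
        return 1;
    }
    /* select rewrites the set in place: only the ready fds stay set. */
    for (int fd = 0; fd < nfds; fd++)
        if (FD_ISSET(fd, &readfds))
            printf("fd %d is ready for reading\n", fd);
    return 0;
}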