I think your two-pass algorithm is likely to be the best you can do, given the constraint you added in a comment that you don't know in advance which cards are eligible for a given draw.
You could try the cunning "select at random from a list of unknown size in a single pass" algorithm (reservoir sampling with a reservoir of size one):
int sofar = 0;
int selected = -1;
for (int i = 0; i < 52; ++i) {
    if (used[i]) continue;
    ++sofar;
    if ((rand() % sofar) == 0) selected = i; // keep card i with probability 1/sofar
}
if (selected == -1) panic();  // there were no usable cards
else used[selected] = 1;      // we have selected a card
Then if (as you say in a comment) different draws have different criteria, you can replace used[i]
with whatever the actual criteria are.
The way it works is that you select the first eligible card. Then you replace it with the second card with probability 1/2, replace the result with the third card with probability 1/3, and so on. It's easy to prove by induction that after n steps, the probability of each of the preceding cards being the selected one is 1/n.
This method uses lots of random numbers, so it's probably slower than your two-pass version unless getting each item is slow, or evaluating the criteria is slow. It'd normally be used e.g. for selecting a random line from a file, where you really don't want to run over the data twice. It's also sensitive to bias in the random numbers (and note that rand() % sofar is itself slightly biased unless RAND_MAX + 1 is a multiple of sofar).
It's good and simple, though.
[Edit: proof
Let p(j,k) be the probability that card number j is the currently-selected card after step k.
Required to prove: for all n, p(j,n) = 1/n for all 1 <= j <= n
For n = 1, obviously p(1,1) = 1, since the first card is selected at the first step with probability 1/1 = 1.
Suppose that p(j,k) = 1/k for all 1 <= j <= k.
Then we select the (k+1)th card at step (k+1) with probability 1/(k+1), i.e. p(k+1,k+1) = 1/(k+1).
We retain the existing selection with probability k/(k+1), so for any j < k+1:
p(j,k+1) = p(j,k) * k/(k+1)
= 1/k * k/(k+1) // by the inductive hypothesis
= 1/(k+1)
So p(j,k+1) = 1/(k+1) for all 1 <= j <= k+1
Hence, by induction, for all n: p(j,n) = 1/n for all 1 <= j <= n]