Could it be the random number generator? I could not tell from the code which generator is used.
I can't say for certain as I obviously can't see the entire program source, but my first suspicion upon seeing the differing equities is that your Monte Carlo simulation simply isn't running enough trials to get a good estimate of the hand's equity.
It could very well be some other problem but it is simple to figure out if this is the cause of your discrepancy. I would try running a very large number of trials (in excess of 1 million) multiple times and see how widely your equities vary.
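Something along these lines would do (just a sketch; `checkSpread` and the `std::function` parameter are only illustrative, plug in your own simulation routine):

```cpp
// Rough sketch of the check suggested above: run the simulation several times
// with a large trial count and look at how much the equity estimates vary.
// The equityFn parameter stands in for your own Monte Carlo routine.
#include <cmath>
#include <cstdio>
#include <functional>
#include <vector>

void checkSpread(const std::function<double(long)>& equityFn, long trials, int repeats)
{
    std::vector<double> equities;
    for (int i = 0; i < repeats; ++i)
        equities.push_back(equityFn(trials));        // e.g. 1,000,000+ trials per run

    // Sample mean and standard deviation of the estimates (assumes repeats > 1).
    double mean = 0.0, var = 0.0;
    for (double e : equities) mean += e;
    mean /= repeats;
    for (double e : equities) var += (e - mean) * (e - mean);
    var /= (repeats - 1);

    std::printf("mean equity %.4f, std dev %.4f over %d runs of %ld trials\n",
                mean, std::sqrt(var), repeats, trials);
}
```

If the standard deviation is large compared to the differences you are worried about, the trial count is your problem; if it is tiny and the discrepancy remains, look elsewhere.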
For problem #1 I don't think the bias is intrinsic to the problem but rather to your implementation.
What I mean is this: if you deal an infinite number of hands by dealing the board cards first and then the player hands (*), and you only count the deals where one hand is AA and the other is KK, the equity should be the same as if you deal the player hands first and then the board cards, again only counting the deals where one hand is AA and one is KK.
When you first select the player hands from a discrete set of hands, you restrict the cards that can be placed on the board.
If you place the board cards first you have no such restriction, and if you then randomly select a pair of AA/KK hands until you don't get a collision, you have the analogue of (*).
I'll see if I can elaborate a little more.
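To make the idea concrete, here is a rough sketch of the board-first approach for the AA vs. KK case. The 0..51 card encoding and the rank constants are assumptions made for the illustration, not your actual representation:

```cpp
// Minimal sketch of (*): deal the board first, then draw AA/KK hole cards
// and reject the deal only if a hole card collides with the board.
#include <algorithm>
#include <array>
#include <random>
#include <vector>

struct Deal {
    std::array<int, 2> aa, kk;
    std::array<int, 5> board;
};

Deal dealBoardFirst(std::mt19937& rng)
{
    std::vector<int> deck(52);
    for (int i = 0; i < 52; ++i) deck[i] = i;                // 0..51, rank = card % 13

    Deal d;
    for (;;) {
        std::shuffle(deck.begin(), deck.end(), rng);
        std::copy_n(deck.begin(), 5, d.board.begin());       // board dealt first

        std::vector<int> aces, kings;                         // all aces / kings in the deck
        for (int c = 0; c < 52; ++c) {
            if (c % 13 == 12) aces.push_back(c);              // assuming rank 12 = ace
            if (c % 13 == 11) kings.push_back(c);             // assuming rank 11 = king
        }
        std::shuffle(aces.begin(), aces.end(), rng);
        std::shuffle(kings.begin(), kings.end(), rng);
        d.aa = {aces[0], aces[1]};
        d.kk = {kings[0], kings[1]};

        // Collision check: a hole card that is already on the board voids the deal.
        auto onBoard = [&](int c) {
            return std::find(d.board.begin(), d.board.end(), c) != d.board.end();
        };
        if (!onBoard(d.aa[0]) && !onBoard(d.aa[1]) &&
            !onBoard(d.kk[0]) && !onBoard(d.kk[1]))
            return d;                                         // accepted deal, analogous to (*)
    }
}
```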
I think you shouldn't use a collision approach at all. It is very inefficient, especially when many collisions occur, and as soon as you try to reduce their number you are very likely to introduce a bias: the collision probabilities are part of the distribution you are simulating, so the complete set of possible collisions is needed. Any reduction must keep the distribution the same, much like reducing a fraction.
Problem 1: How did you determine that the first set of equities is correct? Independent sources? I would assume that first picking the hole cards, then the board cards from the remaining cards would be "obviously" correct (or is this naive?), as long as picking the hole cards can be done independently (see problem 2).
Problem 2: Hand (hole card) distributions are not independent when the hand ranges overlap. If one player has AK+, the other hands unknown, the situation is different from where one player has AK+, and another AA: in the second case, it is more probable that the first player actually has KK than in the first case. In your example of ten hands, the player with 55+ is very unlikely to have something like KK. Again, how did you determine the correct equities?
I'm sorry that I can't give you a conclusive answer to your problems, but I think that you need to do or read about some involved statistical analysis to deterministically produce correct hand distributions and ensure that they are also independent of the order of players.
EDIT: Thank you for posting something that lets us gauge the kind of code you are working with. As harsh as it may sound, I now tend to advise you to start from scratch, but first learn about structured programming.
Just a few hints: you currently seem to represent a set of cards as a 52-element bit-field. While this seems efficient at first, you can see that just dealing cards from such a deck is very complicated. So keep it simple: make a class that describes a card; make a class that describes a deck of cards; implement a Fisher-Yates shuffle for it; give it a deal() method that returns a card, and so on (see the sketch below). Oh, and don't use pass-by-reference output parameters, use return values instead. You want to see where values get modified without diving into each subroutine.
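For illustration, a minimal sketch of what such a Deck could look like (all names are placeholders, adapt freely):

```cpp
#include <cstddef>
#include <random>
#include <stdexcept>
#include <utility>
#include <vector>

struct Card {
    int rank;   // 0..12 (deuce .. ace)
    int suit;   // 0..3
};

class Deck {
public:
    Deck() {
        for (int s = 0; s < 4; ++s)
            for (int r = 0; r < 13; ++r)
                cards_.push_back(Card{r, s});
    }

    // Fisher-Yates shuffle over the whole deck; resets dealing to the top.
    void shuffle(std::mt19937& rng) {
        for (std::size_t i = cards_.size(); i > 1; --i) {
            std::uniform_int_distribution<std::size_t> pick(0, i - 1);
            std::swap(cards_[i - 1], cards_[pick(rng)]);
        }
        next_ = 0;
    }

    // Returns the next card; no rejection, no collisions possible.
    Card deal() {
        if (next_ >= cards_.size())
            throw std::runtime_error("deck is empty");
        return cards_[next_++];
    }

private:
    std::vector<Card> cards_;
    std::size_t next_ = 0;
};
```

Dealing the hole cards and the board is then just a handful of deal() calls after one shuffle(), with no collision handling anywhere.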
EDIT 2: Regarding "I cannot use a class, it is too slow": what gives you the idea that using classes makes your code slow? Classes are (approximately) just a compile-time concept in C++. Even if you don't want to use classes or structs, use proper data structures like arrays; this is no less efficient than bit fiddling.
Your problem is that in order to get a number of random cards from a deck, you batter it with sets of random requests until one succeeds completely, instead of just requesting that number of the unused cards. The expected cost of this approach grows without bound as the number of available cards decreases.
This is what the famous quote "Premature optimization is the root of all evil" is about: you prematurely "optimized" your data representation, and now you can't efficiently get random cards anymore. Get your algorithm right first, then look for bottlenecks (that means testing and measuring, a.k.a. profiling) and optimize those.
Andreas Brinck replied to your problem #1.
I think that your problem #2 is caused by the same issue.
Instead of detecting collisions, just simulate a stack of cards and pick from the remaining cards (see the sketch below). If you don't do this, you have to be very careful about the probabilities, or your algorithm may introduce a bias in the conditional probabilities.
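As a sketch, reusing a simple 0..51 card encoding purely for illustration:

```cpp
// Deal a 5-card board when the hole cards are already fixed:
// build the stack of remaining cards once, shuffle it, take the top five.
#include <algorithm>
#include <random>
#include <vector>

std::vector<int> dealBoard(const std::vector<int>& usedHoleCards, std::mt19937& rng)
{
    std::vector<int> stack;
    for (int c = 0; c < 52; ++c)
        if (std::find(usedHoleCards.begin(), usedHoleCards.end(), c) == usedHoleCards.end())
            stack.push_back(c);                        // only cards not already dealt

    std::shuffle(stack.begin(), stack.end(), rng);     // 48 cards remain for two players
    return std::vector<int>(stack.begin(), stack.begin() + 5);   // no collisions possible
}
```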
By the way, this might be of some help for you: