views: 326
answers: 5

A: 

Can it be the random number generator? I could not tell from the code which generator is used.

aaa
I'm using Boost's lagged_fibonacci607. The behavior is the same with the Mersenne Twister.
nhaa123
A: 

I can't say for certain as I obviously can't see the entire program source, but my first suspicion upon seeing the differing equities is that for your Monte Carlo simulation you simply aren't running enough trials to get a good evaluation of the hand's equity.

It could very well be some other problem but it is simple to figure out if this is the cause of your discrepancy. I would try running a very large number of trials (in excess of 1 million) multiple times and see how widely your equities vary.
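
Something along these lines makes the batch-to-batch spread easy to eyeball. This is only a sketch, not your code: simulateOneTrial() is a hypothetical stand-in, here just a coin weighted with the known ~81.95% AA equity, so that the statistical spread can be observed in isolation:

    #include <cmath>
    #include <cstdio>
    #include <random>

    // Hypothetical stand-in for one Monte Carlo trial of the AA-vs-KK
    // matchup: a coin weighted with the known ~81.95% AA equity.
    double simulateOneTrial(std::mt19937& rng) {
        static std::bernoulli_distribution aaWins(0.8195);
        return aaWins(rng) ? 1.0 : 0.0;
    }

    int main() {
        std::mt19937 rng(12345);
        const int batches = 10;
        const long trialsPerBatch = 1000000;
        double sum = 0.0, sumSq = 0.0;
        for (int b = 0; b < batches; ++b) {
            double wins = 0.0;
            for (long t = 0; t < trialsPerBatch; ++t)
                wins += simulateOneTrial(rng);
            double equity = wins / trialsPerBatch;
            std::printf("batch %d: equity %.4f\n", b, equity);
            sum += equity;
            sumSq += equity * equity;
        }
        double mean = sum / batches;
        std::printf("mean %.4f, stddev %.5f\n", mean,
                    std::sqrt(sumSq / batches - mean * mean));
        return 0;
    }

If the standard deviation across batches is much smaller than the gap between your two results, sample size is not the culprit.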

marco0009
Already done this. I ran ~100-300 million trials both ways. If it chooses the hole cards first, it doesn't work (biased); if it chooses the board cards first, it works.
nhaa123
I hesitated to bring this up at first because in the first situation you mentioned you said you were getting the correct equity, but I'll go ahead and ask now: are you sure you're using an unbiased method for shuffling?
marco0009
I believe I am. I'm generating all possible boards into an array (~2.5 million outcomes) and choosing one with lagged Fibonacci / Mersenne Twister.
nhaa123
+1  A: 

For problem #1 I don't think the bias is intrinsic to the problem but rather to your implementation.

What I mean is that if you deal an infinite number of hands, dealing first the board cards and then the player hands (*), and only consider the deals where one hand is AA and the other is KK, then the equity should be the same as if you deal an infinite number of hands dealing the player hands first and then the board cards, again only considering the deals where one hand is AA and one is KK.

When you first select the player hands from a discrete set of hands you restrict the cards that can be placed on the board.

If you place the board cards first you have no restriction, and if you then randomly select a pair of AA/KK hands until you don't get a collision, you have the analogue of (*).

I'll see if I can elaborate a little more.
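
To make the argument concrete, here is a rough sketch of (*). It is deliberately naive (a full std::shuffle per deal, rejection until the AA/KK pattern shows up, a simplified rank check), not an efficient simulator:

    #include <algorithm>
    #include <array>
    #include <numeric>
    #include <random>

    inline int rank(int card) { return card / 4; }  // 0..12; 12 = ace, 11 = king

    int main() {
        std::array<int, 52> deck;
        std::iota(deck.begin(), deck.end(), 0);
        std::mt19937 rng(std::random_device{}());

        int accepted = 0;
        // The AA-vs-KK pattern is rare (~1 in 45,000 deals), so this
        // rejection loop is illustrative only.
        while (accepted < 100) {
            std::shuffle(deck.begin(), deck.end(), rng);
            // deck[0..4] is the board, deck[5..6] hand one, deck[7..8]
            // hand two; after a full shuffle it makes no difference
            // whether the board or the hole cards are read off first.
            bool handOneIsAA = rank(deck[5]) == 12 && rank(deck[6]) == 12;
            bool handTwoIsKK = rank(deck[7]) == 11 && rank(deck[8]) == 11;
            if (handOneIsAA && handTwoIsKK)
                ++accepted;  // evaluate this conditioned deal here
        }
        return 0;
    }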

Andreas Brinck
I was about to post exactly the same thing as you. The correct way to do this is to simulate the stack of remaining cards and deal from it in the same way as your game does.
BlueTrin
A: 

I think that you shouldn't use collisions. They are very inefficient, especially when many of them occur, and when you try to reduce their number you are very likely to introduce a bias: you simulate a distribution through the probability of collisions, so having the complete set of possible collisions is necessary. Any reduction must keep the distribution the same, much like reducing a fraction.

Problem 1: How did you determine that the first set of equities is correct? Independent sources? I would assume that first picking the hole cards, then the board cards from the remaining cards would be "obviously" correct (or is this naive?), as long as picking the hole cards can be done independently (see problem 2).

Problem 2: Hand (hole card) distributions are not independent when the hand ranges overlap. If one player has AK+, the other hands unknown, the situation is different from where one player has AK+, and another AA: in the second case, it is more probable that the first player actually has KK than in the first case. In your example of ten hands, the player with 55+ is very unlikely to have something like KK. Again, how did you determine the correct equities?
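
To put rough numbers on that second case (reading "AK+" loosely as the range {AK, KK, AA}; exact range notation varies by tool): unconditionally that range contains 16 AK combos, 6 KK combos and 6 AA combos, so KK has probability 6/28 ≈ 21%. If another player is known to hold AA, two aces are dead: AK shrinks to 8 combos and AA to 1, while KK keeps all 6, so the probability of KK rises to 6/15 = 40%.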

I'm sorry that I can't give you a conclusive answer to your problems, but I think that you need to do or read about some involved statistical analysis to deterministically produce correct hand distributions and ensure that they are also independent of the order of players.

EDIT: Thank you for posting something that lets us gauge the kind of code you are working with. As harsh as it may sound, I now tend to advise you to start from scratch, but first learn about structured programming.

Just a few hints: you currently seem to represent a set of cards as a 52-element bit field. While this seems efficient at first, just dealing cards from such a deck becomes very complicated, as you have seen. So keep it simple: make a class that describes a card; make a class that describes a deck of cards; implement a Fisher-Yates shuffle for it; give it a deal() method that returns a card, and so on. Oh, and don't use pass-by-reference; use return values instead. You want to see where values get modified without diving into each subroutine.
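
A minimal sketch of what I mean (all names are only suggestions):

    #include <cstdint>
    #include <random>
    #include <stdexcept>
    #include <utility>
    #include <vector>

    struct Card {
        std::uint8_t rank;  // 0..12
        std::uint8_t suit;  // 0..3
    };

    class Deck {
    public:
        Deck() {
            for (std::uint8_t r = 0; r < 13; ++r)
                for (std::uint8_t s = 0; s < 4; ++s)
                    cards_.push_back(Card{r, s});
        }

        // Fisher-Yates: walk backwards, swapping each position with a
        // uniformly chosen position at or before it.
        void shuffle(std::mt19937& rng) {
            next_ = 0;
            for (std::size_t i = cards_.size() - 1; i > 0; --i) {
                std::uniform_int_distribution<std::size_t> pick(0, i);
                std::swap(cards_[i], cards_[pick(rng)]);
            }
        }

        // Returns the next undealt card by value.
        Card deal() {
            if (next_ >= cards_.size())
                throw std::out_of_range("deck exhausted");
            return cards_[next_++];
        }

    private:
        std::vector<Card> cards_;
        std::size_t next_ = 0;
    };

Dealing a board and two hole hands is then one shuffle() followed by seven deal() calls, with no collisions possible.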

EDIT 2: Regarding "I cannot use a class, it is too slow": what gives you the idea that using classes makes your code slow? Classes are (approximately) just a compile-time concept in C++. Even if you don't want to use classes or structs, use proper data structures like arrays; this is no less efficient than bit fiddling.

Your problem is that in order to get a number of random cards from a deck, you batter it with sets of random requests until one succeeds completely, instead of just requesting that number of the unused cards. The algorithmic complexity of this approaches infinity as the number of available cards decreases.
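
To put a number on it: if 45 of the 52 cards are already dead, a single uniform draw over the full deck hits a live card with probability 7/52, so you need 52/7 ≈ 7.4 attempts on average for one card, and a batched request that must succeed completely in one go restarts far more often still. Drawing directly from the pool of unused cards costs exactly one draw per card.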

This is what the famous quote "Premature optimization is the root of all evil" is about: you prematurely "optimized" your data representation, and now you can't efficiently get random cards anymore. Get your algorithm right first, then look for bottlenecks (that means testing and measuring, a.k.a. profiling) and optimize those.

Svante
Hi. I determined the equities by enumerating all possible outcomes, and also compared the results to PokerStove (AA vs KK: 81.95% - 18.05%). Regarding the second problem, I've compared the results against PokerStove and Holdem Ranger. I'll add pseudo-code to the original post showing how the evaluation goes.
nhaa123
I cannot use a class for a card; it's way too slow. I need at least ~1-5 million evaluations per second. On top of that, I'm using the 2+2 evaluator; in other words, a roughly 100 MB array containing all possible 7-card combinations with their ranks. This whole thing works just as it should. However, I've got these two problems which I cannot understand. I know it has something to do with statistical mathematics.
nhaa123
I put the code as it is.
nhaa123
The thing is that the algorithm works, as I said. The problem only appears when I change the order in which the cards are selected (board cards -> hole cards vs. hole cards -> board cards). The project is in its final steps and the optimization process is ongoing. I've already profiled the code, etc. Please re-read my original question.
nhaa123
It "works" perhaps, but you will never get millions of evaluations in general this way. As you wrote, you get a lot of collisions when more cards are already used. This is unavoidable with your data structure. Your data structure is an "optimization" gone wrong.
Svante
Well, currently this gets ~150k - 1.6 million evaluations per second, which is fine (on a C2D E4500).
nhaa123
Except when you have more "dead cards".
Svante
A: 

Andreas Brinck replied to your problem #1.

I think that your problem #2 is caused by the same issue.

Instead of detecting collisions, just simulate a stack of cards and pick from the remaining cards. If you do not do this, you have to be careful about the probabilities, or you may introduce a bias in the conditional probabilities, depending on your algorithm.
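
For example, keeping the deck as a plain array plus a count of remaining cards (a sketch, not your evaluator's data layout): each pick swaps a uniformly chosen live card out of the live region, so every draw costs the same no matter how many cards are dead:

    #include <numeric>
    #include <random>
    #include <utility>

    // Draws one card uniformly from deck[0..remaining-1], then moves it
    // out of the live region; no retries, no collision checks.
    int drawCard(int deck[], int& remaining, std::mt19937& rng) {
        std::uniform_int_distribution<int> pick(0, remaining - 1);
        int i = pick(rng);
        std::swap(deck[i], deck[remaining - 1]);
        return deck[--remaining];
    }

    int main() {
        int deck[52];
        std::iota(deck, deck + 52, 0);
        std::mt19937 rng(std::random_device{}());

        int remaining = 52;
        int board[5];
        for (int& c : board)
            c = drawCard(deck, remaining, rng);
        // hole cards, dead cards, etc. come from the same shrinking pool
        return 0;
    }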

By the way, this might be of some help to you:

BlueTrin