There is a new Open Source poker bot called PokerPirate. I am interested in any creative ways in which a web application could detect/thwart/defeat a poker bot. (This is a purely academic discussion, in the same spirit that PokerPirate was written.)
There are three separate areas to consider. The bot has to figure out the state of the table, make a decision, and send the decision back to the host.
Figuring out the state of the table is much easier if it is sent across the wire in some recognizable form or displayed to the user as standard text. First, make image recognition the only option, then make that as hard as possible. Display the cards in 3D and slowly change their orientation and position. Animate little flickers or fireworks in front of the cards so that any given screenshot may be illegible, and so it takes the bot a while even to determine that.
There is nothing to be done about the decision-making step. Trying to decide whether a decision was made by a human is like a Turing test with almost no information.
Sending the decision back can be made difficult by using 3D again. Make it hard to send packets directly or otherwise submit a decision by any means other than clicking a button with the mouse. Move the buttons slightly with every action or have them float slowly around the play area while awaiting a decision. Disable any accessibility type features that allow buttons to be found or manipulated.
Ideally, the only workable solution is to create honeypot logic that lures an engaged bot with the temptation of a more favourable condition, one that matches the bot's ideal behavioural responses. Once the bot is engaged in the honeypot, you must continually feed it conditions it prefers, presuming the bot does not have a set timeout. The bot can then be measured, logged, and studied. In addition to the bot itself, you would also have the network and session data isolated for study, provided the bot is not connecting via Tor.
In this situation, the checks used to differentiate a bot from a human can be less strict up front; once the bot's behaviour has been identified, however, they become much stricter. Unfortunately, the bot's owner can modify the bot to prevent such identification if that owner is aware of the honeypot condition, or of the logic behind it.
PokerPirate, like nearly every poker bot ever written, works by screen scraping and simulating mouse clicks in a Windows poker application. The linchpin of the system is therefore its ability to recognize objects in the game and take actions in the window. As smart as it may be at poker, it likely still has trouble with these basic operations.
Obvious ways to thwart this bot would therefore include:
- Implement a CAPTCHA, either before the game, or when other factors suggest a player may be a bot.
- Make the table graphics more complicated, or change theme throughout the game.
- Detect unusually fast and/or robotic mouse movements and clicks (a human will never move a mouse in a mathematically perfect line).
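A minimal sketch of the last idea, assuming the client can report sampled cursor positions (the sampling format and thresholds here are hypothetical): fit each recorded path to the straight line between its endpoints and flag paths whose deviation is implausibly small.

```python
import math

def max_deviation_from_line(points):
    # Largest perpendicular distance of any sample from the straight
    # line joining the first and last cursor positions.
    (x0, y0), (x1, y1) = points[0], points[-1]
    dx, dy = x1 - x0, y1 - y0
    length = math.hypot(dx, dy)
    if length == 0:
        return 0.0
    return max(abs(dy * (x - x0) - dx * (y - y0)) / length for x, y in points)

def looks_robotic(points, pixel_tolerance=1.5):
    # A human-drawn path of any real length wobbles by more than a pixel
    # or two; a synthesized click path very often does not.
    return len(points) >= 5 and max_deviation_from_line(points) < pixel_tolerance

# Hypothetical cursor samples (x, y) reported by the client each tick.
path = [(100, 200), (150, 225), (200, 250), (250, 275), (300, 300)]
print(looks_robotic(path))  # True: the samples are perfectly collinear
```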
A 100% solution is impossible. What I am proposing is a solution that will save money by using the AI against itself: have an instance of PokerPirate's AI running on the server side, playing as an invisible player in every game. If any player performs too many identical actions, they are probably running an instance of PokerPirate. This is a kind of honeypot or trap that the attacker can fall into. The attacker can defend against the honeypot by making their bot less successful, which creates a game of cat and mouse in which the attacker can always steal some money and the defender can always save some money.
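A rough sketch of that server-side comparison, where every name is hypothetical and `reference_bot_action` stands in for a server-side copy of PokerPirate's decision logic: record how often each player's action matches what the reference bot would have done in the same spot, and flag accounts whose agreement rate is suspiciously high.

```python
from collections import defaultdict

def reference_bot_action(game_state):
    # Placeholder for a server-side copy of the bot's decision logic.
    ...

class ShadowBotMonitor:
    # Flags players whose actions agree with the reference bot too often.

    def __init__(self, flag_threshold=0.95, min_samples=200):
        self.flag_threshold = flag_threshold
        self.min_samples = min_samples
        self.matches = defaultdict(int)
        self.samples = defaultdict(int)

    def observe(self, player_id, game_state, player_action):
        self.samples[player_id] += 1
        if player_action == reference_bot_action(game_state):
            self.matches[player_id] += 1

    def suspicious_players(self):
        return [pid for pid, n in self.samples.items()
                if n >= self.min_samples
                and self.matches[pid] / n >= self.flag_threshold]
```

Set the agreement threshold too high and cautious bots slip through; set it too low and tight human players get flagged, which is exactly the cat-and-mouse trade-off described above.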
Defeating poker bots can take two forms: you can try and identify them and ban them from the system, or you can just beat them at poker. Beating them at poker is the more interesting academic question. :-)
See here for some papers about beating poker bots: http://www.cs.cmu.edu/~sganzfri/
Defeating a bot from the server-side perspective
Many online poker sites use pop-up CAPTCHA inputs that are triggered by suspicious activity.
Some poker sites monitor playing times and patterns. The worst case is a player who plays 24/7 across 16 tables continuously; there is only a tiny chance this is a real human. (However, some players do have the ability to play very large hand volumes, which to the inexperienced eye would appear to be a bot.)
Throw it glitches. If you suspect a player is a bot, shift all their playing card positions a few pixels on the screen, or change the colours/designs/patterns for 1 in 100 hands, and see if it throws them. If it can't screen-grab, it will time out on all its decisions, and that's pretty conclusive bot evidence.
Timing tells: if a player responds to options within milliseconds, without pausing for thought on large decisions, this could be suspicious.
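A hedged sketch of that check (the thresholds are illustrative guesses, not calibrated values): flag players whose response times are implausibly fast and barely vary, however big the decision.

```python
import statistics

def timing_looks_suspicious(response_times_ms, floor_ms=300, min_spread_ms=150):
    # One entry per decision for a single player. Humans rarely act in
    # well under a second, and their times vary a lot; a naive bot acts
    # almost instantly and almost identically every time.
    if len(response_times_ms) < 30:
        return False  # not enough data to say anything
    median = statistics.median(response_times_ms)
    spread = statistics.pstdev(response_times_ms)
    return median < floor_ms or spread < min_spread_ms

print(timing_looks_suspicious([120] * 50))                  # True: instant and uniform
print(timing_looks_suspicious(list(range(500, 5000, 90))))  # False: human-like spread
```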
Self-monitoring. The poker website pokertableratings.com data-mines a lot of large sites. It has been met with a mixed reception: some love the transparency, others hate it. The benefit, however, is that there have been instances where suspicious player statistics (VPIP and PFR percentages are two of a large number of quantifiable statistics that can be recorded) have led to conclusions of cheating.
Artificially intelligent classification networks could monitor quantifiable statistics to classify rogue cheating or robotic players.
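A very small sketch of the kind of feature such a classifier might use, assuming hands are logged per player (field names and tolerances are invented): compute VPIP and PFR per session and flag players whose numbers stay unnaturally constant.

```python
import statistics

def vpip_pfr(hands):
    # hands: list of dicts with hypothetical booleans
    # 'put_money_in_preflop' and 'raised_preflop'.
    n = len(hands)
    vpip = sum(h["put_money_in_preflop"] for h in hands) / n
    pfr = sum(h["raised_preflop"] for h in hands) / n
    return vpip, pfr

def stats_look_robotic(session_vpips, tolerance=0.005):
    # Humans drift between sessions; a fixed-strategy bot's VPIP barely
    # moves. Flag a player whose per-session VPIP is nearly constant.
    return len(session_vpips) >= 10 and statistics.pstdev(session_vpips) < tolerance
```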
Back when online poker was fairly new, there were rumours, with limited evidence, that some poker client software took screenshots of suspicious players' desktops to see if they were running programs that assist them. However (even if this were true), running two computers to perform the two tasks independently would get around it.
Sharing information about repeat offenders between multiple sites would benefit the industry, if only the sites were honourable and run by competent, responsible people.
Some bots will probably be quite simple by design; if you can discover their playing style and see how they act in identical situations (note this is only possible with unsophisticated bots playing a very basic strategy), you can discover them reasonably quickly.
Inconsistent use of program features leans towards a player being genuine. For example, many poker sites have a 'Fold when it's my turn' button. If you are dealt a bad hand and are waiting for another player to act, many players will check this button. A bot may use these buttons too, but it would be at the extremes of usage frequency: it would probably use them either all the time or not at all. A human player, by contrast, might usually press 'autofold' but sometimes click fold manually, even in the most favourable conditions. For example, a genuine player usually presses auto-fold, but this time they don't; the action folds round to them with no other player acting, presenting them with the most favourable condition possible. If they now press fold, they would have been heavily inclined to press auto-fold from the start. This is inconsistent/unoptimised/random behaviour, consistent with being a human. Timing tells on when these features are clicked are further indicators. It is important to recognise that these are all indicators and not conclusive proof, and all of these behavioural indicators can be simulated easily.
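A tiny sketch of that 'extremes of frequency' check, with invented field names and thresholds: measure how often a folding player used the pre-select checkbox rather than clicking fold manually, and flag rates that are essentially 0% or 100%.

```python
def autofold_usage_is_extreme(used_autofold, low=0.02, high=0.98, min_folds=100):
    # used_autofold: one boolean per folded hand, True if the player had
    # pre-selected 'Fold when it's my turn' rather than clicking fold.
    # Humans land somewhere in the middle; a bot tends to sit at an extreme.
    if len(used_autofold) < min_folds:
        return False
    rate = sum(used_autofold) / len(used_autofold)
    return rate <= low or rate >= high
```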
Defeating a bot from a player's perspective
Try to log and collect as much data as possible using software like PokerTracker
Attempt to identify patterns in its playing style
Attempt to find relationships between bet size (in proportion to the pot and number of players) and hand strength (see the sketch after this list)
Try to calculate its hand ranges. A low-stakes bot probably won't be bluffing frequently enough to be of any significant strategic concern, so constructing highly accurate hand ranges for it shouldn't be too tricky.
Attempt to find leaks in its game via data analysis and trial and error. Once leaks/patterns have been found, attempt to exploit them repeatedly and avoid any other situations.
Where a human is capable of adaptation, bots probably are less so, and where humans are weighed down by the chains of tilt, results-oriented thinking and frustration, bots are not. You can use this to your advantage.
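A rough sketch of the bet-size analysis mentioned above, assuming showdown hands have been exported from a tracker (field names are hypothetical, and 'hand_strength' is whatever 0-to-1 strength measure you prefer): bucket the bot's bets by size relative to the pot and look at the average strength of the hands it showed down in each bucket.

```python
from collections import defaultdict

def bet_size_vs_strength(showdowns):
    # showdowns: list of dicts with hypothetical keys 'bet', 'pot' and
    # 'hand_strength' (0.0 = air, 1.0 = the nuts). Returns the average
    # shown-down strength per bet-size bucket.
    buckets = defaultdict(list)
    for hand in showdowns:
        ratio = hand["bet"] / hand["pot"]
        if ratio < 0.5:
            bucket = "small (under 1/2 pot)"
        elif ratio < 1.0:
            bucket = "medium (1/2 to 1 pot)"
        else:
            bucket = "large (pot or more)"
        buckets[bucket].append(hand["hand_strength"])
    return {b: sum(v) / len(v) for b, v in buckets.items()}
```

If the 'large' bucket maps almost exclusively to strong hands, the bot is not balancing its range and can be exploited simply by folding to its big bets.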
So in essence there is nothing you can do to stop it if the robot is clever enough to simulate realistic timing delays during decisions and create reasonable, realistic playing patterns. Throw in some random conditions and simple back-chat (the poker player's lexicon is usually fairly limited) and you have yourself an AI player that is going to be pretty hard to detect.
What bots might do to avoid detection
The key to avoiding detection is to think about the problem from as many angles as possible. You are attempting to simulate intelligent human behaviour in a very small and restrictive world. Most of the behavioural simulations you can run are fairly obvious, but the more inconsistent and unpredictable your bot is, the less likely it is to be discovered.
Create realistic playing schedules (i.e., 3–5 times a week, 4 hours per session with the odd week here and there off during the year).
Run the decision-making programs on a separate computer, controlling a zombie computer, in case any sites screen-capture.
Randomise action timings (don't act immediately; wait 0.5–2 seconds per action). See the sketch after this list.
Time down on big decisions. If a decision is borderline, calculate the decision then wait a while to simulate thought.
Random use of client software features. Simulate toilet breaks by clicking the "deal me out button" on all the tables and have a 5 minute break every now and then.
Simulated chat. Poker chat is usually very simple one-liners, rarely discussion or debate. Say things like "unlucky" or "stfu" at appropriate, detectable moments. Or even have the coder monitor the bot and engage in chat during execution.
Ensure mouse movements are realistic. If tables are tiled, don't make a decision on the top-left table then instantly make one on the bottom-right table. Most sites' software now offers keyboard shortcuts; these may be preferable to mouse movement.
Do things that AI classifiers simply won't be expecting. For example, once a year phone them up with a simple, non-complex query ("Help, I can't log in today!" or "The Internet is down!"). Unlikely to make much difference, but if the person working for the poker company is smart enough, they might recognise it as a reliable indicator.
Sporadic losing sessions. Tilt can be simulated and the bot can play badly and lose some money every now and then. Everybody tilts at some point.
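A minimal sketch of the timing ideas above (all the delay ranges are guesses at human behaviour, not measured values): wait a randomised amount before every action, tank longer on big or borderline decisions, and occasionally disappear for a few minutes.

```python
import random
import time

def human_like_delay(pot_size, decision_is_close):
    # Returns a delay in seconds that loosely mimics a human player.
    delay = random.uniform(0.5, 2.0)        # base reaction time
    if pot_size > 100:                       # big pot: pause for thought
        delay += random.uniform(1.0, 6.0)
    if decision_is_close:                    # borderline spot: tank longer
        delay += random.uniform(3.0, 15.0)
    if random.random() < 0.01:               # rare distraction or break
        delay += random.uniform(60, 300)
    return delay

time.sleep(human_like_delay(pot_size=150, decision_is_close=True))
```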
The concern is also that poker websites don't particularly care if bots are running on their networks; each player is worth a large amount in rake, and from a purely cynical business point of view the only theoretical downside is bad press if it were discovered.
Even when blatant exploits have been discovered (search Google for the Cereus network scandals or the Absolute Poker scandal; it's quite shocking), the business appears to survive and remain healthy, only losing well-educated, winning players (of which there are not many). This increases the proportion of less skilled players on the network, which in turn attracts the good players back. It's a good old-fashioned catch-22, and an excellent argument for proper market regulation.
It is important to note that for every game a Nash equilibrium exists. Online poker, the way it runs now, has a limited lifespan; it is going to have to move into something more social (webcam/VoIP) for anyone to trust it in the future (if people trust it at all), as bots will eventually take over, being mathematically superior and psychologically immune. The poker AI community is very active, fuelled by academia and/or capital benefit.
Simpler versions of poker such as limit poker have been very nearly solved in small search spaces. It's only a matter of time before more complex versions of the game (No Limit variations/Pot Limit Omaha etc) become beatable for artificial players.
Conclusion
Sophisticated bots just can't be detected until the industry shifts to a more social online gaming setting. Even that won't solve the problem, but it will certainly make it harder for bots to win at the lower levels. We've already seen a slight shift with the release of PKR: 3D and a more interactive, lower hands-per-hour version of the other sites, where multi-tabling is quite tricky for a player to accomplish.
The problem also suffers from the nature of the industry, which is yet another reason to stick to the larger, more reputable websites, where reputation has become more and more integrated into the business model. Lack of transparency, and feigned transparency, don't help the cause.
The real challenge currently for bot developers is to write a winning algorithm, and this is not as trivial as it seems. Everyone who plays poker considers themselves a good, winning, or at least break-even player, which is simply not true. That is why people continue to play even when they lose money: they are under the illusion that they are simply unlucky, or that their style of play is misunderstood. This arrogance and weakness in human psychology has cost losing players a lot of money and is the fundamental reason that poker can still be profitable.
Poker is a vastly complicated game that takes years to get good at (The old adage remains true, "Ten minutes to learn, a lifetime to master"). The luck element is extremely limited in the long term.
Like any other profession, to get good you need to study for hundreds upon hundreds of hours, and play for many thousands. You will understand things that less experienced players won't understand, and spot things the less experienced won't spot. The learning goes on for a very, very long time, perhaps longer than we can ever live. It's a complicated game.
How often have you seen a high-stakes cash game on television and heard someone shout at it, "That's an easy call!", proving that amateurs really don't understand or recognise sophistication in play, and truly believe the game at that level is still ultimately simple. It isn't. Those high-stakes players are (a lot of the time) on television because they are really, really good. There is also probably a complicated meta-game being played, whose existence our amateur can't recognise. The amateur wouldn't stand over a chess master and shout at them to move their knight, yet because poker is a game of imperfect information, their psychology makes them truly believe what they are saying. Like in chess, decisions can be intricate, sensitive and extremely important to the overall game. As the game increases in complexity, trivial decisions are not so trivial anymore, because your opponent expects them.
Once you move your bot or your game up the levels, you inevitably come across a larger population of more skilled players. Then the complexity of your strategy has to go up to the next level, taking into account table image, range balancing, and sophisticated, intelligent bluffing (i.e., not just bluffing at weakness, but bluffing at ranges and bluffing on image, etc.), with more detailed hand range analysis. It really is a different game as you move up.
Once a winning bot has been written, without doubt the coder will have enough skill, knowledge and common sense to apply the bot in an undetectable fashion. This is trivial for them.
So there really is nothing you can do. If you want to play online, understand the risks. Never risk more money than you can afford, and attempt to keep accurate records of spending so you don't have a misguided, unrealistic and ultimately damaging overestimation of your own ability. Have stop-losses, and leave the table if you don't have an edge, or if you are unsure whether you have an edge! Of course, if everyone did that no one would win; that's the predatory and exploitative nature of the game, that's where the competition comes from, and that's what makes it fun.
Another thought on messing with the screen to make it hard to scan:
Make the card out of a whole slew of different colors, close in human-eye terms but not the same. This would make it harder to pick out the features to read. On the flip side, put fake writing on the card in colors that the human eye won't separate from the background.
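A small sketch of that idea using Pillow, purely illustrative (a real client would do this in its own rendering layer): draw the rank and suit in several colours that are nearly identical to the eye, and add decoy text in a colour almost indistinguishable from the background.

```python
import random
from PIL import Image, ImageDraw

WHITE = (255, 255, 255)

def jitter(colour, amount=6):
    # A colour within a few units of the original: invisible to a human,
    # but enough to break exact colour-key matching.
    return tuple(max(0, min(255, c + random.randint(-amount, amount))) for c in colour)

def draw_card(rank="A", suit="s"):
    card = Image.new("RGB", (60, 90), WHITE)
    draw = ImageDraw.Draw(card)
    # Decoy text in a colour the eye cannot separate from the background.
    draw.text((8, 60), "K h", fill=jitter(WHITE, 3))
    # Real rank/suit, drawn character by character in slightly different near-blacks.
    x = 8
    for ch in rank + suit:
        draw.text((x, 8), ch, fill=jitter((10, 10, 10)))
        x += 14
    return card

draw_card().save("card.png")
```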
Is the problem with bots the fact that they play better than decent human players, or that they can wait around 24/7 for bad players to appear and then try to milk them?
Also, would it be "legitimate" or "cheating" for someone to have a computer sitting next to him while he played poker, consulting that other computer for advice?
I'm not sure how one can claim the solution space for limit poker is "solved" when the optimal strategy for a player will be influenced by what is known about the opponents. How can any attempt at analyzing players claim to be so perfect that it could not be improved?
If you have access to a lot of matches, you can take a data mining approach. The playing strength of an AI should be pretty consistent, while there are probably simple patterns for humans - weaker in the first few warm-up rounds, and strength deteriorates after playing for a long time. Also, human decision times probably go up when there is more money at stake.
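A sketch of the last point, with a hypothetical per-decision record format: compute the correlation between the money at stake and the decision time. For humans it tends to be clearly positive; a bot that acts instantly regardless of stakes shows almost none.

```python
import math

def stake_time_correlation(decisions):
    # decisions: list of (pot_size, decision_time_seconds) tuples.
    # Pearson correlation; a value near zero is a (weak) bot indicator,
    # since a human tends to think longer when more money is at stake.
    pots = [p for p, _ in decisions]
    times = [t for _, t in decisions]
    n = len(decisions)
    mean_p, mean_t = sum(pots) / n, sum(times) / n
    cov = sum((p - mean_p) * (t - mean_t) for p, t in decisions)
    sd_p = math.sqrt(sum((p - mean_p) ** 2 for p in pots))
    sd_t = math.sqrt(sum((t - mean_t) ** 2 for t in times))
    return cov / (sd_p * sd_t)
```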
If you have access to mouse moves (or at least click locations which is true even for web apps), it should be fairly simple to recognize bots, except for the most sophisticated ones. Humans don't move the mouse in an exact straight line, they have speedup and slowdown periods, statistically describable click location distributions, etc.
There are much easier ways. Yes, a lot of the suggestions are right, and needed, but about 90% of the frauds are detected in a far simpler manner.
If someone lets a bot work for him, he will, after some time, want a second bot working for him (on another machine or whatever). But he will use the same password, as it is hard to remember two (<--- sarcastic).
What's left: check for accounts with the same game behaviour and the same password hash.
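A toy sketch of that check, with an invented schema and a crude similarity test: group accounts by password hash and flag groups whose playing statistics are also nearly identical.

```python
from collections import defaultdict

def find_linked_accounts(accounts, max_stat_gap=0.02):
    # accounts: list of dicts with hypothetical keys 'name',
    # 'password_hash' and 'vpip' (any behavioural statistic will do).
    by_hash = defaultdict(list)
    for acct in accounts:
        by_hash[acct["password_hash"]].append(acct)
    flagged = []
    for group in by_hash.values():
        if len(group) < 2:
            continue
        vpips = [a["vpip"] for a in group]
        if max(vpips) - min(vpips) <= max_stat_gap:
            flagged.append([a["name"] for a in group])
    return flagged
```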
Have a look at Ajax Control Toolkit NoBot:
NoBot employs a few different anti-bot techniques:
* Forcing the client's browser to perform a configurable JavaScript calculation and verifying the result as part of the postback. (Ex: the calculation may be a simple numeric one, or may also involve the DOM for added assurance that a browser is involved)
* Enforcing a configurable delay between when a form is requested and when it can be posted back. (Ex: a human is unlikely to complete a form in less than two seconds)
* Enforcing a configurable limit to the number of acceptable requests per IP address per unit of time. (Ex: a human is unlikely to submit the same form more than five times in one minute)
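A minimal sketch of the third technique, a sliding-window per-IP rate limiter (the limits are illustrative; NoBot itself is configured declaratively rather than coded like this):

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    # Rejects submissions from an IP once it exceeds max_requests
    # within window_seconds.

    def __init__(self, max_requests=5, window_seconds=60):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self.history = defaultdict(deque)

    def allow(self, ip_address):
        now = time.monotonic()
        window = self.history[ip_address]
        while window and now - window[0] > self.window_seconds:
            window.popleft()          # drop timestamps outside the window
        if len(window) >= self.max_requests:
            return False
        window.append(now)
        return True

limiter = RateLimiter()
print(all(limiter.allow("10.0.0.1") for _ in range(5)))  # True: within the limit
print(limiter.allow("10.0.0.1"))                         # False: sixth request in a minute
```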