Types of poker bots: How they see, click, think and decide
Not all poker bots are the same. Behind the generic term “poker bot” hides a range of technologies that differ in how they read the game, interact with the poker app, compute decisions, and choose their strategy. Understanding these differences is the key to choosing the right tool — or recognizing one at your table.
For: players evaluating bot options; farmers choosing technology for scaling; club owners understanding threats; anyone who wants to go beyond marketing buzzwords and understand how poker bots actually work.
Why you need to understand bot types
There are dozens of solutions on the market. Sellers promise “the best AI” and “GTO strategies,” but behind the marketing often hides a primitive bot running on hardcoded rules.
Understanding bot types helps you:
- Avoid overpaying for outdated technology
- Choose the right solution for your specific goals
- Assess real capabilities and limitations
- Understand what you’re up against if you encounter bots at your table
Four dimensions of bot classification
Most discussions about “bot types” focus only on the decision-making approach — rule-based vs AI. But that’s just one of four key dimensions:
- How the bot reads the game — the technology used to understand what’s happening at the table
- How the bot interacts with the app — the method used to click buttons and perform actions
- Where decisions are computed — locally, on a remote server, or both
- How the bot decides what to do — rules, solver lookups, neural networks, or a combination
Each dimension affects performance, detection risk, and scalability. A bot with a powerful AI brain but crude screen scraping will break every time the poker room updates its client. A bot with perfect stealth but a rule-based strategy will lose money over the long run. The combination matters.
How the bot reads the game
Before making any decision, the bot needs to understand the current game state: cards, pot size, positions, available actions. There are several approaches — from simple to sophisticated.
Screen scraping (template-based)
The oldest and most widespread method. The bot captures screenshots of the poker client and compares pixel patterns against pre-made templates called “table maps.” Each map defines rectangular regions on the screen — where the cards are, where the pot is displayed, where the buttons appear — and uses pattern matching or hashing to identify them.
Examples: OpenHoldem (open source, uses Bob Jenkins hashing for card recognition), Shanky, Warbot, Inhuman.
Pros: non-invasive (doesn’t modify the poker client), works with any poker room if you have the right table map, large open-source community.
Cons: extremely brittle — breaks every time the poker room updates its interface or changes fonts. Every room and table theme needs its own table map. Vulnerable to anti-bot countermeasures like font scrambling and pixel randomization.
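A minimal sketch of the template approach, assuming a hypothetical table map and a screenshot represented as rows of raw pixel bytes (a real implementation would crop a captured image with PIL or OpenCV; all names and coordinates here are illustrative):

```python
import hashlib

# Hypothetical "table map": named screen regions as (x, y, width, height).
TABLE_MAP = {"hero_card_1": (410, 520, 30, 40)}

def region_hash(screenshot, region):
    """Hash the raw pixel bytes of one rectangular region.

    `screenshot` is a dict of row -> bytes, a stand-in for a real capture.
    """
    x, y, w, h = region
    raw = b"".join(screenshot[row][x:x + w] for row in range(y, y + h))
    return hashlib.md5(raw).hexdigest()

def recognize(screenshot, templates):
    """Match each mapped region's hash against known card templates."""
    result = {}
    for name, region in TABLE_MAP.items():
        h = region_hash(screenshot, region)
        result[name] = templates.get(h, "unknown")
    return result
```

The brittleness is visible in the code itself: any change to the region coordinates, fonts, or pixel values produces a different hash, and every template must be rebuilt.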
AI-based screen recognition
An evolution of screen scraping that replaces rigid pixel matching with machine learning. A trained neural network (CNN, YOLO) recognizes cards and UI elements even when the interface changes. Some implementations use multimodal LLMs (GPT-4V) to interpret entire game screenshots in one pass.
Pros: more resilient to UI changes, doesn’t require pixel-perfect table maps, can be retrained quickly.
Cons: requires GPU for real-time inference, needs training data per platform. LLM-based approaches add API latency and cost.
Traffic interception (MITM)
The bot intercepts network traffic between the poker client and server via a man-in-the-middle proxy. By decrypting the SSL/TLS connection, it gets structured game data directly — no OCR errors, no pixel matching. Requires reverse-engineering the client’s network protocol and bypassing certificate verification.
Pros: perfectly accurate structured data, immune to visual interface changes.
Cons: modern clients use certificate pinning, binary integrity checks, and obfuscated protocols. Breaks with every protocol update. The most legally and ethically problematic approach.
Memory reading
The bot reads game state directly from the poker client’s process memory (RAM) — either externally via OS APIs (ReadProcessMemory on Windows) or by injecting a DLL into the client process. Can also hook internal drawing functions (DrawTextEx, ExtTextOut) to intercept all text the client renders on screen.
Pros: extremely accurate, low CPU overhead, can access data not visible on screen.
Cons: the most invasive approach — easiest for anti-cheat to detect. Clients scan for injected DLLs, verify memory integrity, and block external process access. Breaks with client updates that change memory layouts or function names.
Direct protocol emulation
The most advanced approach: the bot replaces the poker client entirely and communicates with the server using a fully reverse-engineered protocol. No screen to read, no client to interact with — the bot is the client.
Pros: runs headless on servers, massively scalable, fastest possible data flow.
Cons: enormous development effort (full protocol reverse engineering), breaks with every server update, missing client telemetry (mouse events, window focus, performance metrics) can trigger detection. Reserved for large-scale bot operations.
Hand history parsing
Most poker clients write hand histories to local files in real time. The bot monitors these files and parses game data as new hands are recorded. This is the same technology behind HUD software like PokerTracker and Hand2Note.
Pros: virtually undetectable, simple to implement, reliable data.
Cons: hand histories are typically written after the hand completes — not suitable for real-time decision-making. Used as a supplementary source for opponent profiling and statistics.
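A toy parser along these lines, assuming a simplified one-line action format (real hand-history layouts differ per room, so the regex is illustrative, not universal):

```python
import re

# Matches lines like "Hero: raises $3.00" or "Villain1: folds".
ACTION_RE = re.compile(
    r"^(?P<player>\S+): (?P<action>folds|checks|calls|bets|raises)"
    r"(?: \$?(?P<amount>[\d.]+))?"
)

def parse_actions(lines):
    """Extract (player, action, amount) tuples from hand-history lines."""
    actions = []
    for line in lines:
        m = ACTION_RE.match(line.strip())
        if m:
            amount = float(m["amount"]) if m["amount"] else 0.0
            actions.append((m["player"], m["action"], amount))
    return actions
```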
| Method | Accuracy | Stealth | Resilience to updates | Scalability |
|---|---|---|---|---|
| Screen scraping (template) | Medium | High | Low | Good |
| AI screen recognition | High | High | Medium | Good |
| Traffic interception (MITM) | Very high | Medium | Low | Poor |
| Memory reading | Very high | Low | Low | Poor |
| Protocol emulation | Very high | Variable | Low | Excellent |
| Hand history parsing | High | Very high | Medium | Good |
How the bot interacts with the poker app
Once the bot knows what to do, it needs to execute the action — click a button, enter a bet size, fold. The interaction method directly affects detection risk.
Software input emulation
The most common approach. The bot uses OS-level APIs (SendInput on Windows, xdotool on Linux) to simulate mouse movements and clicks. Frameworks like AutoHotkey and PyAutoGUI make this accessible even to beginners.
Detection risk: moderate to high. The operating system marks software-injected events with a special flag (LLMHF_INJECTED on Windows) that poker clients can detect via low-level mouse hooks. Mouse movement patterns (straight lines, uniform speed, constant click duration) and action timing are additional giveaways. A cruder variant — PostMessage — sends messages directly to the window without generating real input events, making detection even easier.
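The "straight lines, uniform speed" giveaways can be avoided with a curved, jittered path. A sketch using a quadratic Bezier with ease-in/ease-out timing (all constants are illustrative, not any product's actual humanizer):

```python
import math
import random

def human_path(start, end, steps=25):
    """Generate a curved mouse path with jitter and variable speed."""
    (x0, y0), (x1, y1) = start, end
    # Random control point bends the path away from a straight line.
    cx = (x0 + x1) / 2 + random.uniform(-60, 60)
    cy = (y0 + y1) / 2 + random.uniform(-60, 60)
    points = []
    for i in range(steps + 1):
        # Ease-in/ease-out: slow near the endpoints, fast mid-flight.
        t = (1 - math.cos(math.pi * i / steps)) / 2
        x = (1 - t) ** 2 * x0 + 2 * (1 - t) * t * cx + t ** 2 * x1
        y = (1 - t) ** 2 * y0 + 2 * (1 - t) * t * cy + t ** 2 * y1
        # Small per-point jitter mimics hand tremor.
        points.append((x + random.uniform(-1.5, 1.5),
                       y + random.uniform(-1.5, 1.5)))
    return points
```

Each point would then be fed to the input API (SendInput, xdotool) with randomized inter-point delays rather than teleporting the cursor to the target.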
Hardware input emulation
Uses physical devices — Arduino/Teensy microcontrollers, dedicated hardware like KMBox, or kernel-level virtual drivers (Interception) — that present themselves as standard USB mice and keyboards. The operating system receives genuine hardware input through the normal HID driver stack, with no software injection flags.
Detection risk: low. Events are indistinguishable from real hardware at the OS level. The device can spoof its USB vendor/product ID to appear as a common brand mouse. Main vulnerability: behavioral analysis of movement patterns, which still need to look human. Also, anti-cheat systems can theoretically enumerate connected USB devices and flag unusual hardware.
Mobile touch emulation
For mobile poker apps running natively or in Android emulators (LDPlayer, BlueStacks, NoxPlayer), bots use ADB commands, low-level kernel input injection (sendevent), or Android Accessibility Services. The sendevent approach allows controlling touch pressure and contact area — details absent from simple ADB taps.
Detection risk: moderate. Poker apps increasingly detect emulator environments (checking device fingerprints, sensors, battery behavior, screen resolution), root/ADB access, and active accessibility services. Real fingers produce variable pressure and touch area that simulated taps lack.
Traffic injection and protocol commands
Paired with MITM or protocol emulation: the bot sends action commands directly through the network, bypassing the UI entirely. No mouse movements to humanize, no click timing to optimize — the action is transmitted as a data packet.
Detection risk: variable. No UI-level detection is possible, but server-side protocol analysis (sequence numbers, timing, TLS fingerprinting) and missing client telemetry can flag the connection.
Full breakdown in the article “How Rooms Catch Bots: Detection Methods 2026”
Where decisions are computed
The computing architecture determines what strategies are feasible and how the bot scales.
Local (on-device)
Everything runs on the user’s PC or smartphone. The bot reads the screen, computes the decision, and executes the action — all on one machine.
Pros: zero network latency, no server dependency, user data stays local.
Cons: limited by hardware — you can’t run a real-time GTO solver or a large neural network on a budget laptop. No cross-user opponent data sharing. Strategy updates require downloads to each machine.
Remote (cloud/server)
The bot client on the device captures game state and sends it to a powerful remote server for decision computation. The server returns the optimal action; the client executes it.
Pros: unlimited computational power, centralized opponent database aggregating data across all users, instant strategy updates deployed server-side.
Cons: network latency (100-500ms per decision), server downtime affects all users, regular network traffic to an external server during poker sessions can be flagged.
Hybrid: Brain + Clicker
The dominant architecture for modern AI bots. A lightweight Clicker runs on the user’s device — it reads the poker app, sends game state to the server, receives the decision, and executes it. The heavy Brain runs on dedicated server infrastructure — neural network inference, opponent database lookups, strategy computation.
Common preflop decisions can be cached locally for instant response. Complex postflop situations get full server-side analysis. If the connection drops, the bot falls back to cached decisions gracefully.
This is the architecture PokerBotAI uses: the Clicker handles interaction with the poker app on your device, while the Brain processes decisions on dedicated servers in milliseconds.
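The fallback flow might look like this sketch, where `query_brain` is a hypothetical network call and the cache entries are illustrative:

```python
# Local preflop cache: common spots answered instantly without a round trip.
PREFLOP_CACHE = {("AKs", "BTN", "unopened"): "raise_2.5bb"}

def decide(state, query_brain, cache=PREFLOP_CACHE):
    """Try the remote Brain; fall back to the local cache on failure."""
    key = (state["hand"], state["position"], state["action_so_far"])
    try:
        return query_brain(state)  # full server-side analysis
    except ConnectionError:
        # Graceful degradation: a cached baseline beats timing out.
        return cache.get(key, "fold")
```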
How the bot makes decisions
This is the dimension most people think of when they hear “bot types” — the strategy engine. It has evolved dramatically over two decades, from hand charts to neural networks trained on billions of hands.
Rule-based (profile-based)
The oldest type. The bot follows pre-written rules and hand charts: “If hand is AA and position is late — raise 3bb.” Advanced profiles add thousands of conditions, action randomization, stack-depth adjustments, and position-aware logic. Some even include pseudo-randomization and ICM-aware tournament modes.
Examples: Shanky (BonusBots), OpenHoldem (open source), Warbot, Inhuman.
Even the most sophisticated rule sets hit a fundamental ceiling: No-Limit Hold’em has approximately 10^160 possible game states — no hand-authored rule set covers even a meaningful fraction. The bot never learns, never adapts. After 500-1000 hands, patterns become visible to attentive opponents and anti-bot systems alike.
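The fixed if/else structure of such a profile, reduced to a toy Python sketch (real profiles use dedicated rule languages like Shanky's PPL and run to thousands of conditions; these hands and sizes are illustrative):

```python
def profile_action(hand, position, facing_raise):
    """Static decision chart: same inputs always produce the same output."""
    premium = {"AA", "KK", "QQ", "AKs"}
    if hand in premium:
        return "reraise" if facing_raise else "raise_3bb"
    if position in ("BTN", "CO") and not facing_raise:
        return "raise_2.5bb"  # steal from late position
    return "fold"
```

The predictability criticized above is structural: nothing in the function depends on who the opponent is or what has happened in previous hands.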
Pros
- Quick launch — pick a profile and start
- Low cost and minimal hardware requirements
- Full customization if you understand the rule syntax
- Wide range of historically supported rooms
Cons
- Predictable and easily exploited by adaptive opponents
- No adaptation — the strategy never changes regardless of who’s at the table
- Negative long-term win rate against regulars and AI bots
- High detection risk — fixed patterns are easy to fingerprint
Solver-based (GTO lookup tables)
Instead of hand-written rules, the bot uses pre-computed solutions from GTO solvers (PioSolver, GTO+, MonkerSolver) as lookup tables. For each game state, the solver has calculated the theoretically optimal action frequencies using Counterfactual Regret Minimization (CFR) — an algorithm that converges on Nash equilibrium through billions of self-play iterations.
Example: on a K♠7♦2♣ flop in position against a raise, the solver might prescribe: call 45%, 3-bet 30%, fold 25%. The bot randomizes actions according to these frequencies.
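Frequency-based randomization itself is only a few lines. A sketch using the spot above:

```python
import random

def mixed_action(frequencies, rng=random):
    """Sample one action according to solver-prescribed frequencies."""
    actions = list(frequencies)
    weights = [frequencies[a] for a in actions]
    return rng.choices(actions, weights=weights, k=1)[0]

# Solver output for the K♠7♦2♣ spot from the text.
SPOT = {"call": 0.45, "3bet": 0.30, "fold": 0.25}
```

Over a large sample the bot's actions match the prescribed mix, which is exactly what makes a balanced strategy hard to exploit.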
The storage problem
A single flop solution (one preflop scenario, all turn and river runouts) can occupy 50 MB to 2+ GB depending on bet sizing tree complexity. There are 1,755 strategically distinct flops, each needing solutions for 15-25 common preflop scenarios. Full coverage requires an estimated 17-100+ terabytes. No consumer machine stores this.
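One way those figures line up, under mid-range and upper-end assumptions drawn from the ranges quoted above (the specific per-flop sizes are assumptions for the arithmetic, not measured values):

```python
flops = 1755  # strategically distinct flops

# Mid-range assumptions: ~20 preflop scenarios per flop at ~500 MB each.
mid_tb = flops * 20 * 500 / 1_000_000    # MB -> TB, about 17.6 TB

# Upper end: 25 scenarios at 2 GB (2000 MB) per solution.
high_tb = flops * 25 * 2000 / 1_000_000  # about 87.8 TB
```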
Practical limitations
- Bet size mismatches: if the solution covers 33%, 67%, and 100% pot bets, but the opponent bets 52% — the bot must approximate
- Multiway pots: solvers struggle computationally with 3+ players. Most lookup bots use heads-up solutions even in multiway pots — a significant approximation
- Non-standard scenarios: limped pots, unusual stack depths, exotic bet lines — if it wasn’t pre-computed, the bot has no principled answer
More details in the article “GTO Strategy: Why the Bot Becomes Invulnerable”
Real-time GTO solving
Instead of pre-computed tables, the bot solves the current game situation in real time — computing Nash equilibrium during play. This eliminates storage problems and handles any bet size or scenario.
What it took at the research level
- Libratus (2017, Carnegie Mellon) — defeated top professionals in heads-up NLHE using a supercomputer with 600 compute nodes. Real-time endgame solving: 10-20 seconds per decision on multiple CPU cores
- Pluribus (2019, Carnegie Mellon / Facebook AI) — beat 6 professionals in 6-player NLHE. Blueprint computed on a 64-core server with 512 GB RAM over 8 days. Real-time search: 2 CPU cores, 128 GB RAM, 28 seconds per decision
- DeepStack (2017, University of Alberta) — combined real-time solving with neural value estimation on a single GPU, dramatically reducing computational requirements
Consumer hardware feasibility
Pluribus’s real-time component (2 cores, 28 seconds) sounds accessible, but the 128 GB RAM requirement exceeds typical consumer machines. With coarser abstractions (fewer bet sizes, simplified card groupings), real-time solving can fit in 16-32 GB RAM at 5-15 seconds per decision — but quality degrades proportionally. Full-fidelity real-time solving at Pluribus level on a home PC is not yet practical.
AI and neural networks
AI bots use machine learning models that evaluate game situations and select actions — not by following rules or looking up solutions, but by recognizing patterns learned from massive data. There are several sub-approaches:
Supervised learning
A neural network trained on databases of hands played by winning players. The model learns to imitate expert behavior: given a game state, it outputs the action distribution observed in successful play.
Limitation: can only be as good as the training data. Doesn’t understand why a play is correct — just copies patterns. Against novel situations, it has no principled fallback. This was the dominant approach in early academic poker AI (University of Alberta’s Loki system, late 1990s).
Reinforcement learning / Self-play
The approach behind the biggest poker AI breakthroughs. The AI plays against itself billions of times, tracking counterfactual regret — how much better each alternative action would have been at every decision point. Over time, the strategy converges to Nash equilibrium without any human training data. The AI discovers optimal play from scratch.
This is how Cepheus (solved Limit Hold’em, 2015), Libratus, and Pluribus were built. Training is computationally expensive (millions of core-hours), but the resulting model is mathematically grounded.
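The heart of CFR is regret matching: positive cumulative regrets are normalized into the next iteration's strategy, so actions that would have done better get played more often. A minimal sketch of just that step (the full algorithm layers this over a recursive game-tree traversal):

```python
def regret_matching(cum_regrets):
    """Turn a vector of cumulative regrets into action probabilities."""
    positives = [max(r, 0.0) for r in cum_regrets]
    total = sum(positives)
    if total == 0:
        # No positive regret anywhere: play uniformly at random.
        return [1.0 / len(cum_regrets)] * len(cum_regrets)
    return [p / total for p in positives]
```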
Deep learning + game theory
The cutting edge: neural networks that approximate CFR-based solutions with dramatically less computation. Instead of solving from scratch each time, a trained network instantly estimates the value of any game state — enabling real-time play on modest hardware.
Key examples: DeepStack (neural value networks + limited CFR search on a GPU), ReBeL (Facebook AI, 2020 — recursive belief-based learning), Student of Games (DeepMind, 2023 — unified approach for both perfect and imperfect information games). The trend: theoretical soundness of game theory with the speed of neural networks.
Hybrid (the modern standard)
No single “pure” approach works optimally on its own. Pure GTO leaves money on the table against weak players. Pure exploitation is vulnerable to counter-exploitation. Pure AI without a poker-theoretic foundation is an expensive experiment. The most effective modern bots combine multiple approaches:
- GTO baseline — a theoretically sound default strategy, protecting against exploitation by strong opponents
- AI evaluation — neural networks that assess any game state, including those not covered by pre-computed solutions
- Exploitative adjustments — as data accumulates on a specific opponent (typically 200-300+ hands), the bot identifies their weaknesses and deviates from baseline to maximize profit
Typical progression at the table:
- First ~100 hands against a new opponent — GTO baseline, safe and unexploitable
- 100-300 hands — soft exploitative adjustments based on emerging patterns
- 300+ hands — full adaptation to the opponent’s specific tendencies
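The sample-size gating above can be sketched as a single function (the thresholds mirror the progression in the text; the exact values and the linear ramp are illustrative, not any product's actual tuning):

```python
def exploit_weight(hands_seen):
    """How far the bot may deviate from the GTO baseline (0.0 to 1.0)."""
    if hands_seen < 100:
        return 0.0  # pure baseline: safe, unexploitable default
    if hands_seen < 300:
        # Ramp deviations up linearly as the sample grows.
        return 0.5 * (hands_seen - 100) / 200
    return 1.0      # full adaptation to the opponent's tendencies
```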
This is the current industry standard for serious poker bots. PokerBotAI uses precisely this approach.
Exploitative play: a layer, not a type
An important clarification: exploitation is not a separate category of bot. It’s a strategic layer that enhances any base approach. A rule-based bot can have simple exploitation rules (“if opponent folds to 3-bet > 65%, 3-bet wider”). An AI bot can use neural network-driven opponent modeling for sophisticated exploitation. The effectiveness depends on the quality of the base strategy and the data available.
On its own, without a sound GTO or AI base, pure exploit logic is vulnerable and unstable — a smart opponent can counter-exploit predictable adjustments. Exploitation works best as an overlay on a solid foundation.
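The fold-to-3-bet rule quoted above, written as an overlay that returns the baseline unchanged unless the stat clears its threshold on a minimum sample (threshold and multiplier values are hypothetical):

```python
def adjust_3bet_range(baseline_pct, fold_to_3bet, sample, min_sample=50):
    """Widen the 3-bet range only against a proven over-folder."""
    if sample >= min_sample and fold_to_3bet > 0.65:
        return min(baseline_pct * 1.5, 100.0)  # 3-bet ~50% wider
    return baseline_pct  # insufficient data or no leak: keep the baseline
```

The `min_sample` guard is what keeps the overlay stable: without it, the bot would chase noise from a handful of hands, which is exactly the counter-exploitable behavior described above.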
LLMs in poker: the 2025 experiment
In October 2025, PokerBattle.ai hosted the first-ever poker tournament exclusively for large language models — nine models including OpenAI o3, Claude, Grok, and Gemini competed over 3,800 hands of No-Limit Hold’em. OpenAI o3 won the tournament.
However, detailed analysis (by Octopi Poker and others) revealed critical weaknesses across all LLMs: near-absence of genuine bluffing, poor range construction, inability to randomize actions for balanced play, and recurring factual errors — including misidentifying their own position, confusing hand combinations, and miscalculating equity. The consensus: even the best LLMs couldn’t match an average human poker player.
In February 2026, Google DeepMind added poker to its Kaggle Game Arena benchmarks, further confirming that general-purpose language models are not competitive poker engines.
LLMs lack specialized training on billions of poker hands, lack real-time opponent modeling, and cannot maintain the mixed strategies that competitive poker requires. However, they can be useful as auxiliary tools — for post-session hand analysis, strategy discussion, and reviewing decision logic.
Comparison: decision-making approaches
| Criterion | Rule-based | Solver (GTO) | AI | Hybrid |
|---|---|---|---|---|
| Adaptability | None | None | Yes | Yes |
| Defense against exploitation | Weak | Maximum | High | High |
| Profit vs weak players | Medium | Low | High | Maximum |
| Profit vs strong players | Negative | Stable | Stable | Stable |
| Detectability | High | Medium | Low | Low |
| Hardware requirements | Low | High (storage) | Servers | Servers |
| Development complexity | Low | Medium | Very high | Very high |
Which type to choose
For learning poker
Bots are not the primary tool for learning — trainers, solvers, and coaching are better suited for that. However, an AI bot in Manual Mode can be a powerful supplement: you see the neural network’s decisions in real time and compare them with your own thinking at real tables against real opponents.
For earning at micro stakes
Hybrid. At low stakes, there are many weak players whose mistakes need to be exploited. Pure GTO leaves money on the table. A hybrid bot adapts to each opponent while maintaining a safe baseline that protects against stronger players.
For mid and high stakes
Hybrid or AI with strong GTO foundations. You need defense against strong regulars combined with the ability to exploit mistakes when they appear. Pure exploitation is dangerous — opponents at higher stakes can counter-exploit predictable adjustments.
For club protection
Understanding how different bot types work across all four dimensions — data acquisition methods, input techniques, computation architectures, and strategic patterns — is essential for recognizing and countering bot threats in your club.
More details in the article “How to Protect Your Club from Bots”
Common misconceptions
“A GTO bot is unbeatable”
GTO ensures unexploitability — no opponent can find a winning counter-strategy. But unexploitability is not the same as maximum profit. Against weak players, a GTO bot earns less than a hybrid that exploits their mistakes. And pure GTO bots virtually don’t exist in practice — the computational requirements for full real-time GTO solving exceed current consumer hardware.
“AI is just a marketing term”
It depends on the seller. Real AI bots genuinely use neural networks trained on millions of hands through self-play. But some vendors slap the “AI” label on ordinary rule-based bots. The difference: ask about the architecture, training data, and adaptation mechanism. Vague answers usually mean it’s not real AI.
“All bots use screen scraping”
Screen scraping (template-based or AI-based) is the most common and accessible method, but far from the only one. Traffic interception, memory reading, and direct protocol emulation all exist in the wild. Each has different stealth profiles and vulnerability characteristics.
“Hardware input makes a bot undetectable”
Hardware input emulation (Arduino, KMBox) eliminates software-level detection flags, but server-side behavioral analysis works regardless of how the bot clicks. Timing patterns, bet sizing consistency, session length, win rate — all analyzed server-side. Stealth requires humanization across all dimensions, not just the input method.
“Profile bots are obsolete”
Not entirely. Against a mass of weak players, even a rule-based bot with a reasonable profile can show short-term profit. They’re fine for exploring how poker bots work and experimenting with different strategies. But for sustained earning against adaptive opponents and ever-improving anti-bot systems, they’re outclassed by AI and hybrid approaches.
What’s inside PokerBotAI
The PokerBotAI system is a hybrid AI bot with a three-component architecture:
- Hand History database — 300+ million real hands from poker rooms dating back to the 2000s, plus 7+ billion synthetic and solver data points
- Neural network — trained on this data to evaluate the EV of each action in real time
- Expert algorithms — a GTO base for defense against exploitation + an exploit module for adapting to specific opponents’ patterns
Architecture: Brain + Clicker. The Clicker runs on your device — AI-based screen recognition, humanized input execution, handling of pop-ups and UI quirks. The Brain runs on dedicated server infrastructure — neural network inference, opponent database lookups, strategy computation. Decisions are computed in milliseconds.
The bot doesn’t play fixed lines. It calculates the EV of each action considering all available information and selects the optimal decision. For each room, behavior is individually tuned — accounting for platform specifics, security systems, and interface nuances. The bot’s actions mimic human behavior: timing randomization, natural interaction patterns, decision variability.
Supported formats: NLH (No-Limit Hold’em), PLO4, PLO5, PLO6, and OFC (Open Face Chinese Poker). Broad format support is one of the key advantages — many competitors are limited to NLH only.
Two operating modes:
- Auto Mode — the bot plays entirely on its own. Set the parameters (stakes, tables, buy-in, stop-loss, timings) and launch. Ideal for scaling.
- Manual Mode — the bot provides hints, you make the final decisions. Ideal for learning, controlling play, and warming up new accounts.
More details in the article “What Is a Poker Bot and Why It Matters in 2026”
Key takeaways
- Poker bots differ across four dimensions — how they read the game, interact with the app, compute decisions, and choose strategy. Evaluating only one dimension gives an incomplete picture.
- Rule-based — simple, cheap, predictable. The technology of the past. Suitable for exploration, not for serious earning.
- Solver-based (GTO) — theoretically sound defense, but limited by storage constraints, coverage gaps, and inability to exploit weak players.
- AI / Neural networks — real adaptation through self-play. Includes CFR-based systems (Libratus, Pluribus) and neural network-based (DeepStack, ReBeL). No lookup tables — evaluates any game state dynamically.
- Hybrid — the current standard. GTO foundation + AI evaluation + exploitative adjustments = the best balance of defense and profit maximization.
- Exploitation is a strategic layer, not a bot type — it enhances any base approach but is vulnerable on its own.
- Detection is multidimensional: poker rooms analyze input patterns, screen interaction, network behavior, and — most importantly — playing patterns and decision statistics.
The bot type determines the ceiling of your results. A rule-based bot won’t become more profitable from profile tweaking. A hybrid AI bot will keep learning and adapting.
Next step
Want to see how a hybrid AI bot works in practice?
Try PokerBotAI for free — message @PokerBotAI_ShopBot on Telegram and request a trial.
Read on: “Bot vs RTA vs Solver vs Trainer” — understanding the full landscape of poker software
Understand the math: “EV and Equity: Why the Bot Doesn’t Care About Luck”