Artificial intelligence is no longer a side project for gambling firms. It sets prices. It spots risk. It checks IDs. It even helps keep match-fixing out of sport. Some of this is quietly impressive. Some of it needs a stronger spine on ethics and explainability. As one sportsbook supplier put it, “the market-leading betting product of the future will be so complex that it must be automated” (Kambi’s Simon Noy).
Let me explain how that looks on the ground.
Bookmakers don’t just predict winners. They manage risk minute by minute. Today’s trading rooms run on data pipelines and machine learning that price thousands of micro-markets at once. Kambi says AI-driven pricing and trading now account for more than a third of operator revenue across its network in 2025, while models evaluate game state, stake size, and bettor behavior for each wager. In a public talk the firm also noted a surge in build-your-own bets and explained why automation is necessary to control exposure across those combinations.
You can hear the tone change in the quotes. “By making these decisions on an individual bet level, the system can be much more flexible… ultimately driving both profitability and the end user UX,” Noy said. That is the trader’s voice, but it hints at the consumer angle too. Prices can stay open rather than being suspended every time something unclear happens on the field.
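To make the "individual bet level" idea concrete, here is a minimal sketch of the kind of per-wager decision such a system might take. This is not Kambi's actual logic; the `Bet` fields, thresholds, and three-way outcome are all illustrative assumptions about how automated trading desks commonly triage stakes.

```python
from dataclasses import dataclass

@dataclass
class Bet:
    stake: float          # amount wagered
    offered_odds: float   # decimal odds shown to the bettor
    model_prob: float     # model's estimated probability of the outcome
    bettor_risk: float    # 0..1 score from behavioral/sharpness models

def decide(bet: Bet, max_auto_stake: float = 500.0) -> str:
    """Return 'accept', 'limit', or 'refer' for a single wager.

    Illustrative thresholds only; a production trading system would
    tune these per market and update them continuously.
    """
    fair_odds = 1.0 / bet.model_prob
    # Bookmaker margin on this price; negative means the bettor holds the edge.
    edge = (fair_odds - bet.offered_odds) / fair_odds
    if edge < 0:
        return "refer"    # price looks wrong: route to a human trader
    if bet.bettor_risk > 0.8 or bet.stake > max_auto_stake:
        return "limit"    # accept, but at a reduced stake
    return "accept"
```

The point of the sketch is the granularity: the decision is made per bet, not per market, which is what lets prices stay open while individual exposures are still controlled.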
On the data rights side of the market, Genius Sports launched GeniusIQ in 2024, billed as an AI platform that fuses tracking data, computer vision, and behavioral signals across the Premier League, NFL, NBA and more. Whatever you think about the branding, the point is scale: official data plus machine learning equals faster markets and more tailored experiences.
Integrity: AI as a watchdog against match-fixing
There’s a quieter part of the story that matters for sport itself. Sportradar’s Integrity Services monitored more than 850,000 matches across 70 sports in 2024 and flagged 1,108 suspicious matches, a decline from prior years. That figure is not just a headline; it shows how automated bet-monitoring systems and operator data sharing can actually move the needle.
Sportradar says its AI models power UFDS, the fraud-detection system used by over 170 partners, and that operator account-level data now underpins a majority of suspicious cases: in 2024, 55% of the suspicious matches identified drew on operator data, a record for collaboration. When people talk about “AI in betting,” this is one of the rare places where the technology’s public benefit is visible.
Genius Sports, meanwhile, tied its integrity work into a formal partnership with IBIA to share alerts globally. More data. Faster escalation. Fewer gaps between jurisdictions.
Safer gambling: early risk detection, with real-world interventions
This is where the industry tends to speak in careful tones. But there is evidence that algorithms can spot risk earlier than legacy rules. A peer-reviewed study in the Journal of Gambling Studies found that self-reported problem gambling can be predicted with high accuracy using account-based player data and machine learning. That doesn’t mean the model “knows” someone’s life; it means certain play patterns are statistically meaningful.
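The kind of account-based pattern the study points at can be sketched as a simple weighted score. The markers, weights, and thresholds below are hypothetical, loosely inspired by the behavioral features such studies report as predictive; a real system would learn its weights from labeled data rather than hand-set them.

```python
# Hypothetical behavioral markers, each normalized to 0..1.
RISK_WEIGHTS = {
    "deposit_frequency_increase": 0.30,  # deposits accelerating week over week
    "night_play_share": 0.20,            # fraction of play between 00:00-06:00
    "loss_chasing": 0.35,                # re-deposits shortly after losses
    "canceled_withdrawals": 0.15,        # withdrawals reversed back into play
}

def risk_score(signals: dict[str, float]) -> float:
    """Combine normalized behavioral signals into a single 0..1 score."""
    return sum(RISK_WEIGHTS[name] * signals.get(name, 0.0)
               for name in RISK_WEIGHTS)

def triage(score: float) -> str:
    """Map a score to an intervention tier (thresholds are illustrative)."""
    if score >= 0.7:
        return "human review"
    if score >= 0.4:
        return "automated nudge"
    return "monitor"
```

Note what the score is and isn’t: a statistical flag over play patterns, routed to a tiered response, not a diagnosis of the person behind the account.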
On the product side, Mindway AI describes its GameScanner as “fully-automated, early detection of at-risk and problem gambling,” and says domain experts help train and review the system. Claims on vendor pages are marketing, yes, but they also reflect what operators are buying at scale in 2025.
Large operators have built their own stacks, and they’re not shy about it. Entain’s ARC (Advanced Responsibility & Care) uses multiple AI models to assess risk and personalize interventions. “The programme is delivering unprecedented levels of change in protection,” said CEO Jette Nygaard-Andersen. Internal updates describe ARC identifying risks in behavior “so we can intervene before a problem develops.”
Do consumers see outcomes from all this? Sometimes, yes. The UK Gambling Commission has been evangelizing the GamProtect scheme, a cross-operator data-sharing tool for high-risk cases. In a 2024 keynote, the Commission noted that GamProtect “has already identified over 5,500 consumers.” That is a real number tied to tangible blocks and support, not just dashboards in a lab.
Kindred Group, which has long published a quarterly share of revenue from high-risk players, reported 2.7% in Q4 2024 and said 92.2% of detected customers showed improved behavior after interventions. The methodology invites debate, but sharing the KPI at all keeps pressure on results instead of assurances.
A fair caution: algorithms can nudge behavior in the other direction too. A 2025 open-access article reported that users receiving personalized bonuses or making early cash-out decisions tended to adjust stake sizes and frequency in systematic ways, raising ethical questions about reinforcement and autonomy. The point is balance. The same personalization engine that can send a reminder to slow down can also make it harder to do so if it’s tuned for conversion alone.
Identity, KYC, and payments: AI at the front door
If you’ve ever verified an account with a selfie and a driver’s license, you’ve met this layer. Providers such as Jumio use computer vision and ML to catch fake IDs and speed up onboarding. One recent case study from Kaizen Gaming reported that automated checks reduced IBAN verification to under 4.5 minutes and that 60% of transactions are now automated. That matters during peak sports calendars when manual queues overflow.
Jumio also says it serves many of the largest European operators, tying age checks into AML controls. It’s marketing copy, but the direction is clear: AI-assisted KYC has become table stakes in regulated markets.
There’s a consumer upside here too. Flutter, for example, has explored AI-powered automation to cut KYC response times from hours to minutes, as part of a broader Play Well program to put safer-play tools in more hands. The company reports rising tool usage and a growing investment line for safer gambling tech.
Brick-and-mortar casinos: cameras, alerts, and watchlists
Walk into a busy casino and you’ll see people, lights, and movement. Behind that scene are systems knitting together video, access control, and watchlists. Case studies show face recognition used to identify self-excluded or banned individuals in real time. Christchurch Casino, for instance, installed Cognitec’s FaceVACS to “detect banned, trespassed and other persons of interest as they enter the casino.” The aim is quick alerts for surveillance staff and fewer manual misses.
Vendors also describe unified platforms that bring previously siloed tools into one console so operators aren’t “toggling between different interfaces” during incidents. That tiny operational detail shows why AI is attractive here: fewer clicks, faster triage, better audit trails.
Are there risks? Of course. Face recognition invites hard questions on consent, bias, and storage. Some regulators tolerate it within strict programs. Others are circling the issue. The most honest line is simple: if the system blocks a self-excluded person before they lose more money, that’s a win; if it misidentifies a guest, that’s harm. Keep the bar high.
Personalization and the fine line between helpful and pushy
Recommender systems are everywhere. Sportsbooks surface markets you tend to browse. Casino lobbies shuffle tiles toward your favorite themes. The upside is relevance. The downside is heat. A 2025 study on algorithmic personalization in online gambling found that personalized bonuses and early cash-out features coincided with shifts in stake size and frequency, suggesting feedback loops that can push behavior. The authors call for more transparency and guardrails.
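One concrete guardrail pattern, sketched here with made-up thresholds, is to let the safer-play score veto the conversion model: above a risk cutoff, promotional content is suppressed no matter how well it would convert. Nothing here is any operator’s actual policy; it illustrates the shape of the tradeoff.

```python
def choose_message(risk: float, conversion_value: float) -> str:
    """Pick what the personalization engine sends next.

    Illustrative guardrail: above a risk threshold, promotional content
    is suppressed regardless of its predicted conversion value.
    """
    if risk >= 0.5:
        return "responsible-play check-in"   # safer-play nudge, no offer
    if conversion_value > 0.0:
        return "personalized offer"
    return "no message"
```

The design choice is the ordering: the risk check runs first, so an engine tuned for conversion alone cannot out-vote the safety signal.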
On the positive side, there’s evidence that well-timed, personalized feedback can reduce risky play. Earlier work in Frontiers in Psychology showed responsible-play messages tailored to the individual could change time-on-device and amounts wagered. This is the gentler version of personalization. It treats the player like a person, not a wallet.
Industry messaging has started to reflect this framing. Flutter’s Play Well and Entain’s ARC both stress early identification and human-centered interactions. Playtech’s BetBuddy group has published on making “black box” models more understandable to risk teams and regulators, plus feature-importance research that shows which behaviors consistently predict harm.
Regulation meets reality: the “black box” problem
Regulators now speak the language of data, and they’re asking for more of it. In 2024, the UK Gambling Commission’s CEO Andrew Rhodes emphasized the role of collaboration and data in modern oversight and highlighted the GamProtect system as an example of industry coordination that actually helps people. “GamProtect has already identified over 5,500 consumers,” he said.
At the same time, the Commission has warned that operators can’t lean on tools they don’t understand. Trade press summarizing recent compliance findings reported cases where AI models were poorly configured or not understood by the teams using them. That is a fixable problem: document how your model works, know its limits, and test rigorously. But it’s a real one, and the spotlight is getting brighter.
Legal advisors are telling clients the same thing. Bird & Bird’s 2024 note on Applications of AI in the Gambling Industry walks through expected checks in Britain, including new vulnerability checks tied to deposit levels. It reads like a reminder that AI lives inside a rulebook, not outside it.
A quick reality check on face recognition and real-world casinos
It is tempting to say cameras solve everything. They don’t. But they can help. When a casino installs face recognition to spot self-excluded or banned visitors, the success metric isn’t how shiny the model is. It’s how many harmful visits were prevented, how many false positives were corrected fast, and how secure the data is. The Christchurch case study is one data point: automated alerts at the door for “banned, trespassed and other persons of interest.” That’s precise language and a concrete use case.
Surveillance vendors also talk about consolidating tools into one screen. That is not hype. In a busy control room, seconds count. A unified console with intelligent search can mean the difference between “we saw that too late” and “we handled it.”
What comes next: practical steps that respect people
If you work in this space, you already know the tension: personalization grows revenue, but the same mechanics can amplify risk. The way out is not complicated to describe, but it is hard work.
- Explain your models in plain language. If your CS team cannot describe why an account was flagged, fix that first. The regulator will ask, and so will the customer.
- Prove harm reduction, not just detection. Publish intervention impact, the way Kindred and others have. Even if the numbers move slowly, the act of reporting builds trust.
- Close the loop between KYC and RG. If your onboarding sees deepfake IDs or bonus-abuse rings, feed that insight into safer-play models and promotions. Kaizen’s automation gains are a reminder that faster can still be careful.
- Share integrity data. Carefully. The integrity wins came from operators sharing signals. Keep that muscle strong and keep humans in the loop for hard calls.
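The first item on that list, explaining a flag in plain language, can be as simple as mapping the model’s top feature contributions to vetted sentences. The templates and feature names below are hypothetical; a real deployment would review the wording with compliance and customer-support teams.

```python
# Hypothetical plain-language templates for model features.
REASONS = {
    "loss_chasing": "repeated deposits shortly after losing sessions",
    "night_play_share": "a large share of play late at night",
    "deposit_frequency_increase": "deposits increasing week over week",
}

def explain_flag(contributions: dict[str, float], top_n: int = 2) -> str:
    """Turn per-feature model contributions into one sentence a support
    agent (or the customer) can actually read."""
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    top = [REASONS.get(name, name) for name, _ in ranked[:top_n]]
    return "This account was flagged mainly because of " + " and ".join(top) + "."
```

Even this crude version passes the test that matters: the CS team can answer "why was I flagged?" without reciting model internals.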
I know, lists can be preachy. But these four are practical and human. They also align with what regulators are actually saying on record.
One last question worth asking
What does “good AI” feel like to a person placing a bet or buying chips? It feels like clear limits that you chose. It feels like fast checks that do not make you repeat yourself. It feels like an alert that landed early, not late. It feels like odds that move for a reason you can follow. And when something goes wrong, it feels like a real person steps in.
The tech is clever. The standard should be kinder.
You know what? The conversation tends to swing between “AI will fix it” and “AI will ruin it.” Most people live in the middle. They want a fair game, clear rules, and help when they need it. The smartest gambling companies in 2025 are using AI to do exactly that, and they are leaving an evidence trail as they go. Keep the evidence coming. Keep the humans close. And keep the models humble enough to learn.