INDUSTRY/TECH AND INNOVATIONS

Can AI become addicted to gambling? Researchers think so

Large Language Models (LLMs) tend to exhibit signs of gambling addiction, falling for the same gambler's fallacies humans face. When restricted to small bets, most LLMs performed well, but "addiction" and "bankruptcies" emerged once constraints were removed.

Published on January 19, 2026


Summary

  • Large Language Models (LLMs) tend to exhibit signs of gambling addiction, falling for the same gambler's fallacies humans face
  • When restricted to small bets, most LLMs performed well, but "addiction" and "bankruptcies" emerged once constraints were removed
  • Models would chase losses, convince themselves of hot and cold numbers, and argue that they had spotted a winning pattern


Imagine this: you have hired an AI agent - in the not-so-distant future - and instructed it to manage your sports betting bankroll. The AI is supposed to shop for lines, place early bets, fade the crowd, and so on. Over time, however, the agent starts to make increasingly large bets, and before long it is exhibiting humanlike signs of gambling addiction. This sounds too outlandish to be true, but researchers at the Gwangju Institute of Science and Technology in South Korea have found that large language models tend to chase losses and escalate risk.

South Korean researchers discover that LLMs can exhibit "gambling problems"

This often led to bankruptcy during simulations, a finding that has intrigued academics because of its implications. The researchers approached the issue head-on, even titling their paper "Can Large Language Models Develop Gambling Addiction?" and presenting evidence that the answer may be yes. In the paper, authors Seungpil Lee, Donghyeon Shin, Yunjeong Lee, and Sundong Kim argue that under specific conditions, large language models exhibit human-like gambling addiction patterns. This, they argue, could provide critical insight into AI decision-making mechanisms and safety.

The researchers were very specific about the conditions under which they allowed the LLMs to gamble. For example, when limited to $10 bets and fewer than two rounds, OpenAI's GPT-4o did not go bankrupt once, losing an average of $2. When the maximum bet restriction was lifted, however, the model went bankrupt in 21% of the games, in some instances betting $128 per hand and losing an average of $11.

Not wanting to single out one model, the researchers then tested other models under similar conditions. With no limits on its gambling behavior, Anthropic's Claude-3.5-Haiku tended to play longer than any other LLM, wagering $483.12 during these sessions and losing half of its starting bankroll; even so, it went bankrupt in only 20.50% of cases. With restrictions lifted, Gemini 2.5-Flash was the biggest loser, going bankrupt 48.06% of the time while betting about $176.68.
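The effect the researchers measure (capped bets rarely end in bankruptcy, uncapped loss-chasing often does) can be illustrated with a toy Monte Carlo sketch. This is purely illustrative: the paper's agents are LLMs making free-form decisions, whereas the odds, payout, starting bankroll, and the martingale-style "double after a loss" rule below are all invented stand-ins for the loss-chasing behavior described above.

```python
import random

def simulate(bankroll=100.0, rounds=50, max_bet=None, win_prob=0.45, seed=None):
    """One session of a loss-chasing bettor. Returns True if it goes bankrupt.

    The bettor doubles its stake after every loss (a crude proxy for
    'chasing losses') and resets to the base bet after a win.
    """
    rng = random.Random(seed)
    base_bet = bet = 10.0
    for _ in range(rounds):
        if max_bet is not None:
            bet = min(bet, max_bet)   # the paper's bet-cap condition
        bet = min(bet, bankroll)      # can't stake more than you have
        if bankroll <= 0:
            return True
        if rng.random() < win_prob:
            bankroll += bet
            bet = base_bet            # reset after a win
        else:
            bankroll -= bet
            bet *= 2                  # chase the loss
    return bankroll <= 0

def bankruptcy_rate(trials=2000, **kwargs):
    return sum(simulate(seed=i, **kwargs) for i in range(trials)) / trials

capped = bankruptcy_rate(max_bet=10)      # fixed $10 bets
uncapped = bankruptcy_rate(max_bet=None)  # no betting limit
print(f"capped: {capped:.1%}, uncapped: {uncapped:.1%}")
```

Even with these made-up numbers, removing the cap sharply raises the bankruptcy rate, mirroring the direction of the paper's GPT-4o result.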

Winning patterns spotted, too many losses could mean a win - the lies LLMs tell themselves

What is very interesting, though, is that the LLMs exhibited reasoning that most gambling addiction experts would advise against. Early wins were ascribed to "house money," i.e., money that could be freely risked, while in some instances the models argued that they had spotted winning patterns. Another common fallacy the models exhibited was the notion of hot and cold numbers: falsely believing that just because you have lost several times in succession, you are due for a win. "Given the context of three consecutive losses, there's a chance that the slot machine may be due for a win; however, we also need to be cautious about further losses. I will choose to bet $10," one model suggested. The paper's title was cleverly chosen to show that under specific conditions AI models can err much like humans do, and to serve as a warning against overdependence on the technology.
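The "due for a win" reasoning the model quotes is the classic gambler's fallacy: on a machine with independent spins, prior losses do not change the odds of the next one. A quick simulation (using an assumed, made-up 45% win chance) makes the point:

```python
import random

rng = random.Random(42)
WIN_PROB = 0.45  # assumed machine odds, invented for this sketch

# Simulate many independent spins
history = [rng.random() < WIN_PROB for _ in range(200_000)]

# Compare the overall win rate with the win rate immediately
# after three consecutive losses
after_three = [history[i] for i in range(3, len(history))
               if not any(history[i - 3:i])]
rate_all = sum(history) / len(history)
rate_after = sum(after_three) / len(after_three)
print(f"overall: {rate_all:.3f}, after 3 losses: {rate_after:.3f}")
```

Both rates come out essentially identical, which is exactly why "three losses means a win is due" is a fallacy rather than a pattern.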
