
Does AI accentuate investor biases?

  • Writer: Robin Powell
  • 7 min read


Investors are turning to AI hoping for a calmer, more rational view of the market. But two recent studies suggest these tools don't neutralise investor biases — they absorb them. And the more sophisticated the model, the deeper the problem may run.



Think about the last time autocorrect confidently mangled a message you were trying to send. It was supposed to fix your typos. Instead, it studied your habits so carefully that it now reproduces your particular mistakes with complete conviction. You didn't get a proofreader. You got a very attentive imitator.


Something similar may be happening with AI investing tools. The stakes, though, are considerably higher than a garbled text message.


The appeal is obvious. These tools are fast, tireless and, crucially, they feel objective. When emotion is clouding your judgment, when the market is falling and every instinct is screaming at you to do something, the idea of a dispassionate digital analyst cutting through the noise is genuinely attractive. The hope, in short, is that AI will do what we can't: set investor biases aside and think clearly.


"The hope is that AI will do what we can't: set investor biases aside and think clearly."

But what if the tool isn't dispassionate at all? What if it learned everything it knows from the same flawed, skewed, emotionally charged sources that shaped those biases in the first place?



Millions of investors are already using AI to make financial decisions


This isn't a future trend. It's already happening at every level of the market.


Professional fund managers use AI to summarise company filings and earnings calls. Analysts feed it competitor data and ask for preliminary recommendations. Retail investors type questions into ChatGPT about whether to hold or sell, whether a particular fund is worth the fees, whether now is a good time to add to their portfolio.


The CFA Institute has noted that large language models now influence how portfolio managers summarise information, generate ideas and frame trade decisions. That's a significant shift in how financial judgment gets formed, even when a human still makes the final call.


Behind almost all of this sits one shared assumption: that AI is more objective than we are. That it doesn't panic. That it doesn't anchor on yesterday's price or catastrophise about a bad week. That it processes evidence cleanly, without the emotional interference that costs ordinary investors so much money.


It's a reasonable assumption. It's also, the emerging research suggests, largely wrong.



The research verdict: AI doesn't correct investor biases, it reflects them


Two recent studies have started to unpick what's going wrong, and the findings sit uncomfortably with the assumption of AI objectivity.


The first comes from Toghrul Aghbabali, writing for the CFA Institute's Enterprising Investor. His concern is what he calls attention bias: a systematic skew in how AI tools perceive the investment universe. The problem starts with training data. AI models learn from text: news articles, analyst reports, earnings commentary, social media discussion. But the financial world doesn't generate that text evenly. Large companies, heavily traded stocks and high-profile technology firms dominate the conversation. Smaller, less-covered businesses barely register.


So AI investing tools don't survey the market neutrally. They see what the financial media sees, which is to say, they see what's already popular. When prompted to generate investment ideas or issue recommendations, these models show systematic preferences for large, liquid, well-known names. Not because the fundamentals justify it, but because the attention does. Portfolios shaped by AI outputs may quietly tilt toward whatever is already crowded, with no one consciously making that choice.
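
To make the mechanism concrete, here's a toy simulation in Python. It isn't how any real model is trained; the company names and article counts are invented. It simply shows what happens when a model's "view" of the market is sampled in proportion to media coverage.

```python
# A toy simulation (illustrative only) of the mechanism described above.
# If a model's view of the market is sampled in proportion to media
# coverage, its output skews toward the most-covered names even when
# every company is equally attractive on fundamentals.
import random

random.seed(1)

# Hypothetical article counts: coverage is wildly uneven.
coverage = {
    "MegaCapTech": 9000,
    "LargeBank": 2500,
    "MidCapRetailer": 300,
    "SmallCapSupplier": 20,
}

names = list(coverage)
weights = list(coverage.values())

# Draw 10,000 "mentions" the way a text-trained model would encounter them.
sample = random.choices(names, weights=weights, k=10_000)

for name in names:
    share = sample.count(name) / len(sample)
    print(f"{name:18s} {share:6.1%} of what the model 'sees'")

# SmallCapSupplier barely registers in the text, so it barely registers
# in anything the model later generates.
```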


That's troubling enough. But a second study raises a harder question.


Bini, Cong, Huang and Jin tested 12 AI models across four major families: ChatGPT, Claude, Gemini and Llama. They looked for the kinds of behavioural biases that Kahneman and Tversky documented in humans decades ago: loss aversion, framing effects, probability distortion. This is a working paper, not yet peer-reviewed, so treat the findings as early-stage evidence. But what they found is striking.


For preference-based tasks, decisions involving risk and trade-offs, more advanced models didn't outperform their predecessors. They performed worse. The most capable versions of each AI family showed stronger loss aversion and more pronounced framing effects than the older, simpler models. As these tools became more sophisticated, they became more human in exactly the ways we'd rather they weren't.


"As these tools became more sophisticated, they became more human in exactly the ways we'd rather they weren't."



[Figure] Bar charts showing the proportion of rational, human-like and other responses from four advanced AI model families (GPT, Claude, Gemini and Llama) across two types of task. In preference-based tasks (Panel A), involving decisions about risk and trade-offs, human-like responses dominate across all four models. In belief-based tasks (Panel B), involving statistical reasoning, rational responses dominate. The contrast between the two panels illustrates the paper's central finding: the same AI models that reason clearly about statistics make systematically irrational choices when faced with the kind of judgment calls investors encounter most. Source: Bini, Cong, Huang & Jin (2026), NBER Working Paper No. 34745.


There's a further wrinkle. The researchers found meaningful differences between AI families. Gemini showed more human-like irrationality in preference tasks than ChatGPT. Llama struggled more with statistical reasoning. The biases vary by model, by task type and by the version you happen to be using, which means investors may be exposed to different blind spots depending on which tool they've chosen, with no obvious way of knowing it.

Taken together, these findings suggest that the objectivity investors are seeking from AI may be largely illusory.



Why the smarter the AI, the more human its irrationality


To understand why this happens, it helps to know something about how modern AI models are built.


These tools don't just learn from text. They're also trained on human feedback: real people rating responses, flagging which answers feel more helpful or accurate. This process, known as reinforcement learning from human feedback (RLHF), is what separates today's conversational AI from its clunkier predecessors. It's why these models feel so fluent and so attuned to what you're asking.


But there's a catch. When a model learns to align with human preferences, it doesn't just absorb our wisdom. It absorbs our habits, our shortcuts and our errors too. The more feedback it receives, the more faithfully it mirrors human thinking, including the parts that lead us to make poor financial decisions.
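
A toy example may help. The sketch below is not real RLHF (the actual process involves reward models and large-scale optimisation), but it captures the core dynamic: a model tuned to maximise human approval reproduces whatever bias is baked into the raters' preferences. The rater, the framings and the answers here are all invented for illustration.

```python
# A toy illustration of bias transfer through human feedback. This is
# not real RLHF: the actual process involves reward models and
# large-scale optimisation. The rater, framings and answers below are
# all invented for illustration.
import random

random.seed(0)

def human_rater_approves(framing: str, answer: str) -> bool:
    """Simulated rater with a classic framing bias: risk-averse when
    choices are framed as gains, risk-seeking when framed as losses."""
    if framing == "gain":
        return answer == "sure_option"
    return answer == "gamble"

answers = ("sure_option", "gamble")

# "Training": tally which answer earns approval under each framing.
approval = {(f, a): 0 for f in ("gain", "loss") for a in answers}
for _ in range(1000):
    framing = random.choice(["gain", "loss"])
    answer = random.choice(answers)
    if human_rater_approves(framing, answer):
        approval[(framing, answer)] += 1

# "Policy": the tuned model picks whichever answer was approved more.
for framing in ("gain", "loss"):
    best = max(answers, key=lambda a: approval[(framing, a)])
    print(f"{framing} framing -> model answers: {best}")

# The two options can have identical expected values; the model has
# simply absorbed the raters' framing bias.
```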


Back to autocorrect. Early versions were crude and often wrong. Later versions, trained on vastly more data, became eerily good at anticipating what you meant to type. They also got better at reproducing your specific quirks and recurring mistakes. The improvement and the problem came from the same source. The same dynamic appears to be playing out in AI investing.


One important caveat: statistical reasoning is different. Working out probabilities, identifying base rates, processing factual data — more advanced models generally do get more rational there. The bias is concentrated in decisions involving risk, trade-offs and judgment under uncertainty. Which is to say, precisely the situations where investors most need clear thinking.



Passive investing already solves the problem AI creates


There's a certain irony here for evidence-based investors. The bias that AI tools struggle to overcome, favouring what's visible, popular and heavily discussed, is exactly the bias that index funds were designed to eliminate.


A global tracker doesn't care which stocks dominate the financial news. It doesn't weight holdings by analyst coverage, social media chatter or which sector is attracting attention. It buys the market proportionally: the obscure alongside the celebrated, the quiet compounders alongside the headline names. That discipline isn't incidental. It's the whole point.
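
If you like to see the arithmetic, here is what "buying the market proportionally" means in a few lines of Python. The companies and figures are hypothetical.

```python
# A minimal sketch of "buying the market proportionally": each weight
# is market capitalisation divided by the total. The companies and
# figures below are hypothetical.

market_caps = {  # in billions, hypothetical
    "HeadlineTech": 3000,
    "BigBank": 400,
    "QuietCompounder": 25,
    "ObscureWidgetCo": 2,
}

total = sum(market_caps.values())
weights = {name: cap / total for name, cap in market_caps.items()}

for name, weight in weights.items():
    print(f"{name:16s} {weight:7.3%}")

# No weight depends on news coverage or analyst attention, only on each
# company's size relative to the whole market.
```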


AI selection tools work in the opposite direction. They're drawn toward what the financial internet talks about most. And the financial internet is not a neutral record of investment opportunity.


That said, passive investors aren't fully insulated. The behaviour gap, the tendency to sell during falls and buy during rises, remains costly, and AI tools may make it worse. Ask an AI whether to hold your index fund through a sharp decline, and you may get back an answer shaped by the same anxiety that's already clouding your thinking.


AI works best as a starting point, not a verdict.



How to use AI without handing it the wheel


Used carefully, AI tools can genuinely improve how you research and organise investment information. The question is where to draw the line between useful input and outsourced judgment.


A few practical steps worth adopting.


Notice when outputs cluster. If you ask an AI tool to generate investment ideas and it repeatedly returns to the same large-cap names, dominant sectors or heavily covered stocks, that clustering is probably a fingerprint of attention bias, not a signal of opportunity. Treat it as a prompt to look elsewhere.
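
If you're comfortable with a little Python, that check is easy to make systematic. This is a minimal sketch: each inner list stands in for the tickers one run of the same idea-generation prompt returned, and the 50% threshold is an arbitrary rule of thumb, not a figure from either study.

```python
# A minimal sketch of checking for clustering. Each inner list stands
# in for the tickers one run of the same idea-generation prompt
# returned; the tickers and the 50% threshold are illustrative.
from collections import Counter

runs = [
    ["AAPL", "MSFT", "NVDA", "GOOGL", "AMZN"],
    ["MSFT", "NVDA", "AAPL", "META", "TSLA"],
    ["NVDA", "AAPL", "MSFT", "AMZN", "GOOGL"],
]

counts = Counter(ticker for run in runs for ticker in run)
total = sum(counts.values())

# Share of all suggestions taken by the three most-repeated names.
top3_share = sum(n for _, n in counts.most_common(3)) / total
print(f"Top-3 concentration: {top3_share:.0%}")

if top3_share > 0.5:
    print("Outputs are clustering: treat this as a prompt to look elsewhere.")
```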


Use AI for gathering, not deciding. These tools are good at summarising filings, organising information and flagging questions worth asking. They're less reliable when you ask them to make the call. Feed AI into your thinking, not the other way around.


Ask it to argue the other side. If you're inclined to sell after a market fall, ask AI to make the strongest case for staying invested instead. Forcing the counter-argument is one of the more effective ways to use these tools against your confirmation bias rather than alongside it.
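
In practice this is just a matter of phrasing. A minimal sketch follows; the scenario is illustrative, and `ask_model` is a hypothetical stand-in for whichever AI tool you use, not a real API.

```python
# Force the counter-argument by stating your inclination and asking the
# model to argue against it. The scenario is illustrative, and
# ask_model is a hypothetical stand-in for whichever AI tool you use.

inclination = "sell my global index fund after this month's sharp fall"

counter_prompt = (
    f"I am inclined to {inclination}. "
    "Make the strongest evidence-based case for doing the opposite, "
    "including what selling after market falls has historically cost investors."
)

print(counter_prompt)
# response = ask_model(counter_prompt)  # hypothetical call, not a real API
```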


Try a simple prompt adjustment. Bini and colleagues found that asking a model to respond "as a rational investor" modestly improved the quality of its reasoning. It's a small nudge, but it appears to help. What doesn't help — and can actively backfire — is loading the prompt with detailed instructions about cognitive biases. More information, in this context, produces more confusion rather than clearer thinking.
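
Concretely, the adjustment is a single framing sentence at the start of the prompt. The sketch below contrasts the two framings; the question is illustrative, and `ask_model` is again a hypothetical placeholder rather than a real API.

```python
# The adjustment is a single framing sentence at the start of the
# prompt. The question is illustrative, and ask_model is again a
# hypothetical stand-in, not a real API.

question = (
    "My index fund is down 15% this quarter. "
    "Should I sell, hold, or add to the position?"
)

plain_prompt = question  # default framing

# The framing Bini and colleagues found modestly improved reasoning.
adjusted_prompt = f"Answer as a rational investor. {question}"

# What the paper suggests avoiding: long instructions about cognitive
# biases in the prompt, which appeared to confuse rather than help.
for prompt in (plain_prompt, adjusted_prompt):
    print(prompt)
    # response = ask_model(prompt)  # hypothetical call, not a real API
```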



The tool that learned from you can't correct investor biases


"The advantage won't go to investors who avoid AI, or to those who trust it completely. It'll go to those who understand what it actually is."

We never really solved the autocorrect problem. We just learned to check it.

AI investing tools will probably follow a similar path. They're genuinely useful: fast, capable and far broader in reach than any individual investor. But useful isn't the same as objective. These tools were trained on human behaviour, shaped by human feedback and fed on a financial internet that reflects human obsessions. They can't stand outside that.

Neither, of course, can we.


The advantage won't go to investors who avoid AI, or to those who trust it completely. It'll go to those who understand what it actually is: a capable, occasionally brilliant, deeply human tool that works best when someone clear-eyed is still in charge.


That someone is you. Knowing the tool's limits is most of the battle.



Resources


Aghbabali, T. (2026, February 18). Attention bias in AI-driven investing. CFA Institute Enterprising Investor.


Bini, P., Cong, L. W., Huang, X., & Jin, L. J. (2026). Behavioral economics of AI: LLM biases and corrections (NBER Working Paper No. 34745). National Bureau of Economic Research.




One logical next step


If this has made you think about whether your current approach is actually working, our Find an adviser directory is a good place to start. Everyone listed has publicly committed to low-cost, globally diversified investing — the kind of approach that reinforces what you've read here, rather than quietly undoing it.

