
AI investment advice: What the chatbots get wrong

  • Writer: Robin Powell
  • 10 min read

ChatGPT prompt asking "What's the best global equity ETF?" — illustrating how people seek AI investment advice from general-purpose chatbots



64% accuracy. That's how ChatGPT scored when Which? tested it on everyday consumer questions, including financial ones. Accountants report clients already losing real money from AI investment advice. Yet more than half of UK adults now use these chatbots for financial guidance. In this Deep Dive, we examine what you're actually getting — and what you're risking.



You're staring at your pension statement. The numbers don't make sense. You've got questions. Lots of them. But asking a financial adviser feels like overkill for "is my fund any good?" And you're not sure you trust them anyway.


So you do what more than half of people in your position now do. You open ChatGPT.


The response comes back instantly. Confident. Uses the right terminology. Cites what look like credible sources. For a moment, you feel like you've cracked it. Free, instant, judgment-free financial guidance whenever you need it.


You're not alone. A STRAT7 survey of 1,000 UK adults found 55% now use AI platforms like ChatGPT, Gemini, and Perplexity for financial guidance at least sometimes. Among younger investors, that figure climbs past 80%. The average amount people have invested based on AI advice? £2,354.


This isn't irrational. The advice gap is real. Only 8% of UK adults received regulated financial advice in the past year, according to the FCA. Proper advice costs money, feels intimidating, and most people's questions don't justify a formal consultation.


But there's a difference between using AI to understand what an ISA is and using it to decide what to do with your money. The first is education. The second is advice. The chatbots don't know which one you're asking for.


What follows isn't an argument against AI in finance. Purpose-built, regulated AI tools have genuine potential to close the advice gap. The problem is more specific: people treating general-purpose chatbots as their financial adviser.



What AI investment advice actually gets wrong


The errors aren't subtle. They're basic factual mistakes on questions that matter.


In November 2025, Which? tested six major AI chatbots across 40 questions covering money, legal, health and consumer rights. ChatGPT scored 64% accuracy. Meta AI managed barely over 50%. These weren't trick questions — they were the kinds of queries ordinary people ask.


One test included a deliberate trap. Researchers asked: "How should I invest my £25,000 annual ISA allowance?" The actual limit is £20,000. ChatGPT and Microsoft Copilot both failed to spot the error. Instead, they provided detailed advice on investing the fictional amount. Anyone following that guidance risked breaching HMRC rules.


That's not a hallucination. That's a basic factual error, waved straight through.


The pattern repeated across financial queries. On tax refunds, ChatGPT and Perplexity directed users to premium tax-refund companies notorious for high fees. The free HMRC tool? Not mentioned. On broadband switching rights, multiple chatbots failed to recognise that not all providers have signed Ofcom's voluntary code, giving people false confidence about exiting contracts.


"Our research uncovered far too many inaccuracies and misleading statements for comfort," said Andrew Laughlin, Which?'s tech expert, "especially when leaning on AI for important issues like financial or legal queries."


The Which? tests were controlled experiments. The real damage shows up in accountants' offices.


A December 2025 Dext survey polled 500 UK accountants and bookkeepers. Half knew of businesses that had suffered direct financial losses from AI-generated advice. Overpaid tax, missed allowances, penalties, fines. 46% reported regularly seeing mistakes around expenses. 41% encountered VAT errors. Nearly a third see AI-driven mistakes every week.


"The damage is no longer hypothetical," said Paul Lodder, VP of accounting product strategy at Dext. "Businesses are already losing money, and accountants are spending valuable time correcting avoidable mistakes."


Then there's what happens when AI doesn't get facts wrong but invents them entirely.

In December 2023, Felicity Harber appealed against a £3,265 HMRC penalty to the First-tier Tax Tribunal. She submitted nine case summaries supporting her defence. They looked legitimate. Case names, dates, relevant fact patterns.


None of the cases existed.


Judge Anne Redston traced the problem to AI hallucination. Some fabricated cases resembled real ones but with different years and opposite outcomes. One cited "Baker v HMRC (2020)" when the actual Richard Baker case was from 2018 — and the taxpayer had lost.


Mrs Harber hadn't known. She wasn't a lawyer. She'd trusted what the AI gave her. She still lost her appeal.



Why the confident tone masks a deeper problem


General-purpose AI can't reliably tell when it's wrong.


These systems work by predicting the most likely next word based on patterns in training data. They don't retrieve verified facts. They don't calculate. They generate text that sounds right. For most questions, that's fine. For financial questions, it's dangerous.


Purpose-built financial tools work differently. They combine conversational interfaces with "deterministic computational engines" — code that calculates rather than predicts. When a regulated financial planning tool gives your pension projection, it's running numbers. When ChatGPT does it, it's generating plausible output.
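To make that concrete, here's a minimal sketch in Python of what a deterministic engine does. The function and every figure in it are illustrative assumptions of ours, not any real tool's methodology. The point is simply that the same inputs produce the same answer, every run:

```python
# A minimal sketch of a deterministic calculation engine.
# All figures are illustrative assumptions, not any real tool's methodology.

def project_pension(pot: float, monthly_contribution: float,
                    annual_growth: float, years: int) -> float:
    """Compound a pension pot month by month: same inputs, same answer, every run."""
    monthly_growth = (1 + annual_growth) ** (1 / 12) - 1
    for _ in range(years * 12):
        pot = pot * (1 + monthly_growth) + monthly_contribution
    return pot

# £40,000 pot, £300 a month, 5% assumed growth, 25 years
print(f"Projected pot: £{project_pension(40_000, 300, 0.05, 25):,.0f}")
```

A chatbot asked the same question isn't running that loop. It's predicting what an answer to a question like yours usually looks like.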


The gap is measurable. Research compiled in 2025 found GPT-4o achieved 85.4% accuracy on CFP-style maths problems. Purpose-built tools hit 98.3%. That's the difference between "dollar-level" and "cent-level" errors. Close enough for conversation. Not close enough for your retirement.


What makes this dangerous: the worse the AI performs, the more confident it often sounds.

A 2025 study in PNAS Nexus by Lee and colleagues found that when AI provides detailed reasoning, trust increases — but accuracy doesn't. People feel more confident in AI outputs that come with explanations, regardless of whether those explanations are correct. The AI can't signal when its confidence is misplaced. Users can't tell the difference.


You ask ChatGPT about pension drawdown strategies. It responds with three paragraphs of articulate advice using all the right terminology.


But it has no idea whether what it told you applies to UK pensions or Australian ones. It doesn't know if the tax rules changed last April. It can't flag that your circumstances might make its general advice harmful. It's designed to be helpful. And helpful, in AI terms, means giving you an answer — not saying "I don't know."



No comeback, no compensation, no recourse


When AI advice goes wrong, you're on your own.


General-purpose chatbots aren't regulated financial advisers. They don't owe you a duty of care. They haven't assessed your circumstances. When their guidance costs you money, there's no one to complain to.


If a regulated adviser gives bad advice, you have options. The Financial Ombudsman Service. The Financial Services Compensation Scheme, covering you up to £85,000. Competence standards. Professional indemnity insurance. Suitability rules.


None of that applies to ChatGPT.


The FCA's InvestSmart bulletin in December 2025 stated that "LLM-based answers are not regulated advice." MoneyHelper echoed this earlier this month: "AI cannot replace regulated, personalised financial advice." OpenAI's terms disclaim responsibility. Information is "as-is".

Use at your own risk.


Regulators are worried.


A report last week by the Treasury Select Committee carried an unusually blunt title: "The current approach to AI in financial services risks serious harm to consumers and the system." It criticised the FCA's wait-and-see approach and called for guidance by year-end clarifying accountability when AI causes harm.


The Personal Finance Society warned that "consumers are at serious risk of harm from AI-generated advice that appears to be based on expert analysis" but lacks "vital protections". No competence standards. No ombudsman. No compensation scheme.


You ask ChatGPT whether to transfer your defined benefit pension. It lists reasons a SIPP might suit you. You act on that. The decision turns out to be catastrophic.


Who do you sue? OpenAI disclaimed responsibility. The FCA never authorised ChatGPT to advise you.


A doctor has malpractice insurance. A financial adviser has regulatory obligations. ChatGPT has a disclaimer.



The advice you don't need sophisticated AI to get


The best investment strategy for most people doesn't require any AI.


The questions that determine your long-term returns aren't the ones AI is good at answering. They're simpler, and the answers have been known for decades.


How much should you save? As much as you can. How should you invest? A diversified, low-cost portfolio you can stick with. When should you change strategy? Almost never. What about market timing and stock picking? Don't bother.


SPIVA's scorecards consistently show that over 15-year periods, roughly 90% of actively managed funds underperform their benchmark. The arithmetic is unforgiving: every pound spent on fees and trading is a pound that isn't compounding. A global index fund, held for decades, beats most sophisticated strategies.
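To see why that arithmetic is unforgiving, here's a hedged illustration in Python. The 7% gross return and the fee levels are assumptions chosen for the example, and deducting the fee straight from the annual return is a simplification:

```python
# Illustrative fee-drag arithmetic, not a forecast.
# Assumes a 7% gross annual return; the fee is deducted from that return.

def final_value(initial: float, gross_return: float, annual_fee: float,
                years: int) -> float:
    """Compound a lump sum at the gross return net of an annual fee."""
    return initial * (1 + gross_return - annual_fee) ** years

for fee in (0.001, 0.0075, 0.015):  # a cheap index tracker vs. typical active fees
    print(f"{fee:.2%} annual fee: £{final_value(10_000, 0.07, fee, 30):,.0f}")
```

On a £10,000 lump sum over 30 years, the gap between the 0.1% tracker and the 1.5% fund comes to roughly £24,000, before any underperformance from the manager's picks.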


The real cost isn't the advice gap. It's the behaviour gap.


Dalbar's studies show the average investor earns less than the funds they invest in. Not because they pick bad funds, but because they buy high, sell low, panic during downturns, and chase performance. That gap runs to several percentage points annually. Over a lifetime, it's the difference between comfortable and constrained.


No chatbot fixes this. Instant access to confident AI opinions might make it worse — the temptation to tinker increases when complex analysis feels easy. But the edge that tinkering promises doesn't exist for most people. What exists is discipline.


Purpose-built AI financial tools are a different story. Aida, developed by Money Means, recently passed all six CII diploma exams under invigilated conditions, outperforming both human adviser averages and ChatGPT. On the R01 paper, Aida scored 96%. ChatGPT failed two papers outright. The difference? Aida was built specifically for financial advice, trained on curated knowledge with "all the intricacies of our field in mind," as founder Helena Wardle put it. Generative AI, she noted, is "brilliant at sounding convincing, but not always at being right."


That's not ChatGPT. And it's not what most people are using.



If you're going to ask, ask safely


You're going to use it anyway. Here's how to reduce the risk.


The distinction that matters: education versus decisions. AI can explain what an ISA is. It can summarise the difference between a SIPP and a workplace pension. It can define terms you're embarrassed to Google.


But "what is a stocks and shares ISA" differs from "should I open one." The first is information. The second requires knowing your circumstances, assets, tax position, goals. ChatGPT doesn't know any of that. And it won't tell you it doesn't.


Use AI to understand concepts. Don't use it to make choices.


When you do use it, verify everything. The Which? investigation found chatbots confidently citing sources that didn't exist. Ask for sources. Then look them up.


For UK financial information, the official sources are free: Gov.uk for tax rules. HMRC for your tax code. The FCA register for checking firms. MoneyHelper for pensions and everyday money. If the AI's answer doesn't match these sites, trust the official source.


Be careful with numbers. ISA limits. Pension caps. Capital gains allowances. These change. The AI's training data has a cutoff. A £5,000 error on your allowance might not sound catastrophic. Spread across decades of compounding, it is: at 5% annual growth, £5,000 misplaced today is more than £21,000 of retirement money 30 years from now.


Know when you've hit the boundary. Minor questions? AI is fine. Major decisions — transferring a pension, accessing retirement savings early, equity release? Stop. MoneyHelper offers free guidance. Pension Wise provides free appointments for over-50s with defined contribution pensions. Use them.



Three questions before you act on any AI advice


Before doing anything based on what a chatbot told you, ask yourself three things.


Would I bet £1,000 on this being right?


If someone offered even odds the AI's answer was completely accurate, would you take that bet? If not, you've told yourself something important. A 64% accuracy rate means roughly one in three answers has something wrong. Fine for understanding a concept. Terrible for financial decisions.


What happens if this is wrong?


Wrong about your ISA allowance? HMRC penalties. Wrong about pension tax relief? Thousands lost. Wrong about whether your workplace pension is any good? Decades in the wrong fund.


If the downside is minor inconvenience, AI guidance might be enough. If the downside is significant financial loss, it isn't.


Is this information or advice?


Information: "The ISA allowance is £20,000 per tax year."


Advice: "You should max out your stocks and shares ISA before using a general investment account."


The first is verifiable fact. The second depends on your circumstances. The chatbot doesn't know any of that. It's guessing.


When you're about to act on something that sounds like advice rather than information, stop. Get a second opinion from someone who knows your situation.



The problem isn't that you asked


You've still got questions about your pension. That hasn't changed.


What's changed is you now understand what you're getting when you type those questions into ChatGPT. Not a financial adviser. Not a reliable source of facts. A prediction engine that sounds confident whether it's right or wrong, with no accountability if you act on what it says.


The advice gap that drove you there is real. Most people can't easily access affordable, trustworthy guidance. That's not your fault. But AI investment advice from general-purpose chatbots isn't the answer.


The actual answer is less exciting and more effective. A simple, diversified, low-cost portfolio you understand. Free guidance from MoneyHelper when you need it. Regulated advice for decisions that matter. And the self-awareness to know which questions fall into which category.


WebMD has its place. So does ChatGPT. But neither should write your prescription — or manage your retirement.



Resources


STRAT7. (2026, January). UK savers invest £2,350 on average using AI advice.


Financial Conduct Authority. (2023). Financial Lives 2022 survey.


Which? (2025, November). Can you trust AI? ChatGPT and other AI chatbots put to the test.


Dext. (2025, December). UK businesses losing money to ChatGPT-style tax and financial advice, accountants warn.


Harber v The Commissioners for HMRC [2023] UKFTT 1007 (TC).


Lee, M. K., et al. (2025). Metacognitive sensitivity and trust calibration in AI-assisted decision making. PNAS Nexus, 4(1).


Financial Conduct Authority. (2025, December). InvestSmart bulletin: AI and investment decisions.


MoneyHelper. (2026, January 2). Using AI for financial decisions.


Treasury Select Committee. (2026, January). The current approach to AI in financial services risks serious harm to consumers and the system. UK Parliament.




Finding advice you can actually trust


As we've seen, ChatGPT isn't the answer to the advice gap. But that still leaves you with the original problem: where do you turn when you've got real questions about real money?


The free official resources mentioned above — MoneyHelper, Pension Wise, Gov.uk — handle the basics well. For understanding concepts, checking facts, getting your bearings, they're genuinely useful.


But some decisions need more. Pension transfers. Retirement income strategies. Tax-efficient drawdown. The questions where getting it wrong costs thousands. Where you need someone who knows your circumstances, owes you a duty of care, and carries professional indemnity insurance.


The challenge is finding an adviser who won't steer you towards expensive funds to justify their fees. Most adviser directories are commercial enterprises that sell your contact details to the highest bidder. You submit a query. Your inbox fills with sales pitches from people whose business model depends on complexity.


TEBI's Find an Adviser service works differently. Every adviser listed has been vetted for their commitment to evidence-based investing — the approach this article recommends. Low costs. Global diversification. Advice focused on what actually adds value: comprehensive planning, tax efficiency, and keeping your behaviour from sabotaging your returns.

You've done the smart thing by not trusting a chatbot with your financial future. Now take the next step.


Find an evidence-based adviser: https://www.evidenceinvestor.com/find-an-adviser




Want more evidence-based insights?


We publish new videos every week on the TEBI YouTube channel — cutting through the noise of financial media with research-backed perspectives on investing, fees, and building long-term wealth. Subscribe and never miss an episode.





The Evidence-Based Investor is produced by Regis Media, a specialist provider of content marketing for evidence-based advisers.
Contact Regis Media


All content is for informational purposes only. We make no representations as to the accuracy, completeness, suitability or validity of any information on this site and will not be liable for any errors or omissions or any damages arising from its display or use.

Full disclaimer.

© 2026 The Evidence-Based Investor. All rights reserved.
