
Your fund family has outperformed? Yeah, right

  • Writer: Robin Powell
  • 6 min read


Every year, some active fund families respond to the SPIVA scorecard with their own numbers — and those numbers always look better. The reason isn't complicated: they choose the start dates, benchmarks, and fee treatments that flatter them most. Here's how to see through it, and the one test that exposes what fund family marketing routinely hides.




Remember handing your school report to your parents? Not the whole thing, obviously. Just the good pages. Art: excellent. PE: outstanding. The rest stayed in your bag.


The fund management industry has been doing something similar for more than two decades. Every year, S&P Dow Jones Indices publishes its SPIVA scorecard — a systematic comparison of active fund performance against passive benchmarks — and every year, some firms decide they'd rather show you their own version.


This year's most striking example came from Capital Group, one of the most respected names in active management. Days before the 2025 SPIVA results dropped, it issued a press release arguing that SPIVA misses what matters — and that its own funds tell a very different story.



How every fund family can make its numbers look impressive


Capital Group's press release led with striking figures. According to Robin Wigglesworth's analysis in FT Alphaville, the firm claimed that from inception to the end of 2025, 91% of its equity and multi-asset strategies had beaten their benchmarks. Fixed income: 93%.

Impressive. Until you notice the small print.


Those figures are gross of fees. Net of fees — what investors actually receive — the numbers fall to 84% for equity and 74% for fixed income. The gap between 93% and 74% isn't a rounding error.


Then there's the 'since inception' framing. Each fund's inception date is different. Some began in market conditions that suited the strategy. Some use benchmarks the fund was well-placed to beat.


These are the good pages of the school report. Everything shown is technically accurate — it's what's missing that matters.



The longer the time horizon, the worse it looks


SPIVA doesn't let fund families choose their pages. It hands over the full report.


The 2025 scorecard showed that 79% of active large-cap US equity funds underperformed the S&P 500 over the year. Bad enough. But the annual figure isn't the most telling number.



[Chart: percentage of active large-cap US equity funds that underperformed the S&P 500 in each calendar year from 2001 to 2025, ranging from a low of 45% in 2007 to a high of 87% in 2014, with the 2025 bar highlighted at 79% — the fourth-highest in the 25-year series. Source: S&P Dow Jones Indices LLC, CRSP. Data as of 31 December 2025]
In 2025, 79% of active large-cap US equity funds underperformed the S&P 500 — the fourth-worst result in 25 years of SPIVA data. The good years for active managers are the exception, not the rule.



Extend the time horizon, and the picture darkens. According to SPIVA's Year-End 2025 data, 85% of large-cap active funds underperformed over ten years. At 15 years: 90%. At 20 years: 93%. Unlike most performance comparisons, SPIVA corrects for survivorship bias — it counts funds that closed or merged, not just those still standing.



[Chart: percentage of active large-cap US equity funds that underperformed the S&P 500 across six time horizons — 79% over one year, 67% over three years, 89% over five years, 85% over ten years, 90% over 15 years, and 93% over 20 years. The three-year figure is the lowest in the series, and the horizon most commonly used in fund family marketing materials. Source: S&P Dow Jones Indices LLC, CRSP. Data as of 31 December 2025]
The longer you look, the worse it gets for active managers. The three-year figure is the one most likely to appear in a fund family's marketing materials, and it is the outlier that flatters. Stretch to 20 years and 93% of large-cap active funds have underperformed.


There's one apparent bright spot: the three-year figure drops to 67%. Some fund managers point to this as evidence that shorter-term skill exists. It isn't. It shows that shorter windows produce more flattering results — which is exactly why fund families prefer them.
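The short-window effect is easy to demonstrate with a toy simulation (the numbers here are entirely illustrative, not SPIVA's data): give every fund zero skill and a small annual fee drag, and a noticeably larger share of them still look like "winners" over three years than over 20, simply because noise dominates short horizons.

```python
import random

random.seed(0)

def share_beating_benchmark(n_funds: int, years: int,
                            mean_alpha: float = -0.01,
                            alpha_std: float = 0.08) -> float:
    """Fraction of simulated no-skill funds whose cumulative excess
    return is positive over the window. mean_alpha is an assumed
    annual fee drag; alpha_std is year-to-year noise."""
    wins = 0
    for _ in range(n_funds):
        cumulative = sum(random.gauss(mean_alpha, alpha_std)
                         for _ in range(years))
        if cumulative > 0:
            wins += 1
    return wins / n_funds

short_window = share_beating_benchmark(10_000, 3)
long_window = share_beating_benchmark(10_000, 20)
# With no skill at all, more funds look like winners over three years
# than over 20 -- short windows flatter.
```

None of the simulated funds has any skill; the only thing that changes between the two numbers is the length of the measurement window.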



It's not dishonesty — it's just rational business behaviour


The student who edits their school report isn't a bad person. They're responding rationally to incentives. Good grades mean approval, privileges, perhaps a reward. So they present the evidence that supports the outcome they want.


Fund families operate under identical logic. Assets under management drive revenue. Outperformance claims drive assets. Presenting the most favourable version of your performance record is simply good business. The maths is straightforward even if the practice is uncomfortable.


This isn't a problem of individual bad faith. It's structural. When the reward for appearing to outperform is large, and when nothing requires a standardised, independently verified presentation of results, selective reporting becomes the default.


Which means the solution isn't to find a more trustworthy fund family. The incentives are the same everywhere. What changes is knowing what to look for.



Four ways to make 20 years of data say almost anything


Four techniques recur in fund family marketing, each another page torn from the school report. Together, they can make almost any fund family's record look far better than it deserves.


Start the clock at the right moment. 'Since inception' sounds reassuringly long-term. But inception dates aren't chosen randomly. Start a fund at a market trough, or after a strategy has already begun performing well, and the subsequent record looks stronger than it would from a neutral starting point. That choice rarely gets a mention in the press release.


Choose a beatable benchmark. A globally diversified fund measured against a domestic index, or a strategy benchmarked against a category it doesn't quite fit, produces flattering relative numbers. The benchmark does a lot of quiet work.


Report gross of fees. Capital Group's equity figure dropped from 91% to 84% once fees were accounted for, according to Wigglesworth's analysis in FT Alphaville. That seven-percentage-point gap is real money leaving real investors' accounts.


Pick a short window. The three-year underperformance rate for large-cap active funds is substantially lower than the ten or 20-year equivalents. Fund families know this. The marketing materials reflect it.


None of these techniques involves fabricating data. That's what makes them effective.
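The gross-versus-net gap, in particular, compounds. A minimal sketch with made-up numbers (not Capital Group's), using a simplified model where the fee is deducted from each year's return, shows how much of the final balance a "mere" 1% annual fee consumes over 20 years:

```python
def final_wealth(annual_return: float, annual_fee: float,
                 years: int, start: float = 1.0) -> float:
    """Grow a starting balance, deducting the fee from each
    year's return (a simplification for illustration)."""
    return start * (1.0 + annual_return - annual_fee) ** years

gross = final_wealth(0.08, 0.00, 20)   # no fee
net = final_wealth(0.08, 0.01, 20)     # 1% annual fee
fee_drag = 1.0 - net / gross
# Roughly 17% of the final balance is gone -- far more than
# "one percent a year" sounds.
```

The same arithmetic is why a seven-percentage-point gap between gross and net success rates is worth pausing over: the fee is charged every year, and the loss compounds.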



One question fund families can't answer: was it skill or luck?


Even if you accept a fund family's figures entirely — the benchmark, the time period, the net-of-fees number — one question remains. Did the manager outperform through skill, or did they get lucky?


Good exam results reflect something real. A fund's outperformance over any given period might be noise — the financial equivalent of calling heads and being right several times running.


The t-statistic is the independent examiner. It doesn't ask whether a fund beat its benchmark. It asks whether that outperformance is large enough, and consistent enough, to be statistically distinguishable from chance.


The results are striking. Index Fund Advisors tracked 2,116 US mutual funds over 20 years. Only 8.83% delivered positive alpha against their benchmarks. Among that small group of apparent winners, the average fund would need 153 years of data to demonstrate skill at the 95% confidence level. Not 20 years. Not 40. 153.
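The arithmetic behind that kind of claim is short enough to sketch. Under a standard one-sample t-test on annual excess returns, the t-statistic is the mean alpha divided by its standard error, and the formula can be inverted to ask how many years of data it would take to reach a chosen critical value. The fund below is hypothetical; the numbers are illustrative, not IFA's.

```python
import math

def t_statistic(mean_alpha: float, alpha_std: float, n_years: int) -> float:
    """t = mean annual alpha / standard error of the mean."""
    return mean_alpha / (alpha_std / math.sqrt(n_years))

def years_needed(mean_alpha: float, alpha_std: float,
                 t_crit: float = 2.0) -> float:
    """Invert the t formula: years of data needed to reach t_crit
    (t_crit = 2 is close to the two-sided 95% threshold)."""
    return (t_crit * alpha_std / mean_alpha) ** 2

# A hypothetical fund: 1% average annual alpha, 8% volatility of alpha.
t20 = t_statistic(0.01, 0.08, 20)     # ~0.56 after 20 years -- far from 2
required = years_needed(0.01, 0.08)   # ~256 years to clear t = 2
```

A respectable-looking 1% average alpha, with typical year-to-year noise, gets nowhere near statistical significance in a 20-year record — which is the point the 153-years figure is making.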


No fund family publishes that figure in its press release.



Three questions that cut through any fund family's outperformance claim


The next time a fund family makes an outperformance claim, ask for the full report. Three questions do most of the work.


What benchmark was used? If the fund invests globally but is measured against a domestic index, the comparison is flattering by design. Work out what benchmark would be most appropriate, then check the fund's performance against it.


What was the start date — and why that date? 'Since inception' tells you nothing useful unless you know when inception was and what the market looked like at the time. Convenient start dates are common.


Are the figures gross or net of fees? The gap can be substantial. Net is the only number that matters.


For a deeper test, IFA's t-statistic calculator lets you input any fund's average excess return, its year-by-year variability, and the length of its track record — and tells you whether the result points to skill or luck. The full methodology is in this piece I wrote for Index Fund Advisors. Most funds don't survive the test.


Demanding these answers isn't scepticism. It's asking to see the whole report.




The fund family that shows you everything doesn't exist yet


Somewhere, there's a student who hands over the whole school report — every subject, every grade, nothing removed. We're still waiting for the fund family that does the same.


No major active manager publishes a full, unedited performance record: survivorship-adjusted, net of fees, benchmarked appropriately, with statistically tested alpha across independent time periods. The closest thing to that standard is SPIVA — which is precisely why some fund families push back against it.


You now know what to ask for, and how to run the test yourself. Most investors never get this far.




You understand the evidence. Now find an adviser who believes it too


Our Find an adviser directory lists professionals committed to low-cost, globally diversified portfolios and straight answers. Connect with someone who'll respect your knowledge instead of trying to sell you expensive active funds. Search your area now.


Stay connected: YouTube | LinkedIn | X






The Evidence-Based Investor is produced by Regis Media, a specialist provider of content marketing for evidence-based advisers.
Contact Regis Media

  • LinkedIn
  • X
  • Facebook
  • Instagram
  • YouTube
  • TikTok

All content is for informational purposes only. We make no representations as to the accuracy, completeness, suitability or validity of any information on this site and will not be liable for any errors or omissions or any damages arising from its display or use.

Full disclaimer.

© 2026 The Evidence-Based Investor. All rights reserved.
