Does the January barometer really work?
- Robin Powell

Every January, the same prediction resurfaces: watch how the market moves this month, because the January barometer will tell you what's coming for the rest of the year. It's one of the most enduring ideas in investing. But is there anything to it?
The January barometer is one of investing's most persistent notions. The idea is simple: as January goes, so goes the year. If stocks rise in the first month, the thinking runs, they'll finish the year higher. If they fall, brace yourself for losses. It's a neat story — and one that financial media trots out every winter. But does it actually work?
The concept dates back to 1972, when Yale Hirsch first codified it in the Stock Trader's Almanac. Since then, it's become an annual ritual. Financial journalists dust it off every New Year like a favourite decoration. Commentators cite impressive-sounding accuracy figures, somewhere between 75% and 85%. Some traders swear by it.
Those numbers aren't invented. The January barometer has "worked" about three-quarters of the time historically. When January finishes up, the full year tends to finish up. When January falls, the year often disappoints.
So why the scepticism? Because a thermometer that reads "warm" on 75% of days sounds accurate. Until you realise you live somewhere that's warm 75% of the time anyway. The reading tells you nothing you didn't already know.
The question isn't whether the January barometer has been right. It's whether it's been right for the right reasons, or whether investors are staring at a broken instrument and mistaking coincidence for insight.
"A thermometer that reads 'warm' on 75% of days sounds accurate. Until you realise you live somewhere that's warm 75% of the time anyway."
Two effects, one confusing name
Before examining the evidence, we need to untangle a common confusion. There are two "January effects" in finance, and they're completely different claims.
The original January effect refers to a pattern where small-cap stocks tend to outperform in January. The usual explanation involves tax-loss harvesting: investors sell losers in December to crystallise losses for tax purposes, then buy back into small caps in the new year. Whether this still works is debatable, but it's a distinct phenomenon.
The January barometer is something else entirely. This is the idea that January's direction predicts the rest of the year. Positive January, positive year. Negative January, trouble ahead. It's a forecasting tool, not a trading pattern.
The academic literature sometimes calls the barometer the "Other January Effect", which only adds to the confusion.
This article focuses on the barometer. The predictive claim. Does watching January tell you anything useful about the next 11 months?
For a while, serious researchers thought it might.
The early evidence looked convincing
The January barometer isn't folklore. For a time, it had serious academic backing.
The most influential study came from Michael Cooper, John McConnell, and Alexei Ovtchinnikov in 2006. They examined US stock returns stretching back to 1825. That's almost two centuries of data. And they found something striking: January's direction did seem to predict what happened over the following 11 months.
The effect survived their attempts to kill it. They controlled for business cycles, macroeconomic variables, investor sentiment, even the presidential election cycle. January's predictive power held up. It worked for large caps and small caps, for value stocks and growth stocks.
The numbers were economically meaningful too. Average returns following a positive January were more than ten percentage points higher than returns following a negative one.
Brown and Luo found similar results in 2004, concluding that January provided better predictive signals than any other month over the period from 1941 to 2002.
If you'd stopped reading the research there, you might have been convinced. Almost two centuries of data. Controls for confounding variables. Meaningful effect sizes.
Then other researchers started asking awkward questions.
Then researchers asked harder questions
The January barometer's credibility collapsed once academics tested it properly.
The first blow came from extending the timeframe. Darrat, Li, and Chung repeated the analysis in 2013, but stretched the sample both further back and further forward in time. When they examined 1926 to 2012 instead of the 1940 to 2003 window that earlier studies favoured, the effect vanished. What looked like a persistent phenomenon turned out to be a quirk of one particular period.
Worse, February and September showed similar predictive patterns. If January were genuinely special, it should stand alone. It didn't.
Then came the international evidence. Ben Marshall and Nuttawat Visaltanachoti tested the January barometer across 22 equity markets in 2008. It failed in every single one. Not most. All.
In their 2010 paper, Bohl and Salm examined between 14 and 19 countries and found the effect held up in just two or three. Their conclusion was blunt: the January barometer is "nothing more than a statistical artifact."
"The January barometer is 'nothing more than a statistical artifact.'"
The data-mining problem became impossible to ignore. In a 2008 paper, Easton and Pinder tested all 12 calendar months across 38 countries. Eight of the other 11 months showed statistically significant effects in at least as many countries as January did. January wasn't a uniquely powerful signal. Researchers had simply looked at it more often and told a better story about it.
And even if you still believed the pattern existed, it didn't help you make money. Marshall and Visaltanachoti's follow-up work showed that trading on the January barometer failed to beat a simple buy-and-hold strategy once you accounted for transaction costs and risk. A true edge survives contact with reality. This one didn't.
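To see what that kind of test involves, here is a minimal sketch of the comparison in Python, using simulated monthly returns rather than real market data. Every number in it, from the return assumptions to the switching cost, is hypothetical, chosen purely to illustrate the mechanics.
```python
# A minimal sketch of comparing a January-barometer switching rule with
# buy-and-hold. All figures here are illustrative assumptions, not estimates
# drawn from the studies discussed above.
import numpy as np

rng = np.random.default_rng(42)
n_years = 60

# Simulated monthly returns with no link between January and later months:
# roughly 8% annual mean and 15% annual volatility (assumed).
monthly = rng.normal(0.08 / 12, 0.15 / np.sqrt(12), size=(n_years, 12))

switch_cost = 0.002   # assumed 0.2% cost each time the rule trades
cash_return = 0.0     # assume cash earns nothing while out of the market

buy_hold = 1.0
barometer = 1.0
for year in monthly:
    jan, rest = year[0], year[1:]
    buy_hold *= np.prod(1 + year)
    barometer *= 1 + jan                     # both strategies hold through January
    if jan > 0:
        barometer *= np.prod(1 + rest)       # positive January: stay invested
    else:
        barometer *= (1 - switch_cost)       # negative January: move to cash...
        barometer *= (1 + cash_return) ** 11
        barometer *= (1 - switch_cost)       # ...and buy back in the next year

print(f"Buy and hold:   £1 grows to £{buy_hold:.2f}")
print(f"Barometer rule: £1 grows to £{barometer:.2f}")
```
Because the simulated Januaries carry no information about the months that follow, the switching rule simply misses some good stretches and pays trading costs along the way, which is broadly what the follow-up research found once real-world costs and risk were taken into account.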
Our broken thermometer, tested in different climates around the world, turned out to be stuck.
Why the January barometer myth refuses to die
If the evidence is this weak, why does the January barometer keep appearing in financial commentary every year?
The base rate illusion explains most of it. Stock markets rise in roughly 70% to 75% of all years. Any "predictor" that mostly says "up" will be right most of the time, regardless of whether it's measuring anything real. The January barometer claims accuracy of 75% to 85%. But a coin that lands heads three-quarters of the time isn't predicting anything. It's reflecting the underlying odds.
This is why the broken thermometer analogy fits. The thermometer isn't forecasting warm weather. It's stuck on warm, and you happen to live somewhere warm.
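A back-of-the-envelope calculation shows how far base rates alone can take a forecast. The two probabilities below are illustrative assumptions, not historical estimates.
```python
# Back-of-the-envelope arithmetic for the base-rate illusion. Both
# probabilities are illustrative assumptions, not historical estimates.
p_year_up = 0.73   # assumed share of calendar years in which markets rise
p_jan_up = 0.60    # assumed share of Januaries that finish positive

# A "stuck thermometer" forecast that always predicts an up year is right
# exactly as often as years are up.
stuck_accuracy = p_year_up

# A January signal that is pure noise (independent of the rest of the year)
# still beats a coin flip, because both "up" rates are above one half.
noise_accuracy = p_jan_up * p_year_up + (1 - p_jan_up) * (1 - p_year_up)

print(f"Always-up forecast:   right {stuck_accuracy:.0%} of the time")
print(f"Pure-noise barometer: right {noise_accuracy:.0%} of the time")
```
Neither forecast contains any genuine information, yet the first is right nearly three-quarters of the time simply because markets rise in most years. And where the barometer is scored against the full calendar year, January's own return counts towards the outcome, nudging the hit rate higher still.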
Narrative appeal does the rest. January feels like a beginning. New year, fresh start, clean slate. We want it to mean something. Our pattern-seeking brains find significance in transitions, and the calendar provides an irresistible one every 12 months.
Financial media has its own incentives. Journalists need stories. The January barometer delivers a reliable annual hook with historical data, apparent accuracy rates, and a clear angle.
And confirmation bias works quietly in the background. When January rises and the year ends well, we remember. When the pattern breaks, we forget or explain it away. The hits accumulate in memory while the misses fade.
Simple rules feel actionable. "Watch January" is easier than "build a diversified portfolio, keep costs low, stay invested through volatility, and ignore short-term noise." One requires patience and discipline. The other fits in a headline.
What matters for investors
No calendar-based indicator reliably predicts markets. Not January. Not any other month.
Some researchers have pointed to valuation measures as better alternatives. Kim and Byun found in 2018 that the Shiller CAPE ratio showed stronger predictive power than the January barometer. But even here, caution is warranted. Valuation metrics can stay stretched for years, even decades. Being "right eventually" isn't much use if you're sitting in cash while markets double.
The honest answer is unglamorous. What predicts long-term investment success has nothing to do with reading market signals.
Costs matter. Every pound you pay in fees is a pound that isn't compounding for you. Diversification matters. Spreading risk across regions, asset classes, and styles protects you from concentrated disasters. Time in the market matters far more than timing the market. And behaviour matters most of all. The investors who do best are often the ones who do least.
The January barometer offers the illusion of control. These factors offer the real thing.
"The January barometer offers the illusion of control. These factors offer the real thing."
What to do instead
Ignore the January barometer. When you see it mentioned this month, and you will, keep scrolling.
Resist the itch to act on monthly market movements. One month of returns is noise. Thirty-one days of data cannot tell you what the next 334 will bring.
If you want to do something useful in January, make it something you can control. Review your fees. Are you paying more than you need to for funds that aren't delivering anything special? Check your asset allocation. Has drift pulled you away from your target? Increase your contributions if you can. The best time to invest is usually now.
The January barometer promises a shortcut. There isn't one.
The January barometer was never useful
That broken thermometer we started with? It was never measuring anything useful. It just happened to read "warm" in a place that's usually warm.
The January barometer works the same way. Its impressive-sounding accuracy rate is an illusion created by markets that rise most years anyway. When researchers tested it properly, extending the timeframe, checking other countries, comparing it to other months, the effect disappeared. What looked like a signal was noise with a good story attached.
You don't need to predict what markets will do this year. Nobody can, regardless of what January delivers. The investors who build wealth over decades aren't the ones spotting patterns in tea leaves. They're the ones who keep costs low, stay diversified, and resist the urge to act on every new headline.
So when the January barometer predictions arrive over the coming weeks, you can watch with detachment. You'll know something the commentators aren't telling you.
The thermometer is stuck. It always was.
Resources
Cooper, M. J., McConnell, J. J., & Ovtchinnikov, A. V. (2006). The other January effect. Journal of Financial Economics, 82(2), 315-341.
Brown, L., & Luo, L. (2004). The predictive value of the signs of January returns: Evidence of a new January effect.
Darrat, A. F., Li, B., & Chung, R. (2013). The other month effect: A re-examination of the "other January" anomaly.
Marshall, B. R. (2008). How accurate is the January barometer?
Bohl, M. T., & Salm, C. A. (2010). The other January effect: International evidence. European Journal of Finance, 16(2), 173-182.
Easton, S., & Pinder, S. (2008). A refutation of the existence of the other January effect.
Marshall, B. R., & Visaltanachoti, N. (2010). The other January effect: Evidence against market efficiency? Journal of Banking & Finance, 34(10), 2413-2424.
Kim, K., & Byun, J. (2018). Stock return predictability and seasonality.