The AI Reckoning: Michael Burry’s Warning on the Next Great Write-Off

In characteristic fashion, Dr. Michael Burry has issued a stark warning about the artificial intelligence investment boom, predicting widespread bankruptcies and massive write-offs in what he suggests could become the “Panic of 2026” or 2027. His comment, responding to a bullish AI thesis from analyst Tae Kim, represents more than contrarian posturing—it’s a fundamental challenge to the current market narrative that AI represents an inevitable, profitable revolution comparable to the smartphone era.
The timing is notable. Burry’s warning comes as Nvidia approaches a market capitalization that would make it one of the most valuable companies in human history, as venture capital floods into AI startups at unprecedented rates, and as every major corporation rushes to demonstrate AI credentials to investors. It’s precisely the kind of consensus enthusiasm that has historically preceded Burry’s most prescient calls.
The Bull Case: Tae Kim’s AI Supercycle
To understand Burry’s skepticism, we must first examine what he’s arguing against. Tae Kim’s thesis represents the prevailing optimism in AI investment circles, built on several pillars:
Kim’s argument encompasses a comprehensive vision of AI transformation. He predicts Nvidia’s revenue will accelerate on the strength of new hardware configurations that dramatically increase computing power. He foresees search transitioning to AI chatbots, with ChatGPT eventually monetizing through digital advertising. He believes productivity gains will be substantial and that current losses will transform into profits as compute costs decline. Most significantly, he frames this as analogous to the iPhone revolution—a comparison that suggests not just growth, but a fundamental restructuring of how technology serves humanity.
It’s a compelling narrative. It’s also exactly the kind of narrative that makes Michael Burry nervous.
The Return on Investment Question
Burry’s core assertion—that “return on investment will continue to fall”—cuts to the heart of the AI investment thesis. This isn’t a prediction that AI won’t work or won’t be useful. It’s an economic argument about the relationship between capital deployed and returns generated.
Consider the current state of AI investment. Companies are spending extraordinary sums on GPU clusters, data centers, energy infrastructure, and talent. Microsoft alone has committed to spending roughly $80 billion on AI infrastructure in fiscal 2025. Meta, Google, Amazon, and others are each deploying tens of billions. This represents one of the largest coordinated capital expenditure cycles in corporate history.
- Microsoft: ~$80 billion in AI infrastructure (FY2025)
- Combined Big Tech AI CapEx: approaching $200 billion annually
- Nvidia H100 clusters: $250-500 million per major installation
- Energy requirements: Some facilities consuming power equivalent to small cities
Against this investment tsunami, what are the returns? ChatGPT, despite its popularity, reportedly runs at a loss on most user interactions. AI-powered features are being given away for free or minimal cost to drive adoption. Enterprise AI solutions command modest premiums relative to their infrastructure costs. The gap between capital deployed and revenue generated remains enormous and, Burry suggests, widening.
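To make that gap concrete, a rough back-of-envelope calculation helps. The sketch below (in Python) uses purely illustrative assumptions: an annual industry capex figure, a hardware depreciation schedule, a gross margin, and a required return on capital. It asks how much incremental AI revenue would be needed each year just to earn a conventional return on the spending.

```python
# Back-of-envelope: revenue needed to justify AI infrastructure spending.
# Every input below is an illustrative assumption, not a reported figure.

annual_capex = 200e9         # assumed combined Big Tech AI capex per year (USD)
useful_life_years = 4        # assumed useful life of GPU/data-center hardware
gross_margin = 0.60          # assumed gross margin on AI services sold
required_roic = 0.10         # assumed minimum acceptable return on invested capital

# Straight-line depreciation: the hardware must pay for itself over its life.
annual_depreciation = annual_capex / useful_life_years

# Gross profit must cover depreciation plus the required return on capital.
required_gross_profit = annual_depreciation + annual_capex * required_roic
required_revenue = required_gross_profit / gross_margin

print(f"Annual depreciation:        ${annual_depreciation / 1e9:.0f}B")
print(f"Required gross profit:      ${required_gross_profit / 1e9:.0f}B")
print(f"Implied AI revenue needed:  ${required_revenue / 1e9:.0f}B per year")
# With these assumptions, roughly $117B of incremental AI revenue is needed
# every year, before energy, staffing, or model-training operating costs.
```

However one tunes these inputs, the required revenue is large relative to what AI services visibly generate today; that gap is what Burry is pointing at.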
Historical Parallels: The Ghosts of Bubbles Past
Burry’s warning about falling ROI and eventual write-offs echoes patterns from previous technology cycles, particularly the dot-com era. The parallels are instructive, though not perfect.
The Dot-Com Echo
In the late 1990s, telecommunications companies spent approximately $500 billion building fiber-optic networks to support the internet boom. The technology was real, the future was correctly anticipated, and the infrastructure was genuinely needed. Yet, the scale of investment so exceeded near-term demand that most companies went bankrupt, much of the debt was written off, and investors lost fortunes.
The irony? That infrastructure eventually enabled cloud computing, streaming services, and the digital economy we know today. The vision was correct; the investment timeline and scale were catastrophically wrong. Being right about the future doesn’t make you profitable in the present.
Today’s AI investment follows a similar pattern. The technology is remarkable, the long-term potential is genuine, but the capital deployed to capture that future may vastly exceed what can be profitably absorbed in the timeframe investors expect returns.
The Profitability Mirage
Tae Kim argues that “compute performance continues to improve and costs will come down… Today’s loss-making features will become enormously profitable in due time.” This is the classic technology investment thesis: lose money now, make it up on scale and efficiency later.
Sometimes this works. Amazon lost money for years before economies of scale and network effects made it extraordinarily profitable. But for every Amazon, there are dozens of failures like Pets.com, Webvan, and Kozmo.com—companies that burned capital expecting efficiency gains that never materialized at the required scale.
Burry’s skepticism focuses on a crucial distinction: Will efficiency gains outpace competitive dynamics that force companies to give away improvements rather than monetize them? In AI, we’re already seeing this dynamic. When GPT-4 becomes more efficient, OpenAI uses those gains to lower prices or add features rather than harvest profits. When competitors match capabilities, pricing power evaporates.
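A toy model makes the dynamic visible. The starting prices and decline rates below are hypothetical; the point is the relationship between them (costs falling while competition pushes prices down even faster), not the specific numbers.

```python
# Toy model: unit economics when serving costs fall but competitive pressure
# pushes prices down even faster. All starting values and rates are hypothetical.

price_per_1k_queries = 10.00   # assumed price charged per 1,000 queries (USD)
cost_per_1k_queries = 8.00     # assumed serving cost per 1,000 queries (USD)
annual_price_decline = 0.40    # prices assumed to fall 40% per year
annual_cost_decline = 0.30     # costs assumed to fall 30% per year

for year in range(6):
    margin_pct = (price_per_1k_queries - cost_per_1k_queries) / price_per_1k_queries
    print(f"Year {year}: price ${price_per_1k_queries:5.2f}  "
          f"cost ${cost_per_1k_queries:5.2f}  margin {margin_pct:7.1%}")
    price_per_1k_queries *= 1 - annual_price_decline
    cost_per_1k_queries *= 1 - annual_cost_decline

# Costs fall every single year, yet the margin narrows and then turns negative,
# because pricing power erodes faster than efficiency improves.
```

Under these assumptions the efficiency gains are real, but they accrue to users rather than to the companies funding the infrastructure. That commoditization risk sits at the core of Burry’s ROI argument.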
The Bankruptcy Prediction: “Almost All AI Companies”
Burry’s assertion that “almost all AI companies will go bankrupt” is his most dramatic claim, and it’s worth examining carefully. This doesn’t mean AI disappears or that the technology fails—it means most companies trying to commercialize AI will not generate sufficient returns to justify their capital structure.
Consider the current AI startup landscape. Thousands of companies have raised hundreds of millions, sometimes billions, on valuations predicated on capturing pieces of what’s predicted to be a multi-trillion-dollar market. But several factors work against most of them:
- Concentration of resources: Only the largest players can afford state-of-the-art training infrastructure
- Winner-take-most dynamics: AI capabilities may consolidate around a few foundation models
- Margin compression: Open-source alternatives constantly undermine pricing power
- Integration risk: Big Tech can bundle AI features, making standalone AI products obsolete
- Capital intensity: Continued improvement requires continued massive investment
History suggests Burry may be conservative. In the dot-com crash, approximately 90% of internet companies failed. In the cleantech boom of the late 2000s, the bankruptcy rate among funded companies exceeded 80%. When capital flows freely into a hot sector, most recipients ultimately fail to generate returns justifying their investment.
The survivors will likely be one of three types: (1) the hyperscale players with resources to sustain losses while building moats, (2) companies solving very specific, high-value problems for which customers will pay, or (3) those acquired before their valuations become untenable. Everyone else faces a brutal winnowing.
Timing the Panic: 2026? 2027? Or Later?
Burry’s reference to a potential “Panic of 2026” or 2027 is characteristically provocative but deliberately uncertain. The appended phrase “Does not have to be” is particularly interesting—it suggests the question is not whether a reckoning comes, but when and how it unfolds.
Several catalysts could trigger a broader reckoning:
Earnings disappointments: If companies that have spent billions on AI infrastructure fail to show corresponding revenue growth or margin improvement, investor patience will evaporate. The first major earnings cycle where AI investment is questioned rather than celebrated could mark an inflection point.
Competitive commoditization: As models become comparable in capability, the only remaining competitive lever is price. A race to the bottom in AI service pricing would devastate companies that need premium pricing to justify their infrastructure costs.
Energy and resource constraints: The power requirements for AI training and inference are staggering. If energy costs spike or if regulatory pressure limits data center expansion, the economics worsen considerably.
Use case disappointment: The gap between AI’s capabilities and its practical value in typical business processes may prove larger than anticipated. If the “Cursor for every vertical” doesn’t materialize, or if productivity gains are modest rather than transformational, the investment case collapses.
The “Does Not Have To Be” Caveat
Burry’s final phrase—“Does not have to be”—is easy to overlook but may be the most important part of his statement. It suggests the AI investment bubble is not inevitable destiny but rather a choice we’re making about capital allocation and expectations.
A more rational deployment of AI investment would look different from what we’re seeing. Companies would invest based on demonstrated returns rather than fear of missing out. Valuations would reflect realistic paths to profitability rather than assumed exponential growth. Investors would distinguish between transformative technology and profitable investment opportunities.
The problem is that individual rationality doesn’t prevent collective irrationality. No single company can afford to fall behind in AI investment, even if they’re collectively over-investing. No investor wants to miss the next Google, even if it means funding ninety-nine failures to find it. The result is a self-reinforcing cycle where everyone knows the aggregate investment exceeds rational levels, but no one can unilaterally pull back without career risk.
This is precisely the dynamic that characterizes bubbles. And it’s why Burry’s warnings, however early they may prove to be, deserve serious consideration.
The Write-Off Wave
Burry’s prediction that “much of the AI spending will be written off” may be his most defensible claim. Even if AI ultimately transforms the economy, the amount being spent now likely exceeds what can be productively deployed in the current market.
Consider what “write-offs” might look like:
Stranded infrastructure: GPU clusters purchased for capabilities that became obsolete as models evolved, or for demand that never materialized.
Failed acquisitions: Strategic purchases of AI startups that never justified their price tags or integrated successfully.
Goodwill impairments: As public AI companies see valuations compress, acquirers will be forced to recognize losses on their AI investments.
Debt defaults: Highly leveraged AI infrastructure investments that assumed revenue growth may face restructuring or bankruptcy.
Venture losses: The current AI startup bubble will leave many funds nursing substantial write-downs when their portfolio companies fail to exit profitably.
The total could easily reach hundreds of billions. That doesn’t mean AI was a mistake—it means we over-invested relative to the technology’s current ability to generate returns.
Why Burry Might Be Wrong (This Time)
Intellectual honesty requires acknowledging the case against Burry’s position. Several factors could invalidate his warnings:
Accelerating practical value: If AI productivity gains materialize faster and more broadly than skeptics expect, revenue growth could catch up to investment levels. Enterprise adoption of AI tools might follow a steeper curve than previous technology cycles.
Efficient monetization paths: Tae Kim’s advertising integration thesis could prove correct. If AI chatbots can effectively monetize attention through ads while maintaining user experience, the economics transform dramatically.
Continued cost declines: Computing efficiency has historically improved faster than most predict. If training and inference costs fall by 10x over the next two years, much of Burry’s ROI concern becomes moot (the arithmetic is sketched after these counterarguments).
Winner-take-most dynamics favoring incumbents: The current wave of investment might not lead to widespread bankruptcies if the Big Tech players who can afford sustained losses ultimately capture most of the value. In this scenario, Burry would be right about startup failures but wrong about systemic write-offs among the major players.
The “iPhone moment” thesis: Kim’s comparison to the iPhone versus BlackBerry transition suggests a paradigm shift so profound that conventional economic analysis fails. If AI truly represents a comparable inflection point, current investment levels might prove justified in retrospect.
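Of these counterarguments, the cost-decline point is the easiest to quantify. The sketch below uses hypothetical prices and costs and assumes, critically, that prices hold while serving costs fall tenfold.

```python
# Illustrative bull-case arithmetic: if prices hold while serving costs fall 10x,
# today's loss-making features become highly profitable. All figures are hypothetical.

price_per_1m_tokens = 2.00        # assumed price charged per million tokens (USD)
cost_per_1m_tokens_today = 5.00   # assumed fully loaded serving cost today (USD)
cost_decline_factor = 10          # bull-case assumption: a 10x fall in costs

cost_per_1m_tokens_future = cost_per_1m_tokens_today / cost_decline_factor

margin_today = (price_per_1m_tokens - cost_per_1m_tokens_today) / price_per_1m_tokens
margin_future = (price_per_1m_tokens - cost_per_1m_tokens_future) / price_per_1m_tokens

print(f"Margin today:              {margin_today:.0%}")   # -150%: every sale loses money
print(f"Margin after 10x decline:  {margin_future:.0%}")  #  75%: comfortably profitable
```

Whether prices actually hold while costs collapse is, of course, the same question the commoditization discussion raised earlier; the bull and bear cases hinge largely on that single assumption.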
What Burry’s Track Record Tells Us
Evaluating Burry’s AI warning requires understanding his historical pattern. He has been:
Consistently correct about fundamental problems. The dot-com bubble, the housing crisis, and various smaller dislocations he’s identified have materialized largely as he predicted.
Frequently early on timing. His warnings often precede market corrections by months or years, causing those who act immediately on his analysis to suffer short-term losses.
Focused on structural issues over market timing. He identifies unsustainable conditions rather than predicting specific crash dates, which is why his “Panic of 2026? 2027? Does not have to be” formulation is characteristically hedged on timing.
Willing to publicly acknowledge mistakes. Unlike most market commentators, he admits when his timing is wrong, even when his structural analysis proves correct.
Applied to AI, this suggests we should take seriously his concerns about ROI, potential bankruptcies, and eventual write-offs, while recognizing that the timing of any reckoning is inherently uncertain. The fundamental question isn’t whether Burry will be vindicated, but whether his vindication comes in 2026, 2027, 2030, or beyond—and how much pain occurs in the interim.
The Rational Response
How should investors and observers think about these dueling perspectives? Tae Kim’s bullish AI thesis and Burry’s bearish warning aren’t necessarily incompatible over different time horizons.
It’s entirely possible that:
- AI will eventually transform productivity and create enormous value (Kim is right about direction)
- Current investment levels vastly exceed near-term revenue potential (Burry is right about timing)
- Most AI startups will fail while a few companies capture most value (both are right, about different segments)
- Massive write-offs will occur even as the technology succeeds (right about the journey, not the destination)
The practical implication is that undifferentiated AI investment—buying the sector broadly, funding startups indiscriminately, or assuming all AI companies will succeed—represents enormous risk. The companies that survive and thrive will likely be those with one of three advantages: (1) the balance sheet to sustain losses indefinitely, (2) specific, defensible use cases with clear paths to profitability, or (3) unique technological advantages that prevent commoditization.
For most investors, the rational approach might be to acknowledge both perspectives: believe in AI’s long-term potential while remaining skeptical about current valuations and near-term profitability. This means being selective rather than indiscriminate, patient rather than momentum-driven, and focused on fundamentals rather than narratives.
The Broader Implications
Beyond investment strategy, Burry’s warning raises questions about capital allocation in modern markets. We’re witnessing one of the largest coordinated investment waves in history, driven by a combination of technological potential, competitive fear, and abundant capital seeking returns.
The efficiency of this capital deployment matters not just to investors but to broader economic productivity. If hundreds of billions are being invested in AI infrastructure that generates minimal returns, that’s capital unavailable for other productive uses. If thousands of talented engineers are building redundant AI chatbots instead of solving other problems, that’s an opportunity cost to innovation.
The question isn’t whether to invest in AI—clearly the technology merits investment. The question is whether we’re investing too much, too fast, in too many redundant approaches. Burry’s warning suggests we are, and that a reckoning is inevitable.
Conclusion
Michael Burry’s warning about the AI investment boom—falling returns, widespread bankruptcies, massive write-offs, and a potential panic in 2026 or 2027—follows a familiar pattern. He’s identifying structural problems in current enthusiasm while acknowledging uncertainty about timing. His track record suggests we should take these concerns seriously, even if his timeline proves early.
The fundamental tension is between AI’s genuine potential and the unsustainable economics of current investment levels. Tae Kim’s bullish thesis may prove correct about AI’s long-term trajectory while Burry’s bearish warning proves correct about near-term financial carnage. The two perspectives aren’t mutually exclusive—they’re operating on different time horizons and focusing on different aspects of the same phenomenon.
What’s certain is that the current scale of AI investment cannot continue indefinitely without corresponding returns. Either revenue will catch up to investment (validating the bulls), or investment will contract to match realistic near-term returns (validating the bears). The “Panic of 2026” may or may not materialize, but a reckoning of some form appears inevitable. As Burry notes, it “does not have to be” a panic—but avoiding one would require a level of coordinated rationality that has eluded markets in previous technology booms.
In the end, we may discover that both men are right: AI will transform the world as Kim predicts, but only after the kind of financial wreckage Burry warns about. The technology will succeed; most of the companies trying to commercialize it will not. Such is the nature of transformative technology cycles—the promise is real, but the path is littered with failure.