Last year, Nvidia's stock price tripled. OpenAI and its partners are planning roughly $500 billion in AI data centers under the Stargate project. Microsoft just committed another $80 billion to AI infrastructure. Meanwhile, nobody can quite explain how any of this actually makes money.
We're watching the largest capital expenditure in tech history unfold in real-time, and almost nobody is talking about the elephant in the room: What if this doesn't work?
The Compute Obsession That's Reshaping Silicon Valley
The current AI boom is, fundamentally, an energy and compute problem dressed up as an intelligence problem. Training modern large language models requires absolutely staggering amounts of computational power. GPT-4 reportedly cost somewhere between $100 million and $1 billion to train—though nobody will confirm the exact figure, because the number itself has become a strange status symbol.
Companies have started throwing money at this problem like it's going out of style. Nvidia, which makes the GPUs that power nearly all AI training, has become the most valuable semiconductor company on Earth. Its total revenue hit $60.9 billion in fiscal 2024, with the data center division supplying the large majority of it. That's not a typo. That's mostly one product category: AI chips.
But here's where it gets weird: The returns on these massive infrastructure investments remain mysteriously fuzzy. Meta spent over $30 billion on AI last year and still hasn't figured out how to make money from it. Google threw countless billions at Gemini and Bard, which have maybe 2% of ChatGPT's market share. Even OpenAI, the poster child of AI success, has never publicly disclosed actual profitability numbers.
Sam Altman—the CEO of OpenAI—has been unusually candid about this. He's basically admitted that the current training paradigm hits a wall. You can't just keep throwing more compute at the problem forever. The returns diminish. The costs explode. Eventually, physics and economics collide.
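The diminishing returns Altman alludes to show up directly in the empirical scaling laws these labs rely on: loss tends to fall as a power law in training compute, so each additional order of magnitude of spending buys a smaller improvement than the last. Here's a minimal sketch of that dynamic; every constant below is made up for illustration, not a measured value.

```python
# Hypothetical illustration of diminishing returns under a power-law
# scaling curve: loss(C) = L0 + k * C**(-alpha). The constants L0, k,
# and alpha are invented for this sketch, not fitted to any real model.

def loss(compute, L0=1.7, k=400.0, alpha=0.35):
    """Model loss as a function of training compute (arbitrary units)."""
    return L0 + k * compute ** -alpha

# Each 10x increase in compute buys a smaller absolute improvement
# than the 10x before it.
for exp in range(6, 12):
    c = 10.0 ** exp
    gain = loss(c / 10) - loss(c)  # improvement from the most recent 10x
    print(f"compute=1e{exp}: loss={loss(c):.4f}, gain from last 10x={gain:.4f}")
```

Under any curve of this shape, compute costs grow ten times faster than the improvements they purchase, which is the wall the scaling paradigm eventually hits.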
Why Everyone Is Pretending This Is About Artificial General Intelligence
There's a narrative that conveniently sidesteps the profitability question: We're building toward AGI—Artificial General Intelligence. A theoretical future where AI systems can do anything a human can do. If that happens, the argument goes, then spending $500 billion on infrastructure now doesn't matter. It'll be the best investment humanity ever made.
It's a seductive story. It also happens to be perfectly unfalsifiable in the short term.
Notice how the timelines for AGI keep shifting? Five years ago, researchers said we were decades away. Three years ago, some suggested it could arrive by 2030. Now the speculation ranges from 2025 to 2100 depending on who you ask. That's not convergence on a real prediction. That's a goalpost that moves whenever it gets close to being reached.
The AGI narrative serves a crucial function though: It allows companies to justify spending that would otherwise look reckless. If you're building toward godlike AI that solves all of humanity's problems, then $80 billion sounds cheap. If you're just trying to make ChatGPT slightly smarter than it already is? That becomes harder to justify.
This might sound cynical, but consider that every major player in this space—Microsoft, Google, Meta, Amazon—has positioned itself as essential to AGI development. They've all got skin in the game. They've all got incentives to keep the money flowing.
The Efficiency Problem Nobody Wants to Admit
Here's something that troubles AI researchers but rarely makes headlines: Modern language models are wildly inefficient. We're training them with techniques that haven't fundamentally changed in years—just scaled up to absurd proportions. It's like answering the question "How do we get better results?" exclusively by throwing more money at the problem.
A human brain uses about 20 watts of power. GPT-4 reportedly needed orders of magnitude more energy to train and continues guzzling electricity every time you ask it a question. Not because it's inherently smarter than humans, but because we haven't figured out how to build efficient reasoning systems at scale.
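Some rough arithmetic makes the gap vivid. Neither figure below is official: the 20-watt brain is a standard textbook estimate, and the training-energy number is one widely circulated third-party guess, so treat this as an order-of-magnitude sketch only.

```python
# Back-of-envelope comparison. Both inputs are rough estimates:
# neither OpenAI's training energy nor a brain's exact draw is known precisely.

BRAIN_WATTS = 20                 # common textbook estimate for a human brain
GPT4_TRAINING_KWH = 50_000_000   # ~50 GWh, one circulated rough estimate

# Energy a brain would consume running nonstop for 30 years, in kWh:
brain_30yr_kwh = BRAIN_WATTS * 24 * 365 * 30 / 1000

ratio = GPT4_TRAINING_KWH / brain_30yr_kwh
print(f"brain over 30 years: {brain_30yr_kwh:,.0f} kWh")
print(f"training estimate is roughly {ratio:,.0f}x that")
```

Even with generous error bars on both numbers, the ratio stays in the thousands, which is the efficiency gap the text is pointing at.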
Some researchers are exploring completely different approaches—reinforcement learning, embodied AI, neuromorphic computing—but these don't generate the hype, or attract the venture capital, that "bigger transformer models" do. They're also harder to explain on an earnings call. When your investors are waiting for returns, talking about fundamentally rethinking AI architecture instead of scaling up the existing one reads like stalling.
The uncomfortable truth is that we might already be close to exhausting the easy gains in this space. The next breakthrough might require genuine innovation rather than just more Nvidia chips. And innovation is unpredictable. It can't be scheduled. It can't be guaranteed with enough funding.
Why This Matters Beyond Tech Stock Prices
This isn't just finance nerd stuff. This infrastructure spending is reshaping the world in concrete ways. AI training facilities are consuming enormous amounts of water and electricity, stressing power grids and water supplies. Nvidia's supply chains have geopolitical implications—their chips are now strategic assets, which is why the US government restricts their sale to certain countries.
The concentration of AI power in a handful of companies funded by the same venture capital ecosystem creates a strange monoculture. Everyone is building bigger versions of the same thing. The diversity of approaches that usually drives real innovation gets squeezed out when billions of dollars are only available if you're building toward the same vision.
And if—and it's still an if—this spending binge doesn't actually lead to proportional advances in AI capability, we might be looking at the biggest wasted capital allocation since the dot-com bubble. That wouldn't be a minor correction. That would reshape entire companies, destroy a lot of investor wealth, and probably make AI funding genuinely difficult for years afterward.
If you're curious about AI's current failure modes, the weird reality of AI hallucinations reveals a lot about what's actually happening under the hood.
What Happens Next Matters More Than You'd Think
The next two to three years will probably determine whether this infrastructure spending looks brilliant or delusional in retrospect. Either we'll see genuine breakthroughs that justify the investment, or we'll see the narrative start to crack as the returns stubbornly refuse to materialize.
There's also a possible middle ground: The technology keeps improving gradually, but not dramatically. Good enough for enterprise use cases and consumer products, but not transformative. That scenario might actually be the worst from an investment perspective—profitable enough to keep going, but not profitable enough to justify the capital deployed.
What we probably won't see is honesty about the actual financial returns. That's not how corporate earnings calls work. We'll get carefully worded statements about "early traction" and "long-term positioning" and "strategic bets on the future." Translation: "We're spending billions and we're not entirely sure if it's working yet, but we're committed now."
That's not necessarily wrong. Real innovation does require bets that don't always pay off. But it's worth understanding that a significant portion of the AI hype we're experiencing isn't based on proven returns. It's based on hope, narratives about AGI, and the very human tendency to assume that because something is expensive and everyone's doing it, it must be important.
The poker game continues. The chips keep getting pushed into the middle of the table. And everyone's pretending they can see the cards.
