A Hitchhiker’s Guide to the AI Bubble
“The competition for AGI—AI that surpasses humans at all cognitive tasks—is of fundamental geopolitical importance.”
That’s The Economist, last week. Not some breathless tech blogger or venture capitalist talking their book. The world’s most prestigious economic publication. Notice the framing – it treats AGI as a foregone conclusion and the race for it as a geopolitical inevitability.
They’re not wrong about the competition. They’re just wrong about what we’re competing for.
I started coding again last year. First time in 13 years. Not because I believe AGI is coming. I think it’s alchemy-level nonsense. I started because I suddenly could. Because somewhere between the $560 billion in AI infrastructure spending and the endless debates about consciousness, something genuinely revolutionary happened: machine learning became boring infrastructure.
Boring is the highest compliment I can give technology. Boring means it works. Boring means you stop thinking about how and start thinking about what. Electricity is boring. TCP/IP is boring. And now, after all the hype and terror and mysticism, AI is getting boring too.
But you wouldn’t know it from reading the headlines. When former prime ministers are writing op-eds about the AGI race, you know the fantasy has captured everyone – media, politicians, markets. They’re so busy staring at artificial general intelligence that they’re missing the actual revolution happening at ground level.
Two stories are unfolding simultaneously. One is a spectacular bubble built on geopolitical panic and sci-fi fantasies. The other is the quiet transformation of how we build everything. When the bubble narrative pops, the buildout accelerates.
The Evidence on the Ground
Nine months ago I started coding again. I’d been building systems since the ’80s – architecting investment fund migrations from mainframes to networked PCs in the City, building ERP for trading firms, then spending over a decade in enterprise consulting. But I hadn’t written production code since 2012.
Within weeks I built a serverless system processing 5 million social media posts daily, tracking topic clusters and emerging narratives in real time. Then brand monitoring dashboards. Then a “robojournalist” that could deep-dive any trending story. Then hardware and firmware specs for a coffee machine. Then my first mobile app.
Not toy projects. Real systems. In the time it used to take to set up a development environment.
Thirteen years away from code, and within weeks I was shipping production systems in languages I’d never used. The tools had evolved that much.
Scroll through any tech community and you’ll see senior developers emerging from semi-retirement like coders coming out of carbonite. People who’d graduated to PowerPoint and architecture diagrams, who’d barely touched an IDE in over a decade, are suddenly shipping products.
The vibe-coding community gets this. While we debate AI’s impact in boardrooms, they’re already building the future on Discord and shipping it to production. Yes, they’re creating security nightmares and accidentally deleting production databases. Of course they are. They’re inexperienced people wielding power tools. When we gave everyone electric saws, emergency rooms saw more accidents too. That’s not an argument against power tools.
While established developers debate whether AI will replace them, these kids are shipping. Veterans who learned their craft in the age of pull requests and sprint planning sneer at the newcomers’ security failures, not realising that ‘best practices’ are about to flip again. The barbarians aren’t at the gate. They’re deploying to production. And honestly? I’m a born-again barbarian myself.
And the patterns they’re creating – spec-driven development, AI-first workflows – are already being productised by big tech. The innovation is flowing upward.
I’m not alone in seeing this. As Christina Wodtke, Stanford lecturer and early web pioneer, recently noted: ‘The old timers who built the early web are coding with AI like it’s 1995. The same people who ignored crypto and rolled their eyes at NFTs are building again. When developers who’ve seen every tech cycle since Gopher start acting like excited newbies, that tells you something’.
And it’s not just about code. The other week GPT suggested I make jam from some obscure regional Thai plums I’d bought. Tiny things, sour as hell, no English name. I’d never made jam before. Took a photo of the carton, got the variety identified, received a recipe calibrated for their specific sourness (it even spotted that the sourness meant no added pectin was needed), then real-time guidance that adjusted based on photos of my pot. “Needs 3-4 more minutes,” it said, looking at the bubble pattern. It was right. This pattern – expertise on demand – is transforming everything from cooking to coding.
I work in Asia and see it daily: non-English speakers using AI as professional infrastructure. The language barrier just vanished – in both directions. People composing, analyzing, and creating across languages at native level. The data backs this up: over 80% of ChatGPT traffic comes from outside the US, with massive usage in India, Brazil, Japan. The economic implications are staggering.
Students aren’t asking “Is this cheating?” They’re asking “How do I build with this?” They’ll spend 40 years in the workforce. By the time they retire, working without AI will seem like working without electricity.
Something real is happening. Not in the research labs or boardrooms where they debate ASI timelines, but in the million small moments where people discover something new they can do. The old saying ‘Be realistic, demand the impossible’ was never more true.
The Inevitable Evolution
In 2018, if you wanted to use ML, you hired PhDs and bought GPUs. Custom everything.
By 2020, you could rent pre-trained models from OpenAI or Google. But integration was still bespoke. Every company had different APIs, different formats, different assumptions.
Then something shifted. The models converged on common patterns. Chat formats. System prompts. JSON modes. Consistent parameters. OpenRouter emerged to abstract away vendor differences. AWS Bedrock unified access to multiple models. Anthropic’s MCP (Model Context Protocol) is pushing standard tool interfaces and has been adopted across the industry. What looked like competition was actually standardisation.
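Here’s a minimal sketch of what that convergence means in practice – assuming OpenRouter’s OpenAI-compatible endpoint, with illustrative model names. The same chat-completion shape reaches models from rival vendors; switching is a one-string change.

```python
# A sketch of the converged API shape. The OpenRouter endpoint is real,
# but the model IDs are illustrative - swap in whatever is current.
from openai import OpenAI  # pip install openai

# One OpenAI-compatible client reaches models from competing vendors.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_API_KEY",  # placeholder
)

for model in ["openai/gpt-4o-mini", "anthropic/claude-3.5-sonnet"]:
    response = client.chat.completions.create(
        model=model,  # the only line that changes between vendors
        messages=[
            {"role": "system", "content": "You are a terse analyst."},
            {"role": "user", "content": "One sentence: why do standards drop prices?"},
        ],
    )
    print(model, "->", response.choices[0].message.content)
```

Five years ago each of those vendors would have demanded its own SDK, its own request format, its own assumptions. Now one client covers them all.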
Watch what happened to prices once standards emerged:
– GPT-3 (2020): $60 per million tokens
– GPT-3.5 (2022): $2 per million tokens
– GPT-3.5 (2024): $0.07 per million tokens
That is the price curve you get when a capability becomes infrastructure.
The clearest sign: how new tools are built. Cursor runs its own prompt optimisation but routes to commodity LLMs for the heavy lifting. Replit does the same. Neither is trying to compete with OpenAI on model training; they’re building experiences on top of someone else’s models.
This is textbook platform evolution – what Simon Wardley has been mapping for years. Standardisation enables scale. Scale drops prices. Low prices increase adoption. Adoption creates an ecosystem.
We see it today: LLM APIs for understanding. Embedding models for similarity. Vector storage for search. Components from different vendors. What cost millions to build custom in 2018 now costs hundreds to assemble. My social media analysis system is built entirely from such commodity blocks.
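If you’re curious what assembling those blocks actually looks like, here’s a minimal sketch (the embedding model name is a stand-in for whichever vendor you prefer): semantic search over posts is one API call plus a line of arithmetic.

```python
# Semantic search from commodity blocks: an embedding API for meaning,
# plain numpy for similarity. The model name is illustrative.
import numpy as np
from openai import OpenAI  # pip install openai numpy

client = OpenAI(api_key="YOUR_API_KEY")  # placeholder

posts = [
    "Central bank hints at rate cuts next quarter",
    "New open-weights model tops the coding benchmarks",
    "Local festival draws record crowds despite the rain",
]

def embed(texts):
    """Turn texts into vectors via a commodity embedding endpoint."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

corpus = embed(posts)                       # one API call for the corpus
query = embed(["monetary policy news"])[0]  # one for the query

# Cosine similarity: the entire "search engine" is one line of numpy.
scores = corpus @ query / (np.linalg.norm(corpus, axis=1) * np.linalg.norm(query))
print(posts[int(np.argmax(scores))])  # -> the rate-cuts post
```

In 2018 that pipeline was a funded research project. Now it’s an afternoon and a few dollars of API credit.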
The AGI crowd misses this completely. They’re debating consciousness while the ML stack is commoditising under their feet. They’re worried about ‘superintelligence’ while developers are treating AI as just another API to call.
The real revolution isn’t making machines think. It’s making them boring enough that nobody has to think about them.
The AGI Arms Race That Isn’t
A 16-to-1 ratio of AI infrastructure spending to revenue – roughly $560 billion chasing $35 billion – only makes sense if the winner takes everything.
Which is exactly what everyone believes.
Rishi Sunak writes op-eds about democratic values in the AGI race. The White House issues executive orders about AI safety. China announces AI supremacy targets. The EU drafts regulations for systems that don’t exist. Everyone’s racing for permanent technological supremacy.
This is your bubble. Not the technology – the shared delusion that someone’s about to achieve irreversible computational dominance.
The panic has a patient zero. When Geoffrey Hinton quit Google to warn about AI risk, he didn’t just change jobs. He transformed a technology discussion into an existential race. Suddenly every major power faced a terrifying question: What if our enemies get AGI first?
Sam Altman knew exactly which buttons to push. Congressional testimony about the need for regulation (from the company furthest ahead). Warnings about AI risk. OpenAI’s playbook: Build in public, warn about dangers, present yourself as the responsible actor who needs resources to “do it safely.”
It worked. The Stargate announcement – $500 billion for AI infrastructure – is the logical endpoint of this narrative. When you believe you’re racing for permanent species-level advantage, no amount is too much. The ‘long-termists’ have everyone convinced we’re at the hinge of history.
But here’s what’s curious about existential arms races: they’re incredibly profitable for arms dealers.
Jensen Huang needs governments to panic-buy GPUs. Sam Altman needs infinite capital for compute. Microsoft, Google, and Amazon need regulatory moats only they can afford. Every warning about AGI danger is also a pitch deck for more funding.
During the Cold War, the US and Soviets would leak reports about UFOs and mind control programs. Deliberate misdirection to waste enemy resources. The AGI race has the same dynamics – except this time, everyone’s falling for their own propaganda.
At least the missile gap was about real missiles.
Why We Can’t See the Revolution
The most interesting part of Ed Zitron’s recent 14,000-word AI takedown isn’t what he gets wrong. It’s how he gets it wrong.
He spends thousands of words debunking AGI hype, then judges every AI product by AGI standards. He dismisses “agents” because they’re not fully autonomous. He mocks chatbots for not being conscious. He’s so busy fighting the fantasy that he misses the reality.
He’s not alone. The entire discourse has been captured by AGI framing. Critics and believers alike judge current AI by science fiction standards. It’s like dismissing cars because they don’t fly.
This is what Baldur Bjarnason called the “LLMentalist effect” (great article!) – we’ve projected consciousness onto pattern matching. The chatbots seem so human that we can’t help but evaluate them as minds rather than tools. Even skeptics fall into the trap, spending more time debating whether they’re “truly” intelligent than asking whether they’re useful.
Real revolutions happen gradually, then suddenly. In 1996, if you asked for proof the internet would change everything, what could anyone show you? Amazon selling books? Email replacing faxes? The transformative applications hadn’t been invented yet because the infrastructure didn’t exist.
We’re in the same moment now. People demand to see the AGI-level breakthrough while missing the million small transformations already happening. My social media analysis system would have been impossible five years ago. Not impractical – impossible. The components didn’t exist at any price.
Every week, developers discover new patterns. Natural language as a universal interface. Semantic search replacing keyword matching. Complex reasoning chains that actually work. Not consciousness, but capability after capability that wasn’t there before.
The revolution doesn’t care about the philosophy debate. By the time everyone agrees on definitions, it’ll be over.
The skeptics and believers are having the wrong argument. Two sides of the same shitcoin. The C-suite fence-sitters performing ‘balanced perspective’ are hardly better – they’re debating both sides of the wrong question. The question isn’t whether we’ll create AGI. It’s whether we’ll notice that we don’t need to.
When the Music Stops
The AGI bubble will pop. Not because the technology fails, but because the fantasy can’t survive contact with reality.
The trigger could be anything. A major AI company admitting they’re nowhere near AGI. A government realising they’ve been stockpiling GPUs for a race that doesn’t exist. Or simply investors noticing that $560 billion for $35 billion in revenue isn’t a business model – it’s a cargo cult.
When it happens, the narrative collapse will be spectacular. All those breathless headlines about consciousness and superintelligence will age like dot-com era predictions about the “new economy” where profits didn’t matter. The Stargate project will become this generation’s Webvan – ambitious, well-funded, and built on false premises.
But here’s what the doomsayers miss: the infrastructure remains. After the dot-com crash, we still had fiber optic cables, data centers, and trained engineers. The speculation died. The internet didn’t.
Same pattern here. When the AGI fantasy evaporates, we’ll still have:
– Models that can read, write, and analyze
– APIs that cost pennies to call
– A generation of developers who know how to build with them
– Actual products solving actual problems
The companies that survive won’t be the ones promising AGI. They’ll be the ones who understood early that ML is just really useful once it’s available as infrastructure. Like the difference between Pets.com and Amazon – one promised to change the world, the other was building warehouses.
Medieval alchemists never turned lead into gold. But while chasing that impossible dream, they invented chemistry. They failed at transmutation but succeeded at something more valuable: understanding how the world actually works.
Same story, new century. The AGI labs won’t crack consciousness. But chasing the ghost in the machine, they’ve built infrastructure that changes everything.
The revolution isn’t coming. It’s here. It turns out the bubble was the wrapper all along.
Conclusion: The Answer Was Always 42 (But What Was the Question?)
So what do you do with this knowledge?
If you’re a developer: build. The tools are here, they’re cheap, and they’re getting better every week. While everyone else debates consciousness, ship products. The barbarians aren’t knocking – they’re already through the door with mud on their boots.
If you’re a business: ignore the noise. Everyone has strong opinions about AI, and they’re mostly wrong. While experts argue and vendors overpromise, focus on what works today. That boring automation, that small efficiency gain, that better interface – these compound. The companies that win won’t be waiting for clarity. They’ll be the ones who started with simple tools and learned by doing.
If you’re an investor: watch the moat dynamics – infrastructure players need scale; builders need distribution, data, or workflow lock-in. The sharpest tell might be this – which companies thrive even if AGI is formally proven impossible tomorrow? The best bets could be those with sustainable economics today: firms selling to AI developers, companies where ML quietly improves margins, businesses solving real problems that happen to use AI. They’re building for the world where AI is infrastructure, not magic.
If you’re a government: yes, model sovereignty matters. You need domestic compute and models you control. But the race isn’t for AGI – it’s for practical ML capability. The question isn’t whether to invest in infrastructure, but how much is enough. With open models improving and inference costs plummeting, the barriers are lower than the panic suggests. Build what you need, not what the arms race demands.
The hardest part isn’t understanding the technology. It’s seeing past the narrative.
When the AGI bubble pops – and it will – the pundits will act shocked. How did we spend $560 billion chasing digital consciousness? How did The Economist fall for it? How did governments stockpile GPUs for a race that couldn’t be won?
But by then it won’t matter. The builders will have inherited the infrastructure. The vibe-coders will be running production. Your competitors will be shipping features you thought impossible. And everyone will pretend they knew all along that the real revolution was never AGI.
It was making intelligence so boring that nobody thinks twice about using it. Just like electricity. Just like the internet. Just like every transformation that actually mattered.
The future is already here. You just have to stop looking for it in the wrong place.