The End of the Centaur Era
How The Intelligence Curse changed my mind about the future of work—someone using AI may take your job, but AI may take theirs, and undemocratic institutions may take the future
Imagine graduating from university today. Top marks, previously in-demand skills. You do everything right.
But job offer after job offer evaporates.
Companies may be polite, apologetic even. "We've frozen hiring while we lean into AI," recruiters might say.
So you turn to freelancing. Only to find that AI is rapidly eating freelance jobs too.
So you embrace AI. You’ve heard “AI won’t take your job, someone using AI will.”
Then that comfortable fiction gets shattered by The Intelligence Curse, a new essay that forced me to rethink my assumptions about the future of work and AI’s impact on society.
Stark Vision of a Jobless Future
The essay describes how AI won’t just automate tasks but will hollow out the corporate pyramid itself.
It will start at the base: entry-level jobs where humans add the least marginal value beyond what a model can do. But it won’t stop there. As it improves, it will climb upward—eating into middle management, then specialized knowledge work, then decision-making roles.
Meanwhile, the power will shift. In a world where even star researchers and brilliant strategists can be copied and scaled infinitely, capital—not labor—will become the main lever and bottleneck. The owner of the datacenter, not the owner of the resume, will hold the leverage.
Then, with humans adding little economic value beyond consumption, we could end up resembling many of today’s oil-rich nations. Just as they sometimes fall into the "resource curse"—neglecting their citizens because they don't rely on them economically—future firms and states could become indifferent to human welfare once intelligence becomes a cheap, abundant resource.
Where I Used to Stand
I used to believe, quite strongly, that early adopters could ride AI to long-term success. Perhaps for decades. I believed that the key was simple: Learn faster. Adapt faster. Use AI better than anyone else.
In the short term, perhaps for the next five years, I think this could still be true. The people who know how to maximize their use of AI tools today have enormous leverage. They're outproducing their peers. They're building faster, iterating faster, compounding faster.
But the Intelligence Curse forces me to confront the likely end of this dynamic. The better AI becomes—not just in generating words, but in reasoning, planning, adapting—the less room remains for human value-add. AI will eat the pyramid of work from the bottom up.
Today, humans plus AI can outperform AI alone in many domains. But not forever. The same way centaur chess players—humans paired with engines—once beat computers, but today can't survive against Stockfish or Leela Zero, the economic "centaur" will eventually be eclipsed.
When the best standalone AI outperforms the best human-AI team, humans cease to be a necessary input for most economically valuable activities. The centaur era, thrilling and empowering as it feels now, is living on borrowed time. As AI continues to improve, humans will become slower, less precise, less reliable economic agents compared to autonomous systems.
And for those of us who take the near-term prospect of AGI (my guess: early 2026 at the latest) seriously, we must also be serious about the conclusion: There comes a point when even the best centaur loses to pure AI.
What We Must Do Now
The Intelligence Curse doesn’t simply diagnose the problem; it also offers ideas for how society can respond: averting catastrophic misuse without choking innovation, diffusing capabilities widely enough to prevent monopoly without opening Pandora’s box to every bad actor, and democratizing institutions so citizens retain meaningful power in an age when traditional labor loses economic clout.
For me, it highlighted four things we should do:
Recognize the centaur era for what it is: a golden but temporary window. If you aren’t using AI to extend your leverage today, you are falling behind. If you are, understand: it probably won’t protect you forever.
Build structures that capture AI's value for humans collectively. Investments tied to broad market indexes. Public wealth funds modeled on successes like Norway’s oil fund. Ownership mechanisms that link average citizens to the upside of intelligence capital. And note: you can start now by investing in a broad, diversified index fund, thereby sharing in any concentration of corporate wealth yourself.
Defend competition but regulate risk. We must keep innovation decentralized enough to prevent monopolies—but we must also hard-gate access to truly catastrophic capabilities. Open source is a powerful force, but it needs real-world guardrails. In addition to AI alignment, we should invest in physical barriers to AI misuse, such as preventing people from getting access to lab materials with which to make bioweapons.
Rewire our culture before it’s too late. Meaning, status, and purpose must come from something beyond wage labor. We need new social contracts, new narratives of contribution and achievement.
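The index-fund suggestion above rests on simple compound-growth arithmetic, which a short sketch makes concrete. The 7% annual real return and the $10,000 stake here are hypothetical assumptions for illustration, not predictions; actual market returns vary widely.

```python
# Sketch: compound growth of a one-time, broad-index investment.
# The rate and principal are illustrative assumptions only.

def future_value(principal: float, annual_rate: float, years: int) -> float:
    """Value of a lump sum compounded once per year."""
    return principal * (1 + annual_rate) ** years

if __name__ == "__main__":
    # A hypothetical $10,000 stake at an assumed 7% real annual return.
    for years in (10, 20, 30):
        print(f"{years} years: ${future_value(10_000, 0.07, years):,.0f}")
```

The point is not the specific numbers but the shape of the curve: ownership of productive capital compounds, while wages do not, which is why broad ownership mechanisms matter in the scenario the essay describes.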
How much time do we have? I don’t think anyone knows for certain. AI is progressing fast, but physical constraints like data centers, GPUs, and energy could slow it down. Alternatively, algorithmic improvements could speed it up.
So it’s probably best to plan for various scenarios, and work to bring about the best.