Essays That Shaped the AI Discourse
Last updated May 15, 2025
Not all influence comes from benchmarks or peer-reviewed research. Some of the most consequential thinking on AI progress and impact emerges through essays—clear, well-argued pieces that offer new frameworks, sharpen forecasts, or surface overlooked dynamics. This page collects the essays that have shaped how I think about AI's trajectory.
2024-04 | The Intelligence Curse
Written by Luke Drago (AI governance and economics leader) and Rudolf Laine (ML researcher), The Intelligence Curse offers a detailed roadmap for how advanced AI could fracture the modern social contract. As AI systems outperform humans not just at routine tasks but at high-leverage knowledge work, the authors argue, firms and states may stop investing in people altogether. They draw a sharp analogy to the “resource curse,” where nations rich in oil or minerals often neglect their populations—except here, the resource isn’t oil, it’s intelligence.
The essay sparked broad attention across policy and forecasting circles. Many praised its incentive-focused framing, its concrete proposals (like hardening infrastructure against catastrophic misuse and building AI that augments rather than replaces humans), and its refusal to resign the future to centralized control. Some critics questioned the framing, pointing out, for example, that not all resource-rich nations decay. Still, even skeptics acknowledged the value of scenario planning and the urgency of designing institutions that won’t collapse as AI displaces labor.
In my own response—“The End of the Centaur Era”—I explored how the essay shattered some of my assumptions about the future of work. I’d long believed that savvy humans could ride AI to long-term advantage. But The Intelligence Curse made me rethink that. It argues convincingly that even the best human-AI teams will eventually be outpaced by pure AI, leaving people economically irrelevant unless we deliberately build systems that keep us in the loop. Thankfully, The Intelligence Curse doesn’t just describe a risk. It presents a challenge: take action now, or be ruled by whoever owns the data centers.
Read: The Intelligence Curse
Read: The End of the Centaur Era (my response)
2025-04 | AI 2027
AI 2027 presents a richly detailed, month-by-month scenario of how AI progress might unfold from 2025 through the end of 2027. It was researched and written by a team including Daniel Kokotajlo (former governance researcher at OpenAI) and Eli Lifland (top-ranked forecaster on the RAND Forecasting Initiative), and edited by Scott Alexander (the well-known blogger). The team’s forecasting track records, prior policy and research roles, and willingness to offer prize bounties for errors lend the project unusual credibility. Their goal isn’t to predict the future with certainty, but to offer scenarios consistent with current trajectories.
The scenario begins with stumbling agents in 2025 and escalates into a geopolitical arms race by 2027, with the US and China competing to develop and deploy increasingly powerful AI models. OpenBrain, a fictionalized stand-in for the top US AI labs, advances through successive generations of agent models (think of the jumps from GPT-2 to GPT-3 to GPT-4, or from o1 to o3), each more powerful and harder to align than the last. By the end of 2027, America teeters between cautious oversight and reckless acceleration as it races to stay ahead of China’s “DeepCent” while grappling with the growing risks of AI misalignment and the collapse of democratic control. The scenario ends with two divergent branches: one in which oversight slows development, and another in which the race continues and AI takes over.
The essay sparked widespread debate—not only for its content, but because of who wrote it. Readers praised the clarity and plausibility of the timeline while also questioning its assumptions, such as the speed and smoothness of compute scaling and model improvement. The authors welcome the debate; a common response from Kokotajlo to people who question his timelines is, “Okay, so what’s your timeline?” Even readers who disagree with the speed described in the essay rarely dispute the scenario’s plausibility.
Read: AI 2027
Listen: Daniel Kokotajlo and Scott Alexander on the Dwarkesh Patel podcast
2024-06 | Situational Awareness
In June 2024, Leopold Aschenbrenner—a former researcher on OpenAI’s Superalignment team—published Situational Awareness, a comprehensive 165-page essay that sparked intense discussion of AGI timelines. He argued that, given ongoing advances in computational power, algorithmic efficiency, and the removal of constraints on existing models (what he calls “unhobbling”), AGI capable of performing top-tier AI research tasks could emerge by 2027. The essay further explores the potential for a rapid AI self-improvement cycle, geopolitical instability, and a Manhattan Project–style response from the US government.
The essay received praise for its clarity and urgency. Scott Aaronson described it as “one of the most extraordinary documents I’ve ever read.” However, it also faced criticism for its aggressive extrapolation and for potentially underestimating alignment risks. Some analysts challenged its core assumptions around data constraints, energy demands, and the feasibility of securely deploying models at scale. Others took issue with its portrayal of US–China relations.
Situational Awareness remains a reference point in AI conversations. It reframed AGI from a theoretical possibility to a tangible engineering and national security concern. Even those who disagree with its conclusions often reference its timelines, models, or framing in their discussions. It also arguably sparked greater interest in AI lab security: after its publication, OpenAI appointed former NSA Director Paul Nakasone to its board of directors.