Something Big
AI comes at you fast

We need to take developments in AI seriously because they’re about to affect all of us.
Something big is supposedly happening. In a recent viral essay, an AI company founder named Matt Shumer compared the current moment to February 2020. A few of us saw the COVID pandemic coming then, but most people had no idea everything was about to shut down. As someone on the front lines of AI development, Shumer has watched the latest models automate the technical aspects of his job and is convinced they’re about to automate our jobs too. They’re able not just to perform a few narrow tasks but to do almost any cognitive work humans can do. As hard as it might be to believe, Shumer says, we’re on the verge of a transformative shock “much, much bigger than COVID.”
Right now, for most of us, AI is just a moderately useful toy. We might use it to draft an email or make a social media avatar. We have to squint to see its impact on major macroeconomic indicators, and it’s easy to dismiss AI hype. But the latest AI tools have dramatically increased the speed of software development in recent months, and they can competently perform a wide range of tasks that until recently had to be done by a human. I think Shumer is probably exaggerating only slightly when he says that “if it even kind of works today, you can be almost certain that in six months it'll do it near perfectly.”
If you’re a believer, this looks like the start of a “takeoff” scenario in which AI development accelerates exponentially as AI becomes increasingly able to improve its own capabilities. One way or another—I’m personally skeptical we’ll get sustained exponential growth—AI tools are likely to keep getting better fast for some time. As they’re more widely and effectively deployed, we’re going to see a burst of dramatic progress on any problem that can plausibly be solved with software. Anything that requires the same kind of expert judgment as writing code—and that probably includes a lot of geopolitical forecasting—is likely to be automated in the near future. It’s hard to say how many jobs will disappear completely, but the nature of many of our jobs will change. We’ll probably spend more time supervising AI agents and less time performing routine tasks ourselves. It won’t happen overnight, but I think we should be ready for something big.
Eddy Keming Chen, Mikhail Belkin, Leon Bergen, David Danks, “Does AI Already Have Human-Level Intelligence?” (Nature)
“We think the current evidence is clear. By inference to the best explanation—the same reasoning we use in attributing general intelligence to other people—we are observing AGI of a high degree. Machines such as those envisioned by Turing have arrived.”
In a comment in Nature, a group of professors with backgrounds in philosophy, computer science, and linguistics, led by Eddy Keming Chen, argue that we have achieved artificial general intelligence (AGI). While AI’s intellectual strengths and weaknesses differ from ours, it nevertheless exhibits human-level general intelligence. Large language models (LLMs) can no longer reasonably be dismissed as “stochastic parrots” that just generate plausible-sounding sentences. They’ve already met or exceeded most of our expectations for what AGI could do, and they’ve passed many of the strongest tests we can devise, to the point where we’ve had to design new tests to highlight their weaknesses. If anything, LLMs are strong evidence that imitating intelligent behavior is a powerful technique for developing intelligence. But I don’t think it’s just insecurity about our place in the universe that makes us reluctant to declare current AI models “AGI.” Their performance is not merely uneven but deficient in crucial ways. In particular, they can’t reliably generalize about things they’ve never seen before and struggle to solve problems they haven’t extensively trained on. As a result, they tend to fail in real-world situations we feel any intelligent system should be able to handle easily. They are still—at least for the moment—better at relatively narrow problem solving than at general-purpose reasoning.
Dario Amodei, “The Adolescence of Technology”
“I think the best way to get a handle on the risks of AI is to ask the following question: suppose a literal ‘country of geniuses’ were to materialize somewhere in the world in ~2027. Imagine, say, 50 million people, all of whom are much more capable than any Nobel Prize winner, statesman, or technologist. The analogy is not perfect, because these geniuses could have an extremely wide range of motivations and behavior, from completely pliant and obedient, to strange and alien in their motivations. But sticking with the analogy for now, suppose you were the national security advisor of a major state, responsible for assessing and responding to the situation. Imagine, further, that because AI systems can operate hundreds of times faster than humans, this ‘country’ is operating with a time advantage relative to all other countries: for every cognitive action we can take, this country can take ten.”
“Humanity,” Anthropic CEO Dario Amodei writes, “is about to be handed almost unimaginable power, and it is deeply unclear whether our social, political, and technological systems possess the maturity to wield it.” Amodei often compares powerful AI to “a country of geniuses in a data center” with the potential—whether because it escapes our control, is misused, or is simply disruptive—to do great harm. You don’t have to believe we’re close to producing AI geniuses to see the problem: it’s hard enough to ensure that a country of ordinary humans acts in a way that’s aligned with our collective interests. Amodei’s discussion of the risks is measured and thoughtful, but it still seems naive. He believes we need reasonable government regulation to ensure that AI is developed safely and isn’t misused. But if AI geniuses are coming as soon as he thinks, we’re probably not going to get meaningful AI safety regulation—which most AI labs oppose anyway—in time to make a difference. Amodei likewise wants the US to have the most powerful AI to use in defense of democracy, but he doesn’t seem to recognize that the US is no longer clearly on the side of democracy.
Anthropic—which has aggressively courted military contracts—signed a $200 million deal with the Defense Department last July to integrate its Claude models into classified military systems. But not long after Amodei posted his essay, Anthropic refused to renegotiate that contract to allow the military to use Claude for “all lawful uses.” Specifically, the Defense Department wanted Anthropic to remove language that would prevent Claude from being used for mass surveillance of Americans or in fully autonomous weapons systems—both uses Amodei had singled out in his essay as potentially dangerous. After Anthropic refused, President Trump announced he was directing every federal agency to stop using Anthropic’s tools. The Defense Department also designated Anthropic a “supply chain risk to national security”—the first time that designation had been applied to an American company—which may prevent companies that work with the Defense Department from doing business with Anthropic.
The Defense Department says it simply doesn’t want private companies to dictate how it uses their products. But The Atlantic reports that the military may in fact intend to use AI to analyze bulk personal data collected from Americans and to make targeting decisions on its own. These would arguably be “lawful uses,” since there’s virtually no law governing new AI capabilities. OpenAI has in any case already agreed to provide its tools in Anthropic’s place. The government is probably right that it shouldn’t be up to private companies to determine how we can use AI, but it shouldn’t be up to the current administration alone either. We need new laws to address the new uses of AI before it’s too late.
Thank you for reading Telling the Future! Related posts include “Through AI, Darkly,” “The Coming Economic Singularity,” and “The Future Starts Now.” I also recommend Dario Amodei’s recent conversation with Dwarkesh Patel and Gideon Lewis-Kraus’ profile of Claude in The New Yorker. If you found this post valuable, please consider supporting my work by becoming a paid subscriber.