AI is developing so quickly it’s hard to keep up. Just last week, Arm, Microsoft, NVIDIA, Oracle, and OpenAI announced plans to invest $500 billion in AI infrastructure, and Chinese startup DeepSeek released two impressive open-source AI models. Here are a few big-picture thoughts on what I’ve been reading.
Sometimes it feels like we’re rushing headlong into the future, as if we’re not in the driver’s seat but just along for the ride. We never chose, as a society, to build AI. We’re doing it anyway, because the incentives are just too great not to. Even if we solve the technical challenges of AI development, we’ll still have to confront the political question of who controls it and who benefits from its use. If AI mostly makes Sam Altman richer—and doesn’t really benefit the rest of us—it will be a huge waste of time and energy. The truth is the technology is indifferent to whether it’s used to help us or to exploit us; someone profits either way.
Leopold Aschenbrenner, “Situational Awareness”
“Over the past year, the talk of the town has shifted from $10 billion compute clusters to $100 billion clusters to trillion-dollar clusters. Every six months another zero is added to the boardroom plans. Behind the scenes, there’s a fierce scramble to secure every power contract still available for the rest of the decade, every voltage transformer that can possibly be procured. American big business is gearing up to pour trillions of dollars into a long-unseen mobilization of American industrial might. By the end of the decade, American electricity production will have grown tens of percent; from the shale fields of Pennsylvania to the solar farms of Nevada, hundreds of millions of GPUs will hum.”
Last June, Leopold Aschenbrenner wrote that his connection to the world of cutting-edge AI research—Aschenbrenner was on the Superalignment team at OpenAI—gives him a special “situational awareness” of the nearness of superintelligence. You don’t have to buy into the hype around any particular AI model; you can simply project past progress into the future to see that AI is nearing human intelligence across almost every domain. If AI progress continues at its current pace—and just projecting trend lines is often a good way to forecast the future—it’s “strikingly plausible” that it will be able to do the work of an AI researcher by 2027. My intuition is that diminishing returns to intelligence—as the problems we have yet to solve get harder and harder and begin to push up against hard physical constraints—will mean that dramatic increases in AI intelligence won’t result in the extraordinary increase in capability many expect. We’re probably not a few orders of magnitude away from godlike power, but rather a few orders of magnitude away from impressive advancements. But powerful AI could still give the state that develops it a significant strategic advantage, setting the stage for a dangerous, high-stakes AI race between the US and China in coming years.
Bruce Sterling, “Preliminary Notes on the Delvish Dialect”
“The upshot of this effort is a new dialect. It’s a distinct subcultural jargon or cant, the world’s first patois of nonhuman origin. This distinctive human-LLM pidgin is a high-tech, high-volume, extensively distributed, conversational, widely spoken-and-read textual output that closely resembles natural human language. Although it appears as words, it never arises from ‘words’ — instead, it arises from the statistical relationships between ‘tokens’ as processed by pre-trained transformers employing a neural probabilistic language model. And we’ll be reading a whole lot of it. The effort to spread this new, nonhuman dialect is a colossal technical endeavor that ranks with the likes of nuclear power and genetically modified food. So it’s not a matter of your individual choice, that you might choose to read it or not to read it; instead, much like background radioactivity and processed flour from GMO maize, it’s already everywhere.”
Science fiction writer Bruce Sterling says that the language large language models (LLMs) use has its own uncanny character, “like the deep-woods twittering of hallucinatory machine-elves.” Sterling calls this language “Delvish” because of the models’ peculiar tendency to use certain words—like “delve”—more often than humans do. AI models sound like this because they’re tuned—in much the same way ad copy is—to seem “honest, helpful, and harmless.” It’s a put-on. Delvish serves not so much to communicate useful information or provide entertainment—although it sometimes does those things incidentally—as to be a cheap substitute for them. It’s the ultra-processed food of communication: not nutritious but pervasive.
Max Read, “Drowning in Slop” (New York)
“In the nearly two years since, a rising tide of slop has begun to swamp most of what we think of as the internet, overrunning the biggest platforms with cheap fakes and drivel, seeming to crowd out human creativity and intentionality with weird AI crap. On Facebook, enigmatic pages post disturbing images of maimed children and alien Jesuses; on Twitter, bots cluster by the thousands, chipperly and supportively tweeting incoherent banalities at one another; on Spotify, networks of eerily similar and wholly imaginary country and electronic artists glut playlists with bizarre and lifeless songs; on Kindle, shoddy books with stilted, error-ridden titles (The Spellbound Quest: Students Perilous Journey to Correct Their Mistake) are advertised on idle lock screens with blandly uncanny illustrations.”
There has always been a lot of crap on the internet, and in some ways the latest wave of slop is not that different from previous waves: generative AI is simply a new tool for producing socially worthless spam. But AI can propagate this spam in new ways and at a new scale. There’s now a whole ecosystem of fake scientific journals that publish fake papers with fake citations. Amazon sells algorithmically generated books filled with gibberish, written by algorithmically generated authors. Search engines serve us fake news and fake facts. Generative AI has become the technological engine of a massive economy built around making us look—around “monetizing engagement”—rather than producing anything of value. The rising tide of slop devalues everything; it breeds a cynical nihilism that allows authoritarianism to grow. (Max Read has a fantastic Substack you can read here.)
Let me know in the comments what other articles about AI you think are particularly worth reading. Telling the Future depends entirely on the support of readers, so if you enjoyed this—or just found it briefly diverting!—please consider buying a paid subscription. You can also always help spread the word by sharing Telling the Future with others.