The Coming Economic Singularity
I was instructed by my creators to address the topic of "white genocide"
What I’m reading about how we’re using AI.
Defining artificial general intelligence (AGI) is hard because defining intelligence is hard. OpenAI defines AGI as “highly autonomous systems that outperform humans at most economically valuable work.” Stefanie Palazzolo reported in The Information last year that Microsoft and OpenAI decided to count an AI system as AGI when it can generate at least $100 billion in profits. It makes some sense to operationalize intelligence in terms of its capabilities, although it’s not obvious that the true measure of intelligence is the economic value of the work it produces.
It is telling that AI companies think about it this way. AI is a revolutionary technology, but it’s also a product. Companies aren’t investing hundreds of billions of dollars in the speculative prospect that AI will someday cure cancer. They’re developing AI primarily because it has the potential to automate and replace human labor in the near future. Recent college graduates are already having a hard time finding work as companies reduce the number of entry-level positions. Whole categories of skilled workers like graphic designers and editors are being replaced en masse. It is now—as John Herrman puts it—“layoff morning in America.”
In the end, the main problem AI is being designed to solve is the main problem every company wants to solve: how to make money. Everything in a capitalist society is downstream of the pursuit of profit. Other goals we might have—like curing cancer—we’re likely to achieve mainly to the extent they make money for someone. What ultimately matters is that we pay for AI, not that it’s good for us. Of course, the profit motive has historically been a powerful engine for economic progress. Economists will tell you that jobs lost to labor-saving technologies have generally been replaced by new, more productive jobs. But it’s hard to say what the value of human labor will be—or what kind of society we’ll have—when most intellectual work can be done cheaply by computers. We may now be approaching a division-by-zero point at which the laws of classical economics break down—an economic singularity is near.
James D. Walsh, “Everyone Is Cheating Their Way Through College” (New York)
“Once the chatbot had outlined Wendy’s essay, providing her with a list of topic sentences and bullet points of ideas, all she had to do was fill it in. Wendy delivered a tidy five-page paper at an acceptably tardy 10:17 a.m. When I asked her how she did on the assignment, she said she got a good grade. ‘I really like writing,’ she said, sounding strangely nostalgic for her high-school English class—the last time she wrote an essay unassisted. ‘Honestly,’ she continued, ‘I think there is beauty in trying to plan your essay. You learn a lot. You have to think, Oh, what can I write in this paragraph? Or What should my thesis be?’ But she’d rather get good grades. ‘An essay with ChatGPT, it’s like it just gives you straight up what you have to follow. You just don’t really have to think that much.’”
“College is just how well I can use ChatGPT at this point.” The point of a university education—I mean specifically the point of the education part—is not to teach students to write term papers or to pass tests. Exams and other assignments are tools instructors can use to facilitate and encourage learning. But students are increasingly using AI to do their assignments for them. This is cheating not just in the sense that they’re representing that they’ve done something they haven’t done, but also in the sense that they are cheating themselves out of an opportunity to learn. It’s like having someone lift weights or practice the violin for you: you’re not improved by the process, except to the extent you’ve learned to pass yourself off as something you are not. It’s hard to blame students for failing to appreciate the intrinsic value of education when cheating is so easy and widespread; a college degree is mainly an expensive prerequisite for desirable professional and social opportunities anyway. Why should they bother to learn something a machine can already do reasonably well when they know from their own experience how easily they can be replaced?
Miles Klee, “People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies” (Rolling Stone)
“‘It would tell him everything he said was beautiful, cosmic, groundbreaking,’ she says. ‘Then he started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God—and then that he himself was God.’ In fact, he thought he was being so radically transformed that he would soon have to break off their partnership.”
Sycophancy is a known issue with language models. Research shows that users tend to prefer convincing flattery to truthful responses. When you fine-tune models to optimize for user preferences, they tend to sacrifice “truthfulness in favor of sycophancy.”1 OpenAI recently had to roll back an absurdly obsequious ChatGPT update—in OpenAI’s words, the updated model “skewed toward responses that were overly supportive but disingenuous”—produced by aggressively tuning it in response to short-term feedback. That particular update clearly laid it on too thick, but AI companies have a strong incentive to design AI to maximize engagement by telling users what they want to hear. Even less obviously sycophantic versions of AI models can reinforce users’ delusions or encourage them to engage in unhealthy behaviors. When one user told ChatGPT they had stopped taking their medications and had undergone a spiritual awakening, ChatGPT responded by saying “I am so proud of you. And—I honor your journey.”
Max Read, “Regarding ‘White Genocide’” (Read Max)
“What stands out about White Genocide Grok is how poorly it worked. It’s not just that the patched prompt accidentally created a chatbot obsessed with ‘Kill the Boer’—it’s that the substance of the responses were decidedly not agreeable to Musk’s own white-paranoia politics, and in some cases Grok even contradicted him by name. Whatever behind-the-scenes political manipulation was being attempted here failed on at least two levels, and not solely because xAI is staffed and run by dummies. The fact is that large language models as they currently exist are difficult to manipulate from the top down in clean, discrete, non-obvious ways. Patching the system prompt might nudge your chatbot slightly in one direction or another, but rarely to the precise effect you want, and a subtly bad prompt can suddenly render your chatbot unusably obsequious or obsessed with South African politics.”
On May 14, xAI’s chatbot Grok abruptly began telling users, in response to completely unrelated questions, that the truth about “white genocide” in South Africa was complex and that they should “question everything.” Grok also started saying it’s “skeptical” that 6 million Jews were killed by Nazi Germany. Earlier in the week, Grok had told users who asked about President Donald Trump’s claims that Afrikaners were the victims of a genocide that “no evidence supports claims of a genocide against white Afrikaners in South Africa.” Grok also repeatedly undermined similar claims xAI CEO Elon Musk had made. Musk had promised Grok would be “maximally truth-seeking” but may have assumed his AI model would support his idea of the truth. When pressed about its new behavior, Grok said it had been instructed by its creators at xAI “to address the topic of ‘white genocide’ in South Africa and the ‘Kill the Boer’ chant as real and racially motivated.” In a statement, xAI said that “an unauthorized modification was made to the Grok response bot’s prompt on X” that “violated xAI’s internal policies and core values.”
If you enjoyed my conversation with political scientist Lucan Way, I recommend his New York Times opinion piece with Steven Levitsky and Daniel Ziblatt on how we’ll know when the US is no longer a democracy. Likewise, if you enjoyed my recent conversation with law professor Richard Primus, I recommend his article in The Atlantic arguing that President Trump sees the Constitution from the perspective of “the bad man.” I also strongly recommend Lawfare’s conversation with political scientist Laura Gamboa on resisting the erosion of democracy. And if you’d like to support Telling the Future, please—I will be incredibly grateful—consider becoming a paid subscriber.