Telling the Future
Talking About the Future
Peter Wildeford on Forecasting AI Progress

"I'm coming around to the maybe-lines-just-keep-going-up-indefinitely thing"

In this episode of Talking About the Future, I talk with forecaster Peter Wildeford about AI progress. Peter is a co-founder of both the Institute for AI Policy and Strategy and Rethink Priorities, and is on the board of Metaculus. He also writes his own Substack newsletter and is on Twitter. You can listen to our full conversation—and find out how soon we expect AGI to arrive—using the audio player above or on most podcast platforms. Excerpts from our conversation, edited for clarity, are below.

So let's start by talking about AI. It seems like we might be at a pivotal moment right now. There's a lot of buzz coming out of Silicon Valley and elsewhere. What do you think the chance is that we develop artificial general intelligence (AGI) by 2030? And how would you even go about answering a question like that?

PW: Yeah, I think this is a pretty difficult question, actually. It really gets at the heart of what it means to forecast something. It’s pretty easy to throw out a number there. But you're probably going to want to know what that number means and where it comes from. Because I think people throw out “AGI” like a buzzword, but there are a lot of different definitions, and how you think about the question really depends on what you mean by AGI. So I think my first answer would just be, what does AGI mean to you?

I was actually hoping to ask you that question because it's a problematic term in a lot of ways. It's defined in fairly fuzzy ways, including as something that makes a certain amount of money. I use the term because that's what people are talking about. I think, broadly, I would say it probably means AI that has the capacity of a human being or of a smart human being across many domains, but that is still pretty vague. And sometimes you hear people say things like, it should be able to win a Nobel Prize in every field, which to me—that's superintelligence, right? Because I don't know if John von Neumann could do that. How would you define AGI?

PW: I've definitely seen a lot of different definitions out there. So you can talk about how many different fields it can automate and replace human labor in. But then you might have questions about how well it can perform in those fields, like what degree of quality, whether it can earn Nobel Prizes, and whether it's cost-competitive with humans or still more expensive. Or maybe it's much cheaper. But maybe for this conversation we can think about the labor impacts. Maybe an AGI system can replace, say, 99% of all current remote worker jobs as of today, March 11th, 2025. And, yeah, I guess you might have a question about whether there will be new jobs that humans can do—and I guess let's also say the AGI has to do this in a cost-competitive way. So, basically, if you got this AGI, I feel like it would be hard to say there still would be a lot of human jobs, because basically any remote job a human could do, this AGI could do better and cheaper....

March 14, 2025 tweet by Kevin Roose reading, "Yep, I'm AGI-pilled." Attached is a preview of an article entitled, "Why I'm Feeling the A.G.I."

There are things in the human world like population growth where there are larger, stable trends we can potentially predict further in advance. Some people argue that projecting AI progress is a little bit like that, that you can draw these trends—as we scale up an order of magnitude, you can see benchmarks improving linearly, and we can extrapolate out a few years. This is something Leopold Aschenbrenner tried to do in the “Situational Awareness” paper he wrote last year, basically arguing—and a lot of people argue this—that if you keep extrapolating those benchmarks, sometime in the next 3-5 years you're going to get to something like superhuman performance across most of them. I'm a little skeptical that that works, partly because I'm not sure the scaling continues indefinitely, but also partly because I think there may be some kind of thing that's not captured by performance on some of those benchmarks. Do you think that's right? What is your uncertainty about whether or not that comes to pass?

PW: I'm certainly giving it a lot more respect now than I used to, because I was calling the top on different scaling things for the past few years, and scaling nonetheless keeps continuing despite my objections. And so now I'm coming around to the maybe-lines-just-keep-going-up-indefinitely thing. You may have heard of Moore's Law, this idea that the transistor count on computer chips doubles every so often. The crazy thing is that Moore's Law has held up for multiple decades, but it was originally designed with, I think, just like six data points, and then it was just extrapolated into the future.
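For readers who want to see what that kind of extrapolation involves, here is a minimal sketch in Python. The years and transistor counts below are illustrative round numbers, not Moore's actual 1965 data; the point is just how little machinery a straight-line fit on a log scale requires.

```python
# Minimal sketch of a Moore's-Law-style extrapolation: fit a straight line
# through a handful of points in log space, then project it forward.
# The data below are illustrative round numbers, not Moore's 1965 dataset.
import numpy as np

years = np.array([1959, 1962, 1963, 1964, 1965, 1966])
transistors = np.array([1, 8, 16, 32, 64, 128])  # hypothetical counts per chip

# Fit log2(transistor count) as a linear function of year.
slope, intercept = np.polyfit(years, np.log2(transistors), 1)

def projected_count(year):
    """Extrapolate the fitted trend to an arbitrary future year."""
    return 2 ** (slope * year + intercept)

print(f"Implied doubling time: {1 / slope:.1f} years")
print(f"Projected transistors per chip in 1975: {projected_count(1975):,.0f}")
```

Whether the analogous straight lines for AI scaling keep holding is exactly the question at issue here.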

Moore’s Law has held through massive technological transformations. And as we keep thinking we're reaching a limit, somebody finds a new breakthrough that manages to keep it going.

PW: Yeah. It's kind of crazy. Trust me, if I could put six points on a graph, draw a straight line, and then just be right for 40 years, I'd be a superbillionaire by now. It’s just really unfathomable that this would actually work. Like, it never really works. But like you said, every time people think Moore's Law is over, someone comes up with a new trick, and it just remains on trend, as if powered by the Lord Almighty himself. And who knows if scaling is kind of like this. Maybe AI really does just stick to these lines, and these lines keep going up, and every time we think they're about to stop, someone comes up with a new line. I really did think pre-training was running its course and that this would lead to a model capability slowdown in 2024, and I was almost right, except for those meddling reasoning models that emerged out of seemingly nowhere and started putting up really great numbers on the evals again, and now we're off to the races a second time, with o1 giving way to o3, and then who knows what o4 will do. Maybe this will run its course sometime soon, but then some other form of training that we've never even heard of, that doesn't even exist yet, comes out of nowhere next year. And then models keep on chugging. I guess this is part of why it just feels really hard to make great judgments about this with a high amount of accuracy. So as a forecaster thinking in terms of distributions, I feel like my distribution ends up all over the place, from this is going to fizzle out this year to we are going all the way to superintelligence in under a decade. Either one feels totally plausible to me right now....
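To make the thinking-in-distributions point concrete, here is a toy sketch of a forecast that mixes both scenarios Peter describes: one where scaling fizzles soon and one where rapid progress continues. Every weight and parameter below is made up for illustration and should not be read as his actual forecast.

```python
# Toy sketch of a wide forecast distribution built as a mixture of two
# scenarios. All weights and parameters are illustrative, not a real forecast.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Scenario A: scaling fizzles; transformative AI arrives decades out, if ever.
fizzle_years = 2025 + rng.exponential(scale=40, size=n)

# Scenario B: scaling keeps working; arrival within roughly a decade.
fast_years = 2025 + rng.gamma(shape=2.0, scale=3.0, size=n)

# Mix the two scenarios with an illustrative 50/50 weight.
arrival = np.where(rng.random(n) < 0.5, fast_years, fizzle_years)

for q in (10, 25, 50, 75, 90):
    print(f"{q}th percentile arrival year: {np.percentile(arrival, q):.0f}")
```

The wide spread between the low and high percentiles is the point: both "soon" and "not for a very long time" carry real probability mass.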


One big question is, when we get to this threshold, what's going to happen next? A lot of people have pretty fantastic, almost religious ideas about what's about to happen. What do you think? Do you believe in what you’d call a fast takeoff scenario, where AGI recursively self-improves and rapidly becomes superintelligent?

PW: I guess it depends on how rapid. I think some people, like Eliezer Yudkowsky, when they're talking about fast takeoff with recursive self-improvement, are thinking about an AI that keeps figuring out how to improve itself and then becomes rapidly superhuman in a matter of months, if not a year. Other people might think it self-improves the way a human does, which is to say it takes decades, if not centuries, to actually make giant leaps in scientific progress. I'm kind of somewhere in between. I think it won't be super easy for an AI to recursively self-improve. But you could see, if this AI is replacing a lot of human jobs and automating everything, that one of the jobs that gets replaced is work on making algorithmic progress in AI systems and work on designing better and better chips. And, of course, AI systems are not going to be magic. They're still going to be limited by the physical realities of hardware and the amount of time it takes to build out these factories. Maybe they'll be more efficient at this construction, but they will still have some bottlenecks in the real world. But if you're automating all AI research and development, and you can replace your existing 1,000-person human workforce with 1,000,000-plus AI systems working 24/7 in perfect coordination without needing to eat or sleep, I think it's hard to say you're not going to get tremendous returns out of that. So some sort of fast—faster than historical base rates, much faster than historical base rates—kind of takeoff would be my median expectation, but not a super-Yudkowsky-level takeoff. Because, like I said, I still think there are going to be some real-world constraints on just how much R&D you can actually do....
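As a rough back-of-the-envelope on that 1,000-human versus 1,000,000-AI framing, the raw jump in research effort is very large even before any coordination gains. The arithmetic below is my own illustration, and the diminishing-returns exponents are made-up placeholders for the real-world bottlenecks Peter mentions.

```python
# Back-of-the-envelope arithmetic for the automated-R&D scenario described
# above. The diminishing-returns exponents are illustrative placeholders for
# real-world bottlenecks (hardware, energy, construction), not estimates.
human_researchers = 1_000
human_hours_per_week = 40

ai_systems = 1_000_000
ai_hours_per_week = 24 * 7  # running around the clock

raw_multiplier = (ai_systems * ai_hours_per_week) / (human_researchers * human_hours_per_week)
print(f"Raw research-effort multiplier: {raw_multiplier:,.0f}x")  # 4,200x

for exponent in (1.0, 0.5, 0.3):
    print(f"Returns exponent {exponent}: ~{raw_multiplier ** exponent:,.0f}x effective speedup")
```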

March 17, 2025 tweet by Ethan Mollick reading, "The amount of capability overhang in current AI systems is hard to overstate, even in narrow areas like vision & image creation.  If AI development stopped today (and no indication that is happening), we have a couple decades of figuring out how to integrate it into work & life."

I don't want to be known as a big regulation guy, but I do think government has some function. There are some public problems that have to be solved collectively, and that's what governments should do. I'm maybe not super optimistic about our particular governments. I'm also not super optimistic about the way the international system is handling this. So if we're going to get something transformative in the next few years, I would say the possibility that we have a strong international regime that is going to ensure that it goes well is almost 0%. Is that too pessimistic?

PW: Looking back at history, I think every time humans have encountered problems, we've tended not to deal with them in an extremely foresighted manner. We tend to tackle them right before it's too late. There wasn't a ton of action on COVID until case counts started getting high. And then there were massive lockdowns worldwide—I think some people have argued that governments even did too much at that point. It’s arguable how much the lockdowns helped. I think they helped a good amount. But that's an example. With nuclear weapons, there was widespread coordination around the Non-Proliferation Treaty, the test ban treaties, and other forms of arms control. But this only happened after the United States dropped two bombs on Japan. Prior to that, there wasn't really any regulation, any international arms control, so you really did need that giant demonstration. Maybe one form of hope is the Biological Weapons Convention, which led to a lot of countries destroying their biological weapons programs and stockpiles. Luckily that was accomplished without needing some sort of really compelling demonstration. And even with climate issues, where people tend to have a fairly pessimistic view, we have actually had some big geopolitical coordination wins, with the Montreal Protocol, for example. I think a lot of the worst-case scenarios, like massive 6°C-plus heating events, have been avoided through geopolitical coordination, even if we haven't fully tackled or solved climate change. So I have some optimism that once we've tried everything else, we will finally start doing the right thing. It's the muddle-through theory, so to speak.


This newsletter depends entirely on reader support, so if you’d like me to do more interviews like this, please consider becoming a paid subscriber. You can also support my work by giving Talking About the Future five stars on your favorite podcast platform and by sharing this post with others.

March 3, 2025 Bluesky post by Ed Zitron reading, "Word has it DOGE has terminated the five men protecting the cursed seal that holds the five dark sages in their slumber"
