9 Comments
Jul 29, 2023 · edited Jul 29, 2023 · Liked by Robert de Neufville

You've treated the five scenarios as mutually exclusive, but they're not. It's like when Fry, on the moon, asks whether it'll be minus 173 "Fahrenheit" or "Celsius", and the moon farmer responds, "first one, then the other".

I think the most likely scenarios that emerge out of AGI involve Futurama/Dystopia followed (possibly soon after, but probably decades later) by Singularia/Paperclipalypse:

https://dpiepgrass.medium.com/gpt5-wont-be-what-kills-us-all-57dde4c4e89d

In the meantime, I keep wondering how the disinformation and scamming potential of generative AI will play out in the absence of tools to raise the sanity waterline (https://forum.effectivealtruism.org/posts/fNKmP2bq7NuSLpCzD/let-s-make-the-truth-easier-to-find)... any thoughts?

author

That's absolutely fair, David.

I really didn't specify these scenarios in a rigorous, clearly exclusive way. I treated them as near-future endpoints and glossed over the details of transitional states. I agree that a temporarily stable Futurama or Dystopia could eventually evolve into a Singularia or Paperclipalypse, but I considered that beyond the scope of this forecast. I think most of the people who predict a Singularia or Paperclipalypse think it will happen fast, but, like you, I don't think that's necessarily the case, for the same reasons I don't think either scenario is inevitable.

It's not obvious to me that disinformation will win, although I am concerned. We've been getting scammed throughout recorded history, but as scams have gotten more sophisticated, we've gotten more sophisticated about identifying them. I'm skeptical that there's any procedure we can use to reliably ensure we're getting to the truth, but we can and should work to develop better information hygiene.

Jun 17, 2023 · Liked by Robert de Neufville

I love your concrete forecasts on Scott Aaronson's 5 scenarios. I would give a much higher chance of Paperclipalypse and Singularia, but I can't say exactly why; the arguments are very theoretical and not based on recent developments.

author

Thanks, Dan. I'd be interested in why you think the Paperclipalypse and Singularia scenarios are more likely. I barely scratched the surface of the topic here, but I'm much more skeptical than many people of the idea that AI would be able to rapidly improve itself to the point that it was smarter not just than a single person but than humanity collectively. Increasing computing power by absorbing hardware overhang and improving algorithms will probably yield rapid improvements initially, but I strongly suspect there will be diminishing returns to both fairly quickly. And I think further improvements will require not just rearranging bits, but actually building infrastructure, the speed of which will be limited by hard physical constraints. Sometimes people imagine that AI will increase global GDP growth to 30%/year or more, but that seems extremely unrealistic to me. I just don't think there are that many easy gains to be made even for a superintelligent manager.


I find it useful to think back a century to 1923. Those alive at that time couldn't have imagined much of what was to come through the rest of the 20th century. I suspect we're in the same position today, maybe more so.

Everyone is focused on AI right now because it's the shiny new toy of the moment. But the real threat may not come so much from AI itself as from whatever emerges from an AI-accelerated knowledge explosion. We already don't know how to make nuclear weapons, AI, and genetic engineering safe. And we're likely going to keep adding more such challenges onto the pile, one on top of another.

If the pattern of the last century continues, we'll likely become ever more prosperous while living in an ever more dangerous environment.

Jun 17, 2023 · Liked by Robert de Neufville

What a great post, Robert!

author

Thanks, Al!


You write, "AI will probably improve the world in many ways, but will probably also have some dramatic downsides."

And when the downsides become large enough, they will have the potential to erase the improvements.

Nuclear weapons have sobered the great powers and discouraged them from engaging in direct conflict with each other, which may have prevented a repeat of WWII, a huge benefit. But the price tag is that a single human being can now destroy the modern world in a matter of minutes. And if that price tag remains in place indefinitely, sooner or later the bill will likely come due.

The issue of scale is all-important, because it makes thinking like the following obsolete...

You write, "My guess is that on balance the benefits of AI will outweigh the costs"

Having more benefits than costs doesn't matter if the cost is the collapse of the system that creates the benefits.

It's unknown whether AI in particular will crash the system. What is known is that AI will act as a further accelerant to the knowledge explosion. And if the knowledge explosion continues to accelerate without limit, sooner or later something will emerge from that process that will crash the system.

It's unknown exactly how much change human beings can successfully manage. But if change continues to accelerate without limit, then whatever our limits of ability may be, sooner or later those limits will be exceeded.

author

Certainly, if the system collapses then the costs won't outweigh the benefits! But I don't think the system is likely to collapse in the foreseeable future and don't believe we'll see indefinite exponential growth. Of course, our system won't last forever; nothing does.
