9 Comments
Jul 29, 2023 · edited Jul 29, 2023 · Liked by Robert de Neufville

You've treated the five scenarios as mutually exclusive, but they're not. It's like when Fry is on the moon and asks whether it'll be minus 173 "Fahrenheit" or "Celsius," and the moon farmer responds, "first one, then the other."

I think the most likely scenarios that emerge out of AGI involve Futurama/Dystopia followed (possibly soon after, but probably decades later) by Singularia/Paperclipalypse:

https://dpiepgrass.medium.com/gpt5-wont-be-what-kills-us-all-57dde4c4e89d

In the meantime, I keep wondering how the disinformation and scamming potential of generative AI will play out in the absence of tools to raise the sanity waterline (https://forum.effectivealtruism.org/posts/fNKmP2bq7NuSLpCzD/let-s-make-the-truth-easier-to-find)... any thoughts?

Jun 17, 2023Liked by Robert de Neufville

I love your concrete forecasts on Scott Aaronson's five scenarios. I would give a much higher chance to Paperclipalypse and Singularia, but I can't say exactly why — the arguments are very theoretical and not based on recent developments.

Jun 17, 2023Liked by Robert de Neufville

What a great post, Robert!


You write, "AI will probably improve the world in many ways, but will probably also have some dramatic downsides."

And when the downsides become large enough, they will contain the potential to erase the improvements.

Nuclear weapons have sobered the great powers and discouraged them from engaging in direct conflict with each other, which may have prevented a repeat of WWII, a huge benefit. But the price tag is that a single human being can now destroy the modern world in a matter of minutes. And if that price tag remains in place indefinitely, sooner or later the bill will likely come due.

The issue of scale is all-important, because it makes thinking like the following obsolete:

You write, "My guess is that on balance the benefits of AI will outweigh the costs"

Having more benefits than costs doesn't matter if one of the costs is the collapse of the system that creates the benefits.

It's unknown whether AI in particular will crash the system. What is known is that AI will act as a further accelerant to the knowledge explosion. And if the knowledge explosion continues to accelerate without limit, sooner or later something will emerge from that process that will crash the system.

It's unknown exactly how much change human beings can successfully manage. But if change continues to accelerate without limit, then whatever our limits of ability may be, sooner or later those limits will be exceeded.
