8 Comments
Darin Tuttle

Robert,

glad to see you back and writing again! It would be great for you to comment on the different platforms to make forecasts, Metaculus, GJ Open, or the many others out there!

Robert de Neufville

Thanks, Darin! I still haven't tried a lot of the different platforms, but it's definitely something I should think about doing.

Phil Tanny

Congrats on your grant, Robert, and good luck with the future progress of your blog. I'm definitely not an authority on such things, but as best I can tell somewhere between 5 and 10% of free subscribers will typically be willing to convert to paid. If you hear different numbers, please share.

As to the future...

I'm obsessed with the following question, which perhaps you might address in your writing at some point.

QUESTION: Is the marriage between violent men and an accelerating knowledge explosion sustainable?

It seems much about the future will be decided by this question. If that marriage isn't sustainable, then some fundamental change that is hard to imagine today would seem to be necessary to have a future worth living in.

If this interests you I'll look forward to hearing your thoughts.

Robert de Neufville

Thanks Phil.

Yes, I believe Substack's guidance is that 5-10% of free subscribers convert to paid subscriptions. I imagine it varies a lot from newsletter to newsletter, but I'm hoping the pledge feature will make it easier to capture how many potential paid subscribers I have.

That question seems to be at the heart of existential risk forecasting. I think the issue is not just the human propensity for violence but the extent to which we fail to consider the long-term impact of our decisions on our collective future. I wrote a paper arguing that AI risk is largely a collective action problem (https://www.sciencedirect.com/science/article/abs/pii/S0160791X2100124X), and I think it's broadly true of existential risk more generally. I also recommend Nick Bostrom's Vulnerable World paper (https://nickbostrom.com/papers/vulnerable.pdf) if you haven't read that already.

Phil Tanny

Regarding the FLI letter... Do you think we're going to figure out how to make AI safe in the next 6 months? Do you think there will even be a pause in development?

Isn't it more likely that some people will make some nice-sounding vague statements about inevitably ineffective governance schemes while the AI industry pushes ahead as fast as possible?

Do we give a teenager the keys to the car in the hopes that someday he'll learn about safe driving? If we were to apply to the AI industry the same simple common-sense logic we routinely apply to teenagers, we'd soon discover that none of these people are really experts.

Eliezer Yudkowsky comes to mind as an exception. Maybe he's not the ideal spokesman for that point of view, but he seems to be on the right track. Shut it down until we figure out how to make it safe. That could take decades.

Robert de Neufville

Sorry it took me so long to reply to this, Phil. I definitely don't think we're going to figure out how to make AI safe in the next six months. I don't think the signatories believe the pause is a solution to the problem of AI risk, so much as something many people can agree on that would slow down reckless development and raise awareness of the issue. I don't think we'll get a complete pause by any means, but I think it may make companies like OpenAI pause some of their research. If nothing else, by shining a light on the issue the letter makes it awkward for any company that tries to roll out a buggy new large language model right now, especially since surveys suggest most people agree we're moving too fast on AI development.

Phil Tanny

Interesting.

While not claiming to be a psychic, I tend to base my speculation about the future of AI on how we've dealt with nuclear weapons. The technologies are of course very different, but we who have developed both technologies have not really changed in any fundamental way.

Competition caused us to mass-produce nukes. Once that was done, it became increasingly clear that we have no clue how to reverse the decision. And so we became bored with that challenge, and turned our attention to the creation of additional existential risks, like AI and genetic engineering.

Given this history, I must admit I don't find any AI governance scheme to be credible. To me, it seems we are just repeating the mistakes we made with nukes, that is, building the technology at breakneck speed before we have any idea how we're going to manage it.

In your paper you write, "Similarly, efforts to slow AI development can be harmful by limiting progress".

I would counter that limiting progress is just what we should be doing. Or to put it another way, learning how to take control of the pace of the knowledge explosion would be progress.

You write, "Ensuring that AI generally contributes to good outcomes for society will require collective action."

As I understand it, some uses of AI will be beneficial, and some will not. What we can learn from nukes is that as the scale of such technologies grows, the room for error shrinks, and we are ever more in the situation where the bad uses have the potential to erase the benefits. One bad day with nukes, game over.

I'm glad I found you. I anticipate learning a lot here, and look forward to further exploring our shared interests.

Robert de Neufville

Thanks, Phil. I'm glad you subscribed.

I signed the Future of Life Institute's open letter calling for major AI experiments to be paused (https://futureoflife.org/open-letter/pause-giant-ai-experiments/) precisely because I think we probably should trade off the potential benefits of faster AI development to lower the risk of potentially serious costs.
