7 Comments

History has shown us that with every leap in automation, fears of unemployment surge, yet time and again those fears have proved unfounded as new job categories emerge. Current laws already address the criminal misuse of tools, AI included. Overregulating AI could unnecessarily hinder innovation. Let's focus on adaptive enforcement of existing laws and remain optimistic about AI's potential to generate new economic opportunities.

Nov 8, 2023 · Liked by Robert de Neufville

I didn't really understand the issues behind AI, and hadn't been following its nefarious implications carefully. This was really helpful!

And unsettling.

Nov 7, 2023 · Liked by Robert de Neufville

We need to shift some focus away from particular emerging technologies of concern to the underlying process that is generating such powers: the knowledge explosion.

Without such a shift of focus, we will be trapped in never-ending attempts to make each new emerging power safe, one by one by one. The history of nuclear weapons, genetic engineering, and now AI suggests that new powers of vast scale will emerge faster than we can figure out how to make them safe. If that's true, then a one-by-one-by-one approach to technological safety seems a loser's game.

Let's take the most optimistic view of the future of AI, and imagine that it is somehow made perfectly safe. This is very unlikely, but as a thought experiment, let's imagine it anyway. What are the implications?

Perfectly safe AI would be a further accelerant to the knowledge explosion, just as computers and the Internet have been. The real threat may not be AI itself, but whatever new technologies are to emerge from this accelerated knowledge development process.

How much of today's world could those alive a century ago, in 1923, have imagined? That's most likely the same position we are in today with regard to the coming century. The 21st century is still young, and AI will not be the last power of vast scale to emerge from the knowledge explosion.

All the many discussions of AI safety are necessary, but they also tend to distract us from the real challenge. An accelerating knowledge explosion is producing new technologies of large scale faster than we can figure out how to make those technologies safe. Making AI safe won't solve that problem.

Almost all the commentary I've seen on AI safety treats AI as if it were an isolated phenomenon, instead of just the latest power of large scale to emerge from the knowledge explosion. Real safety will require us to shift our focus from the particular products rolling off the end of the knowledge explosion assembly line to the knowledge explosion itself.

To those who will inevitably say that such a shift of focus is hard, difficult, unprecedented, impossible, etc., I can only offer this...

Nature doesn't care whether we find facing this survival challenge to be convenient. Nature's plan is to simply remove any species which is unable, or unwilling, to adapt to changing conditions.


Robert de Neufville writes...

"We certainly shouldn’t leave decisions about technological development just to messianic venture capitalists; we have to make them collectively, as a society."

https://tellingthefuture.substack.com/p/could-ai-eat-the-world

Robert's statement raises the question of whether we, collectively, as a society, are capable of making intelligent decisions about technologies the scale of AI. It sounds nice and politically correct to say that the public should be involved in such decisions. But shouldn't we first ask whether we're up to the job? If a reader answers yes to that question, then perhaps they would tackle this question too...

Are human beings capable of making intelligent decisions about ANY emerging technology, no matter how powerful it is, or how fast it comes online? Are we proposing that there is NO LIMIT to the powers we can successfully manage? If there is a limit, where is that limit?

I would argue that the fact that we rarely ask such questions with any seriousness, let alone arrive at credible, useful answers, is pretty good evidence that we are not ready for powers the scale of AI and genetic engineering. But if you're not up for speculation about the future on this question, we could examine the historical record.

Thousands of massive hydrogen bombs could erase the modern world in minutes, a well-known, easily understood threat of Biblical scale that we rarely find interesting enough to discuss. Is that successful management? Does that sound like intelligence to you?

Still not convinced? OK, then try this:

About half of America will vote for Trump for the third time if given the chance.
