7 Comments

History has shown us that with every leap in automation, fears of unemployment surge, yet time and again, these fears have been unfounded as new job categories emerge. Current laws already address criminal misuse of tools, AI included. Overregulating AI could unnecessarily hinder innovation. Let's focus on adaptive enforcement of existing laws and remain optimistic about the potential for AI to generate new economic opportunities.

author

Clearly, we don't want to *overregulate* AI (or even just to regulate AI in a dumb way). But I personally don't think existing laws are close to being up to the task of adequately regulating AI. Every new technology poses new challenges for us to address as a society.

It's certainly fair to point out that *in the long term* new technologies haven't really reduced the total amount of work available to humans. Labor-saving technologies simply shift work to areas where humans have a comparative advantage. But in the shorter term, new technologies can cause painful labor disruptions. They can also raise serious equity issues related to who benefits from them. I'd also add that the prospect that machines could eventually have an absolute advantage in *all* types of work puts us in somewhat uncharted territory.

It's not at all obvious what we should do to address these issues. But it's not plausible to me that just adapting our current laws as we go is sufficient to mitigate the new issues AI raises, particularly since there seems to me to be significant risk of severe, difficult-to-reverse harms from AI.


I appreciate your long answer and your perspective, and you raise valid concerns about the short-term labor disruptions and equity issues associated with new technologies. However, regulating a rapidly evolving field like AI poses its own challenge.

We are still in the early stages of understanding AI's full capabilities and implications. Creating effective regulations for something we don't fully understand can be problematic. Premature or ill-informed regulations might hinder beneficial advancements or fail to address the actual issues.

I see the risks and I appreciate your concerns. My personal view is that we will probably do a very poor job of regulating AI, no matter how well-intentioned we are at the start.

author

I don't really disagree with any of this. We will almost certainly get some truly terrible regulation, in part because we don't understand AI yet and in part because democratic governance can be pretty stupid. But I also think that it isn't a binary choice between bad regulation and no regulation; we can make better or worse choices. And it seems to me that leaving decisions about technology up to Marc Andreessen and Elon Musk—and, in effect, letting foxes guard the hen house—would ensure our choices end up being pretty bad.

Nov 8, 2023 · Liked by Robert de Neufville

I didn't really understand the issues behind AI, and hadn't been following its nefarious implications carefully. This was really helpful!

And unsettling.


We need to shift some focus away from particular emerging technologies of concern to the underlying process that is generating such powers: the knowledge explosion.

Without such a shift of focus we will be trapped in never-ending attempts to make each new emerging power safe, one by one by one. The history of nuclear weapons, genetic engineering, and now AI suggests that new powers of vast scale will emerge faster than we can figure out how to make them safe. If true, then a one-by-one approach to technological safety seems like a loser's game.

Let's take the most optimistic view of the future of AI, and imagine that it is somehow made perfectly safe. This is very unlikely, but as a thought experiment, let's imagine it anyway. What are the implications?

Perfectly safe AI would be a further accelerant to the knowledge explosion, just as computers and the Internet have been. The real threat may not be AI itself, but whatever new technologies are to emerge from this accelerated knowledge development process.

How much of today's world could those alive a century ago in 1923 have imagined? That's most likely the same position we are in today with regard to the coming century. The 21st century is still young, and AI will not be the last power of vast scale to emerge from the knowledge explosion.

All the many discussions of AI safety are necessary, but they also tend to distract us from the real challenge. An accelerating knowledge explosion is producing new technologies of large scale faster than we can figure out how to make those technologies safe. Making AI safe won't solve that problem.

Almost all the commentary I've seen on AI safety treats AI as if it were an isolated phenomenon, instead of just the latest power of large scale to emerge from the knowledge explosion. Real safety will require us to shift our focus from particular products rolling off the end of the knowledge explosion assembly line to the knowledge explosion itself.

To those who will inevitably say that such a shift of focus is hard, it's difficult, it's unprecedented, it's impossible, etc., I can only offer this...

Nature doesn't care whether we find facing this survival challenge to be convenient. Nature's plan is to simply remove any species which is unable, or unwilling, to adapt to changing conditions.


Robert de Neufville writes...

"We certainly shouldn’t leave decisions about technological development just to messianic venture capitalists; we have to make them collectively, as a society."

https://tellingthefuture.substack.com/p/could-ai-eat-the-world

Robert's statement raises the question of whether collectively, as a society, we are capable of making intelligent decisions about technologies the scale of AI. It sounds nice, and politically correct, to say that the public should be involved in such decisions. But shouldn't we first ask whether we're up to the job? If a reader should answer yes to that question, then perhaps they would tackle this question too...

Are human beings capable of making intelligent decisions about ANY emerging technology, no matter how powerful it is, or how fast it comes online? Are we proposing that there is NO LIMIT to the powers we can successfully manage? If there is a limit, where is that limit?

I would argue that the fact that we rarely ask such questions with any seriousness, let alone arrive at credible useful answers, is pretty good evidence that we are not ready for powers the scale of AI and genetic engineering. But if you're not up for future speculation on this question, we could examine the historical record.

Thousands of massive hydrogen bombs could erase the modern world in minutes, a well-known, easily understood threat of Biblical scale that we rarely find interesting enough to discuss. Is that successful management? Does that sound like intelligence to you?

Still not convinced? Ok, then try this:

About half of America will vote for Trump for the third time if given the chance.
