That same week, Eliezer Yudkowsky, considered a founder of the field of artificial general intelligence, said he refused to sign that letter because it didn’t go far enough. Instead, he called for a militarily enforced shutdown of AI development labs lest a sentient digital being arise that kills every one of us.
The mindset treats decentralization as the way to address those risks. The idea is that if there is no single, centralized entity with middleman powers to determine the outcome of an exchange between two actors, and both can trust the information available about that exchange, then the threat of malicious intervention is neutralized.
It was one thing to expect global coordination around the COVID pandemic, when every country had a need for vaccines, or to expect that the logic of mutually assured destruction would lead even the bitterest enemies in the Cold War to agree not to use nuclear weapons, where the worst-case scenario is so obvious to everyone.
I have no idea, of course, whether that’s how things would play out. But in the absence of a crystal ball, the logic of Yudkowsky’s AGI thesis demands that we engage in these thought experiments to consider how this potential future nemesis might “think.”

Of course, most governments will struggle to buy any of this. They will naturally prefer the “please regulate us” message that OpenAI’s Altman and others are actively delivering right now.