Avoiding Dangers of AI
Ernest Davis
SIROS-2, March 30, 2017
Points
• AGI and the Singularity are nowhere near
• [The IoT is the great short-term danger]
• Conventional AI can be very dangerous
• The people who know about this are the computer security people
• The Asilomar principles are a start, but need to be strengthened
• Two additional principles
AGI and the Singularity are nowhere near
Despite remarkable accomplishments, current AI is very far from understanding perception, text, or the world.
[The Internet of Things]
The scariest realistic technology-based disaster movie set in 2020 would be about a cyberattack that takes over cars, airplanes, trains, elevators, and household appliances. Stuxnet shows that, even with paranoid security, a hostile power can turn all your centrifuges into bombs.
Conventional AI can be very dangerous
AI-powered:
• Military equipment
• Cyberattacks
• Online theft
• Etc.
could be very dangerous, either due to malice or to incompetence. But they are not fundamentally different from non-AI-powered attacks of the same kind.
The people who do computer security
• Have been studying these problems for 50 years.
• Have a sophisticated and powerful theory and technology.
• Know the power and limits of their theory and technology.
Those are the people we should be talking to.
Asilomar principle #21
“Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.”
Planning and mitigation efforts? We should stay miles away from anything close to catastrophic or existential risks.
Asilomar principle #22
“Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.”
I would prefer: No AI system should be designed to recursively self-improve or self-replicate with a recursion depth greater than 1 without human intervention.
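To make the recursion-depth reading concrete, here is a minimal Python sketch of a self-improvement loop that is allowed one autonomous round, after which every further round requires explicit human sign-off. The functions propose_improvement and human_approves are hypothetical placeholders, not any real API.

```python
# Minimal sketch of the "recursion depth <= 1" rule: one autonomous round of
# self-improvement, then a human must approve each further round.

def propose_improvement(model):
    """Hypothetical: return a candidate improved version of `model`."""
    return model + 1  # placeholder; a real system would retrain/rewrite itself

def human_approves(depth):
    """Hypothetical: ask an authorized human to allow the next round."""
    answer = input(f"Allow self-improvement round {depth}? [y/N] ")
    return answer.strip().lower() == "y"

def self_improve(model, max_rounds=5):
    depth = 0
    while depth < max_rounds:
        candidate = propose_improvement(model)
        depth += 1
        if depth > 1 and not human_approves(depth):
            break  # no recursion past depth 1 without human intervention
        model = candidate
    return model

if __name__ == "__main__":
    print(self_improve(0))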
Davis principle #1
Any AI program or robot working on or near Earth should be designed so that an authorized human can easily halt or destroy it, in a way that it cannot block; preferably, in a way that it cannot comprehend.
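One way to realize an unblockable halt is to keep the stop mechanism outside the AI process entirely. The sketch below, assuming a POSIX system, runs a placeholder agent as a subprocess and halts it with SIGKILL, a signal the child process cannot catch, block, or ignore; the agent body is a hypothetical stand-in.

```python
# Sketch of an out-of-band kill switch: the supervisor, not the agent,
# holds the power to halt, and uses a signal the agent cannot trap.

import subprocess
import sys
import time

AGENT_CODE = "while True: pass"  # hypothetical stand-in for the AI program

def main():
    agent = subprocess.Popen([sys.executable, "-c", AGENT_CODE])
    time.sleep(2)  # stand-in for "an authorized human decides to halt it"
    agent.kill()   # delivers SIGKILL on POSIX: untrappable by the child
    agent.wait()
    print("agent halted with return code", agent.returncode)

if __name__ == "__main__":
    main()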
Davis principle #2
The reasoning powers and knowledge of an AI program that controls great physical power should be extremely limited. E.g., an AI that controls the power grid should only know about the power grid.
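In software terms this principle resembles least privilege: hand the controller a narrow, grid-only interface and nothing else (no file system, no network, no general world model). The class and method names in this sketch are hypothetical illustrations, not any real grid API.

```python
# Sketch of a least-privilege controller: the only capabilities it holds
# are the grid operations it is explicitly given.

class GridInterface:
    """The ONLY capabilities the grid controller receives."""

    def read_load(self, substation: str) -> float:
        return 0.0  # placeholder for a real telemetry read

    def set_output(self, plant: str, megawatts: float) -> None:
        print(f"setting {plant} to {megawatts} MW")  # placeholder actuator

class GridController:
    def __init__(self, grid: GridInterface):
        self._grid = grid  # no other handles: it knows only about the grid

    def balance(self) -> None:
        load = self._grid.read_load("substation-7")
        self._grid.set_output("plant-3", load)

if __name__ == "__main__":
    GridController(GridInterface()).balance()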