Avoiding Dangers of AI

Presentation Transcript


  1. Avoiding Dangers of AI. Ernest Davis, SIROS-2, March 30, 2017.

  2. Points • AGI and the Singularity are nowhere near • [The IoT is the great short-term danger] • Conventional AI can be very dangerous • The people who know about this are the Computer Security people. • The Asilomar principles are a start, but need to be strengthened • Two additional principles

  3. AGI and the Singularity are nowhere near Despite remarkable accomplishments, current AI is very far from understanding perception, text, or the world.

  4. [The Internet of Things] The scariest realistic technology-based disaster movie set in 2020 would be about a cyber-attack that took over the cars, airplanes, trains, elevators, and household appliances. Stuxnet shows that, even with paranoid security, a hostile power can turn all your centrifuges into bombs.

  5. Conventional AI can be very dangerous AI-powered: • Military equipment • Cyberattacks • Online theft • Etc. could be very dangerous, whether through malice or through incompetence. But these are not fundamentally different from non-AI-powered attacks of the same kind.

  6. The people who do computer security • Have been studying these problems for 50 years. • Have a sophisticated and powerful theory and technology. • Know the power and limits of their theory and technology. Those are the people we should be talking to.

  7. Asilomar principle #21 “Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.” Planning and mitigation efforts? We should stay miles away from anything close to catastrophic or existential risks.

  8. Asilomar principle #22 “Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.” I would prefer: No AI system should be designed to recursively self-improve or self-replicate with a recursion depth greater than 1 without human intervention.
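
  One way to picture the preferred rule is as a hard gate in the self-improvement loop. The following is a minimal Python sketch under stated assumptions, not a description of any real system: improve() and human_approves() are hypothetical placeholders, and MAX_UNSUPERVISED_DEPTH encodes the depth-1 bound proposed above.

      import copy

      MAX_UNSUPERVISED_DEPTH = 1   # the recursion-depth bound proposed above

      def improve(system):
          # Hypothetical stand-in for one round of self-improvement.
          new_system = copy.deepcopy(system)
          new_system["version"] += 1
          return new_system

      def human_approves(depth):
          # Hypothetical stand-in for an out-of-band human review step.
          answer = input(f"Allow self-improvement at depth {depth}? [y/N] ")
          return answer.strip().lower() == "y"

      def self_improve(system, depth=1):
          # Depth 1 may run unattended; anything deeper requires a human.
          if depth > MAX_UNSUPERVISED_DEPTH and not human_approves(depth):
              raise PermissionError(f"Depth {depth} blocked: human intervention required.")
          return improve(system)

      system = {"version": 1}
      system = self_improve(system)            # depth 1: allowed automatically
      system = self_improve(system, depth=2)   # depth 2: halts for human approval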

  9. Davis principle #1 Any AI program or robot working on or near Earth should be designed so that an authorized human can easily halt or destroy it, in a way that it cannot block; preferably, in a way that it cannot comprehend.
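
  On a single machine, one minimal way to approximate this principle, assuming the AI runs as an ordinary operating-system process, is to launch it as a child process and give the operator an out-of-band kill path. In the Python sketch below, ai_agent.py is a hypothetical placeholder; the relevant fact is that Popen.kill() sends SIGKILL on POSIX (TerminateProcess on Windows), which the target process cannot catch, ignore, or block.

      import subprocess

      # Launch the AI as a child process ("ai_agent.py" is hypothetical).
      ai = subprocess.Popen(["python", "ai_agent.py"])

      def authorized_halt(proc):
          # SIGKILL is delivered by the kernel; the target process gets no
          # opportunity to handle, delay, or veto it.
          proc.kill()
          proc.wait()   # reap the process once it is gone

      # Run only on an authorized operator's command:
      # authorized_halt(ai)

  A sketch like this only covers "cannot block," not "cannot comprehend"; the stronger reading would call for an interlock outside the machine entirely, such as a hardware power cutoff.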

  10. Davis principle #2 The reasoning powers and knowledge of an AI program that controls great physical power should be extremely limited. E.g., an AI that controls the power grid should only know about the power grid.
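
  A minimal Python sketch of the narrow-knowledge idea, assuming a hypothetical topic-keyed knowledge store: every lookup is mediated by an explicit allowlist, so a grid controller simply has no channel to information beyond grid topics. The topic names and sample values are invented for illustration.

      ALLOWED_TOPICS = {"grid.load", "grid.frequency", "grid.breakers"}

      class ScopedKnowledge:
          # Mediates all knowledge access through an explicit allowlist.
          def __init__(self, store, allowed):
              self._store = store          # full store, never exposed directly
              self._allowed = set(allowed)

          def query(self, topic):
              if topic not in self._allowed:
                  raise PermissionError(f"{topic!r} is outside this AI's scope")
              return self._store[topic]

      store = {"grid.load": "42 GW", "grid.frequency": "50.02 Hz",
               "grid.breakers": "all closed", "world.geography": "..."}
      kb = ScopedKnowledge(store, ALLOWED_TOPICS)
      print(kb.query("grid.frequency"))    # permitted: a grid topic
      # kb.query("world.geography")        # raises PermissionError: out of scope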
