Ethics for self-improving machines
J Storrs Hall, Mark Waser
Asimov's 3 Laws:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
http://www.markzug.com/
Asimov's robots didn't improve themselves. But our AIs (we hope) will.
How do you design laws for something that will think in concepts you've never heard of, and couldn't grasp even if you had?
In any case, there is no chance that everyone will build their robots with any one given set of laws! Laws reflect goals (and thus values), which do NOT converge across humanity.
Axelrod's Evolution of Cooperation, and decades of follow-on evolutionary game theory, provide the theoretical underpinnings:
• Be nice / don't defect
• Retaliate
• Forgive
"Selfish individuals, for their own selfish good, should be nice and forgiving."
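A minimal sketch (not from the talk) of the game Axelrod studied: an iterated prisoner's dilemma with the standard hypothetical payoffs T=5, R=3, P=1, S=0. It shows why a nice, retaliating, forgiving strategy like tit-for-tat prospers: it earns the full cooperative payoff against itself and concedes only one round to an unconditional defector.

```python
# A minimal sketch (not from the talk) of an Axelrod-style iterated
# prisoner's dilemma, using the standard hypothetical payoffs
# T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker).

PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, their_history):
    """Nice, retaliating, forgiving: cooperate first, then mirror the opponent."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=200):
    """Play an iterated match and return each side's total score."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (600, 600): mutual cooperation
print(play(tit_for_tat, always_defect))  # (199, 204): the defector gains almost nothing
```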
In nature, cooperation appears whenever the cognitive machinery will support it:
• Cotton-top tamarins (Hauser et al.)
• Vampire bats (Wilkinson)
• Blue jays (Stephens, McLinn, & Stevens)
Economic Sentience, defined as: "Awareness of the potential benefits of cooperation and trade with other intelligences." TIME DISCOUNTING is its measure.
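A back-of-the-envelope illustration (assuming the same hypothetical payoffs as above and geometric discounting with factor delta) of why time discounting is the measure: an agent that discounts the future heavily grabs the one-shot defection payoff, while a patient agent values the ongoing stream of cooperative payoffs more.

```python
# Does an agent prefer an endless stream of cooperative payoffs R,
# or one defection payoff T followed by retaliation payoffs P forever?
# Assumes the hypothetical payoffs T=5, R=3, P=1 and discount factor delta in [0, 1).

def prefers_cooperation(delta, T=5, R=3, P=1):
    """True if the discounted value of cooperating forever beats defecting once."""
    cooperate = R / (1 - delta)            # R + R*delta + R*delta**2 + ...
    defect = T + P * delta / (1 - delta)   # T now, then P from the next round on
    return cooperate > defect

for delta in (0.2, 0.6, 0.95):
    print(delta, prefers_cooperation(delta))
# 0.2 False, 0.6 True, 0.95 True: with these payoffs cooperation wins
# exactly when delta > (T - R) / (T - P) = 0.5, i.e. when the agent
# discounts the future lightly enough.
```

With these numbers the crossover sits at delta = 0.5; the more lightly an agent discounts the future, the wider the range of situations in which cooperation pays.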
Acting ethically is an attractor in the state space of intelligent goal-driven systems (if they interact with other intelligent goal-driven systems on a long-term, ongoing basis). Ethics *IS* the necessary basis for cooperation.
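A toy replicator-dynamics run (an illustration of the attractor claim, not a model from the talk), using average per-round payoffs taken from 200-round matches like the one sketched earlier: once reciprocators exceed a tiny threshold of the population, their share grows until cooperation dominates.

```python
# Toy replicator dynamics over two strategies: tit-for-tat (TFT) reciprocators
# and unconditional defectors (AD). Per-round payoffs are the averages from
# 200-round matches as above (e.g. the lone defector averages 1.02/round
# against a reciprocator, reciprocators average 3.0/round with each other).

AVG_PAYOFF = {  # (row strategy, column strategy) -> row's per-round payoff
    ("TFT", "TFT"): 3.000, ("TFT", "AD"): 0.995,
    ("AD",  "TFT"): 1.020, ("AD",  "AD"): 1.000,
}

def step(x, dt=0.1):
    """One Euler step of replicator dynamics; x is the share of tit-for-tat players."""
    f_tft = x * AVG_PAYOFF[("TFT", "TFT")] + (1 - x) * AVG_PAYOFF[("TFT", "AD")]
    f_ad  = x * AVG_PAYOFF[("AD", "TFT")] + (1 - x) * AVG_PAYOFF[("AD", "AD")]
    f_avg = x * f_tft + (1 - x) * f_ad
    return x + dt * x * (f_tft - f_avg)

x = 0.05                       # start with only 5% reciprocators
for _ in range(300):
    x = step(x)
print(round(x, 3))             # -> 1.0: the population settles into cooperation
```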
We must find ethical design elements that are Evolutionarily Stable Strategies, so that we can start AIs out inside the attractor it has taken us millions of years to begin to descend into.
Let's call such a design element an ESV: Evolutionarily Stable (or Economically Sentient) Virtue.
Economically unviable:
• Destruction
• Slavery
• Short-term profit at the expense of the long term
Avoiding each of these is an ESV.
Fair enforcement of contracts is an ESV that demonstrably promotes cooperation.
Open-source motivations, and other forms of guaranteeing trustworthiness, are an ESV, much like auditing in current-day corporations (since money is their true emotion).
In particular, RECIPROCAL ALTRUISM is an ESV, exactly like its superset, ENLIGHTENED SELF-INTEREST (a.k.a. ETHICS).
A general desire for all ethical agents to live (and prosper) as long as possible is also an ESV, because it promotes a community with long-term stability and accountability.
Curiosity – a will to extend and improve one's world model – is an ESV. There is no good but knowledge, and no evil but ignorance. — Socrates
An AI that has ESVs, and knows what they mean, has a guideline for designing Version 2.0, even when the particulars of the new environment don't match the concepts of the old, literal goal structure.