Machine Ethics: A Brief Tutorial Jim Moor Philosophy Dartmouth College Hanover, NH 03755, USA james.moor@dartmouth.edu
Science is Value-Laden Computer Science is no exception
Normativity in Science Values and norms are an essential part of all productive sciences qua sciences. They are used not only to establish the suitability of existing claims but also to select new goals to pursue. Scientific evidence and theories are often evaluated as good or bad, and scientific procedures as what ought or ought not to be done.
Historical computer science illustration of such evaluation: Herbert Simon’s reply to Jacques Berleur November 20, 1999 “My reply to you last evening left my mind nagged by the question of why Trench Moore, in his thesis, placed so much emphasis on modal logics. The answer, which I thought might interest you, came to me when I awoke this morning. Viewed from a computing standpoint (i.e., discovery of proofs rather than verification), a standard logic is an indeterminate algorithm: it tells you what you MAY legally do, but not what you OUGHT to do to find a proof. Moore viewed his task as building a modal logic of “oughts” -- a strategy for search -- on top of the standard logic of verification.”
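Simon's distinction can be made concrete with a toy proof search: the logic's inference rules say what you MAY infer, while a separate heuristic ordering plays the role of the "ought" — a strategy for which legal step to try first. This is a hedged sketch; the rule set and the heuristic are invented for illustration and are not from the talk.

```python
# MAY vs OUGHT in proof search: the logic permits many steps; the
# heuristic decides which permitted step to pursue first.
from collections import deque

def legal_inferences(known, rules):
    """What the logic permits: every conclusion derivable in one step (MAY)."""
    return [concl for premises, concl in rules
            if premises <= known and concl not in known]

def prove(goal, known, rules, ought):
    """Search for the goal, trying permitted steps in the order 'ought' prefers."""
    frontier = deque([set(known)])
    while frontier:
        facts = frontier.popleft()
        if goal in facts:
            return True
        # MAY: all legal next steps; OUGHT: the heuristic ranks them.
        for concl in sorted(legal_inferences(facts, rules), key=ought):
            frontier.append(facts | {concl})
    return False

# Toy rule set: ({premises}, conclusion)
rules = [({"p"}, "q"), ({"q"}, "r"), ({"p"}, "s")]
# An "ought"-heuristic: prefer conclusions alphabetically closer to the goal "r".
print(prove("r", {"p"}, rules, ought=lambda c: abs(ord(c) - ord("r"))))  # True
```

Without the heuristic the search still terminates here, but on larger rule sets the ordering is what makes discovery feasible — exactly Simon's point about an indeterminate algorithm.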
Normativity in Science Moreover, ethical norms often play a role in evaluating whether science is done properly. This is particularly true as a science becomes more applied. Welcome to philosophy! Of course, computer science has had philosophical roots from the beginning. Consider Hobbes or Pascal or Leibniz.
Computer Ethics vs. Machine Ethics Roughly, Computer Ethics emphasizes the responsibility of computer users to act ethically, for example with regard to privacy, property, and power. Machine Ethics, by contrast, emphasizes building ethical abilities and sensitivities into computers themselves.
Grades of Machine Ethics
Normative Computer Agents
Ethical Impact Agents
Implicit Ethical Agents
Explicit Ethical Agents
Autonomous Explicit Ethical Agents
Full Ethical Agents
BigBelly: compacts 300 gallons of garbage (shown here installed in NYC). Solar powered. A wireless sensor transmits a pickup request when the bin is nearly full. http://www.wired.com/news/planet/0,2782,66993,00.html
Robot Camel Jockeys of Qatar “Camel jockey robots, about 2 feet high, with a right hand to bear the whip and a left hand to pull the reins. Thirty-five pounds of aluminum and plastic, a 400-MHz processor running Linux and communicating at 2.4 GHz; GPS-enabled, heart rate-monitoring (the camel's heart, that is) robots.” http://www.wired.com/wired/archive/13.11/camel.html?tw=wn_tophead_4
Robot Camel Jockeys of Qatar “Every robot camel jockey bopping along on its improbable mount means one Sudanese boy freed from slavery and sent home.” http://www.wired.com/wired/archive/13.11/camel.html?tw=wn_tophead_4
Ethical Impact Agents: How well might machines themselves handle basic ethical issues of privacy, property, power, etc.?
Implicit Ethical Agents: Ethical considerations such as safety and reliability are built into the machine.
Examples: ATMs, air traffic control software, drug interaction software
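The drug-interaction example can be sketched in a few lines: the machine has no ethical concepts at all — the ethics (patient safety) is implicit in a hard-coded check. The interaction table below names a well-known warfarin–aspirin bleeding interaction, but the function and data layout are invented for illustration.

```python
# An "implicit ethical agent": safety is built in as a check, not
# represented as an ethical concept the machine reasons about.
KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
}

def check_prescription(current_meds, new_drug):
    """Flag any known interaction between the new drug and current medications."""
    warnings = []
    for med in current_meds:
        issue = KNOWN_INTERACTIONS.get(frozenset({med, new_drug}))
        if issue:
            warnings.append(f"{med} + {new_drug}: {issue}")
    return warnings  # an empty list means no known interaction

print(check_prescription(["warfarin"], "aspirin"))
# -> ['warfarin + aspirin: increased bleeding risk']
```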
Explicit Ethical Agents: Ethical Concepts Represented and Used Example: Carebots
Isaac Asimov’s Three Laws of Robotics
(1) A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
(2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
(3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
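The Three Laws form a strict priority ordering, which suggests how an explicit ethical agent might check a candidate action against them. The sketch below is illustrative only: the boolean flags stand in for the (very hard) perception and prediction problems Asimov's laws presuppose.

```python
# Checking a candidate action against the Three Laws in priority order.
# The dictionary keys are placeholder predicates, not a real robot API.
def permitted(action):
    """Return (ok, reason); laws are checked in priority order 1 > 2 > 3."""
    if action.get("injures_human") or action.get("inaction_harms_human"):
        return False, "violates First Law"
    if action.get("disobeys_human_order") and not action.get("order_conflicts_first_law"):
        return False, "violates Second Law"
    if action.get("endangers_self") and not (
        action.get("needed_for_first_law") or action.get("needed_for_second_law")
    ):
        return False, "violates Third Law"
    return True, "permitted"

print(permitted({"injures_human": True}))         # (False, 'violates First Law')
print(permitted({"endangers_self": True,
                 "needed_for_first_law": True}))  # (True, 'permitted')
```

Note how the "except where" clauses become guard conditions: self-sacrifice is permitted when required by a higher law.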
Just Consequentialism
Core Values
Consequences (foreseeable)
Justice:
1. Rights and Duties
2. Impartiality in Policy
Core Values: Life, Knowledge, Happiness, Freedom, Ability, Opportunities, Security, Resources
A Way to Remember Core Values Happy Life? Ability Security Knowledge Freedom Opportunities Resources
Test Question for Judging Policies: Is this a policy that a fully informed, rational, impartial person would freely advocate as a public policy?
A few observations about just consequentialism:
Bounded rationality
Not maximizing
Impartiality
Dynamic and revisable policies
Handles conflicts
Procedure, not algorithm
Often not just one correct solution
Could a machine use Just Consequentialism?
Autonomous Ethical Agents: Make ethical decisions and take actions (not merely decisions and actions that happen to be ethical) on a dynamic basis as they interact with their environment. These agents are not conscious, but neither are they merely giving canned responses to situations. They may learn and adapt their behavior. What would an example be like?
Disaster Relief Software Agent Suppose it receives information about who is in need, how badly they are injured, where supplies are, and so on. It makes decisions about who gets what with limited resources; triage is sometimes required. Might it not run FEMA better than humans? More ethically?
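The triage step alone can be sketched as an allocation routine: with limited supply units, treat the most severely injured first. The severity scale and patient data are invented; a real agent would also weigh the ethical considerations just consequentialism raises (rights, impartiality, core values), which this sketch omits.

```python
# Triage allocation: scarce supply units go to the most severe cases first.
def allocate(patients, supply_units):
    """patients: list of (name, severity); higher severity is treated first."""
    allocations = {}
    for name, severity in sorted(patients, key=lambda p: -p[1]):
        if supply_units == 0:
            break  # resources exhausted: remaining patients get nothing
        allocations[name] = 1
        supply_units -= 1
    return allocations

patients = [("Ana", 3), ("Ben", 9), ("Cho", 6)]
print(allocate(patients, 2))  # {'Ben': 1, 'Cho': 1}
```

Even this trivial policy embeds an ethical choice (severity-first rather than, say, first-come-first-served), which is precisely why such agents raise machine-ethics questions.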
Why is Machine Ethics Important?
1. Ethics is important.
2. Machines will have increased control and autonomy.
3. It is an opportunity to understand ethics better.
Philosophical Objections to Machine Ethics Type 1: Ethics cannot exist above the bright line. Type 2: Machines can't exist below it.
Philosophical Objections to Machine Ethics Type 1: Ethics cannot exist above the bright line. Ethics has no basis
Philosophical Objections to Machine Ethics Type 1: Ethics cannot exist above the bright line. Ethics has a basis, but machines can’t do it
Philosophical Objections to Machine Ethics Type 1: Ethics cannot exist above the bright line. Ethics has a basis, machines can do it, but should not.
Joseph Weizenbaum continues... “Computers can make judicial decisions, computers can make psychiatric judgments. They can flip coins in much more sophisticated ways than can the most patient human being. The point is that they ought not to be given such tasks. They may even be able to arrive at “correct” decisions in some cases – but always and necessarily on bases no human being would be willing to accept.” Computer Power and Human Reason, p. 227
Philosophical Objections to Machine Ethics Type 1: Ethics cannot exist above the bright line. Ethics has a basis, machines can approximate it, but machines above the bright line will lack a crucial ingredient for real ethical agents: Intentionality, Consciousness, Free Will, Responsibility, …
Philosophical Objections to Machine Ethics
Type 1: Ethics cannot exist above the bright line.
Type 2: Machines can't exist below it.
An open question?