Machine Ethics: A Brief Tutorial

Presentation Transcript


  1. Machine Ethics: A Brief Tutorial. Jim Moor, Philosophy, Dartmouth College, Hanover, NH 03755, USA. james.moor@dartmouth.edu

  2. Science is Value-Laden. Computer Science is no exception.

  3. Normativity in Science. Values and norms are an essential part of all productive sciences qua sciences. They are used not only to establish the suitability of existing claims but also to select new goals to pursue. Scientific evidence and theories are often evaluated as good or bad, and scientific procedures as what ought or ought not to be done.

  4. Historical computer science illustration of such evaluation: Herbert Simon’s reply to Jacques Berleur November 20, 1999 “My reply to you last evening left my mind nagged by the question of why Trench Moore, in his thesis, placed so much emphasis on modal logics. The answer, which I thought might interest you, came to me when I awoke this morning. Viewed from a computing standpoint (i.e., discovery of proofs rather than verification), a standard logic is an indeterminate algorithm: it tells you what you MAY legally do, but not what you OUGHT to do to find a proof. Moore viewed his task as building a modal logic of “oughts” -- a strategy for search -- on top of the standard logic of verification.”
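
Simon's MAY/OUGHT distinction can be made concrete in code. Below is a hypothetical sketch, not from the slides: the inference rule defines what a prover may legally derive, while a separate heuristic encodes what it ought to try first. The toy rule set and all names are assumptions.

```python
# A minimal, hypothetical sketch of Simon's MAY/OUGHT distinction.
# The rule set (modus ponens over propositional atoms) defines what a
# prover MAY legally derive; the heuristic encodes what it OUGHT to
# try first: a search strategy layered on top of the logic.

def legal_steps(known, implications):
    """MAY: every modus ponens step the logic currently permits."""
    return [(p, q) for (p, q) in implications if p in known and q not in known]

def prove(goal, facts, implications):
    known = set(facts)
    while goal not in known:
        steps = legal_steps(known, implications)
        if not steps:
            return False
        # OUGHT: a crude heuristic preferring steps that conclude in the goal.
        steps.sort(key=lambda s: 0 if s[1] == goal else 1)
        known.add(steps[0][1])
    return True

# From A, with A -> B and B -> C, derive C.
print(prove("C", {"A"}, [("A", "B"), ("B", "C")]))  # True
```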

  5. Normativity in Science. Moreover, ethical norms often play a role in the evaluation of science done properly. This is particularly true as a science becomes more applied. Welcome to philosophy! Of course, computer science has had philosophical roots from the beginning. Consider Hobbes, Pascal, or Leibniz.

  6. Computer Ethics vs. Machine Ethics. Roughly, Computer Ethics emphasizes the responsibility of computer users to act ethically, for example with regard to privacy, property, and power, whereas Machine Ethics emphasizes building ethical abilities and sensitivities into the computers themselves.

  7. Grades of Machine Ethics
  • Normative Computer Agents
  • Ethical Impact Agents
  • Implicit Ethical Agents
  • Explicit Ethical Agents
  • Autonomous Explicit Ethical Agents
  • Full Ethical Agents

  8. Machine Ethics
  • Normative Computer Agents
  • Ethical Impact Agents
  • Implicit Ethical Agents
  • Explicit Ethical Agents
  • Autonomous Explicit Ethical Agents
  • Full Ethical Agents

  9. Where is the machine decision-maker?

  10. BigBelly. Compacts 300 gallons of garbage (pictured in NYC). Solar powered. A wireless sensor transmits a pickup request when the bin is nearly full. http://www.wired.com/news/planet/0,2782,66993,00.html
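
For concreteness, a minimal sketch of the trigger logic such a bin might run; the function names, threshold, and radio interface are assumptions, not BigBelly's actual firmware.

```python
# A hypothetical sketch of the bin's pickup trigger; the threshold and
# the transmit callback are illustrative assumptions.

FULL_THRESHOLD = 0.9  # assumed: request pickup at 90% of capacity

def check_and_transmit(fill_level: float, transmit) -> bool:
    """Signal for a pickup once the compacted load is nearly full."""
    if fill_level >= FULL_THRESHOLD:
        transmit("pickup-requested")
        return True
    return False

check_and_transmit(0.93, transmit=print)  # prints: pickup-requested
```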

  11. Machine Ethics
  • Normative Computer Agents
  • Ethical Impact Agents
  • Implicit Ethical Agents
  • Explicit Ethical Agents
  • Autonomous Explicit Ethical Agents
  • Full Ethical Agents

  12. Robot Camel Jockeys of Qatar “Camel jockey robots, about 2 feet high, with a right hand to bear the whip and a left hand to pull the reins. Thirty-five pounds of aluminum and plastic, a 400-MHz processor running Linux and communicating at 2.4 GHz; GPS-enabled, heart rate-monitoring (the camel's heart, that is) robots.” http://www.wired.com/wired/archive/13.11/camel.html?tw=wn_tophead_4

  13. Robot Camel Jockeys of Qatar “Every robot camel jockey bopping along on its improbable mount means one Sudanese boy freed from slavery and sent home.” http://www.wired.com/wired/archive/13.11/camel.html?tw=wn_tophead_4

  14. Ethical Impact Agents: How well might machines themselves handle basic ethical issues of privacy, property, power, etc.?

  15. Machine Ethics
  • Normative Computer Agents
  • Ethical Impact Agents
  • Implicit Ethical Agents
  • Explicit Ethical Agents
  • Autonomous Explicit Ethical Agents
  • Full Ethical Agents

  16. Implicit Ethical Agents: Ethical considerations such as safety and reliability built into the machine. Examples: ATMs, air traffic control software, drug interaction software.
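
The drug-interaction example makes the "implicit" part concrete: the machine never reasons about ethics; the safety check is simply wired into its normal operation. A minimal sketch, with an assumed interaction table and illustrative drug names:

```python
# A minimal sketch of an implicit ethical agent: the safety constraint is
# built into normal operation rather than represented or reasoned about.
# The interaction table is an illustrative assumption.

KNOWN_INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}),  # a well-known bleeding risk
}

def safe_to_prescribe(new_drug: str, current_drugs: list[str]) -> bool:
    """Refuse any prescription that pairs with a known interaction."""
    return all(
        frozenset({new_drug, d}) not in KNOWN_INTERACTIONS
        for d in current_drugs
    )

print(safe_to_prescribe("aspirin", ["warfarin"]))   # False: blocked
print(safe_to_prescribe("aspirin", ["vitamin c"]))  # True: no known conflict
```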

  17. Machine Ethics
  • Normative Computer Agents
  • Ethical Impact Agents
  • Implicit Ethical Agents
  • Explicit Ethical Agents
  • Autonomous Explicit Ethical Agents
  • Full Ethical Agents

  18. Explicit Ethical Agents: Ethical concepts represented and used. Example: carebots.

  19. Isaac Asimov’s Three Laws of Robotics
  • (1) A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  • (2) A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  • (3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
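
An explicit ethical agent needs such rules represented so they can be applied. One way to see what that involves is a minimal sketch of the Three Laws as a lexicographic filter over candidate actions; the Action fields and example actions are assumptions, and only the priority ordering comes from Asimov.

```python
# A minimal sketch of the Three Laws as a lexicographic filter.
# Predicting an action's effects (the boolean fields) is assumed solved
# here; in practice that prediction is the hard part.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool       # would the action injure a human?
    inaction_harm: bool     # would choosing it let a human come to harm?
    obeys_order: bool       # does it follow a human order?
    self_destructive: bool  # does it endanger the robot?

def first_law_ok(a: Action) -> bool:
    # The First Law dominates: no injury, and no harm through inaction.
    return not (a.harms_human or a.inaction_harm)

def choose(actions: list[Action]) -> Action | None:
    legal = [a for a in actions if first_law_ok(a)]
    # Second Law (obedience) outranks Third Law (self-preservation).
    legal.sort(key=lambda a: (not a.obeys_order, a.self_destructive))
    return legal[0] if legal else None

options = [
    Action("ignore order, stay safe", False, False, False, False),
    Action("obey order, risk chassis", False, False, True, True),
]
print(choose(options).name)  # "obey order, risk chassis": Second Law wins
```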

  20. Just Consequentialism
  • Core Values
  • Consequences (foreseeable)
  • Justice
    1. Rights and Duties
    2. Impartiality in Policy

  21. Core Values: Life, Knowledge, Happiness, Freedom, Ability, Opportunities, Security, Resources

  22. A Way to Remember Core Values: Happy Life? Ability, Security, Knowledge, Freedom, Opportunities, Resources (ASK FOR)

  23. Test Question for Judging Policies: Is this a policy that a fully informed, rational, impartial person would freely advocate as a public policy?

  24. A Few Observations About Just Consequentialism
  • Bounded rationality
  • Not maximizing
  • Impartiality
  • Dynamic and revisable policies
  • Handles conflicts
  • Procedure, not algorithm
  • Often not just one correct solution
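
Anticipating the next slide's question, here is one hypothetical reading of the procedure in code: screen policies by the justice test, then weigh foreseeable consequences for the core values. The Policy fields and the scoring scheme are assumptions; note that it returns a ranked set of acceptable policies rather than one "correct" answer.

```python
# A minimal sketch of just consequentialism as a procedure, not an
# algorithm: justice screens first, bounded weighing of foreseeable
# consequences second. Field names and scoring are illustrative.

from dataclasses import dataclass, field

CORE_VALUES = ["life", "knowledge", "happiness", "freedom",
               "ability", "opportunities", "security", "resources"]

@dataclass
class Policy:
    name: str
    violates_rights: bool   # justice screen: rights and duties
    impartial: bool         # justice screen: impartiality in policy
    impact: dict = field(default_factory=dict)  # value -> foreseeable effect

def just(p: Policy) -> bool:
    """Would a fully informed, rational, impartial person advocate it publicly?"""
    return p.impartial and not p.violates_rights

def weigh(p: Policy) -> float:
    """Rough, bounded weighing of foreseeable consequences (not maximizing)."""
    return sum(p.impact.get(v, 0.0) for v in CORE_VALUES)

def acceptable(policies: list[Policy]) -> list[Policy]:
    # A ranked set of acceptable policies, not one "correct" answer:
    # the slides stress there is often no unique solution.
    return sorted((p for p in policies if just(p)), key=weigh, reverse=True)
```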

  25. Could a machine use Just Consequentialism?

  26. Machine Ethics
  • Normative Computer Agents
  • Ethical Impact Agents
  • Implicit Ethical Agents
  • Explicit Ethical Agents
  • Autonomous Explicit Ethical Agents
  • Full Ethical Agents

  27. Autonomous Ethical Agents: Make ethical decisions and take ethical actions (not merely decisions and actions that happen to be ethical) on a dynamic basis as they interact with their environment. These agents are not conscious, but neither are they merely giving canned responses to situations. They may learn and adapt their behavior. Examples?

  28. Autonomous Ethical Agents: Make ethical decisions and take ethical actions (not merely decisions and actions that happen to be ethical) on a dynamic basis as they interact with their environment. These agents are not conscious, but neither are they merely giving canned responses to situations. They may learn and adapt their behavior. What would an example be like?

  29. Disaster Relief Software Agent. Suppose it receives information about who is in need, how badly they are injured, where supplies are, etc. It makes decisions, with limited resources, about who gets what; triage is sometimes required. Might it not run FEMA better than humans? More ethically?
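
A minimal sketch of the triage decision such an agent would face: allocate limited supplies by urgency. The Victim fields, severity scale, and greedy strategy are illustrative assumptions; real triage protocols are far more involved.

```python
# A hypothetical triage allocator: treat the most severely injured first
# until stock runs out. Fields and the 1..10 severity scale are assumed.

from dataclasses import dataclass

@dataclass
class Victim:
    name: str
    severity: int         # assumed scale: 1 (minor) .. 10 (critical)
    supplies_needed: int

def allocate(victims: list[Victim], stock: int) -> dict[str, int]:
    """Greedy triage: most severe first, within the available stock."""
    plan: dict[str, int] = {}
    for v in sorted(victims, key=lambda v: v.severity, reverse=True):
        give = min(v.supplies_needed, stock)
        if give == 0:
            break
        plan[v.name] = give
        stock -= give
    return plan

victims = [Victim("a", 9, 4), Victim("b", 3, 2), Victim("c", 7, 5)]
print(allocate(victims, stock=7))  # {'a': 4, 'c': 3}: severest first
```

Note that the greedy-by-severity rule is itself an ethical commitment (others would distribute more equally); choosing and defending such a rule is exactly where machine ethics enters.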

  30. More Ethical Military Machines?

  31. Driverless Cars

  32. Driverless Cars

  33. Driverless Truck

  34. The Trolley Problem

  35. Why is Machine Ethics Important?
  • Normative Computer Agents
  • Ethical Impact Agents
  • Implicit Ethical Agents
  • Explicit Ethical Agents
  • Autonomous Explicit Ethical Agents
  • Full Ethical Agents

  36. Why is Machine Ethics Important?
  1. Ethics is important
  2. Machines will have increased control and autonomy
  3. Opportunity to understand ethics better

  37. Can a Machine Ever Be a Full Ethical Agent?

  38. Philosophical Objections to Machine Ethics Type 1: Ethics cannot exist above the bright red line. Type 2: Machines can’t exist below it.

  39.
  • Normative Computer Agents
  • Ethical Impact Agents
  • Implicit Ethical Agents
  • Explicit Ethical Agents
  • Autonomous Explicit Ethical Agents
  • Full Ethical Agents

  40. Philosophical Objections to Machine Ethics Type 1: Ethics cannot exist above the bright line. Ethics has no basis

  41. Philosophical Objections to Machine Ethics Type 1: Ethics cannot exist above the bright line. Ethics has a basis, but machines can’t do it

  42. Philosophical Objections to Machine Ethics Type 1: Ethics cannot exist above the bright line. Ethics has a basis, machines can do it, but should not.

  43. Joseph Weizenbaum continues... “Computers can make judicial decisions, computers can make psychiatric judgments. They can flip coins in much more sophisticated ways than can the most patient human being. The point is that they ought not to be given such tasks. They may even be able to arrive at “correct” decisions in some cases – but always and necessarily on bases no human being would be willing to accept.” Computer Power and Human Reason, p. 227

  44. Philosophical Objections to Machine Ethics Type 1: Ethics cannot exist above the bright line. Ethics has a basis, machines can approximate it, but machines above the bright line will lack a crucial ingredient for real ethical agents: Intentionality, Consciousness, Free Will, Responsibility, …

  45.
  • Normative Computer Agents
  • Ethical Impact Agents
  • Implicit Ethical Agents
  • Explicit Ethical Agents
  • Autonomous Explicit Ethical Agents
  • Full Ethical Agents

  46. Philosophical Objections to Machine Ethics Type 1: Ethics cannot exist above the bright line. Type 2: Machines can't exist below it. Open question?
