Announcements • Research Paper due today, November 20 • Homework 8 due Thursday, November 29 • Current Events • Mike - now • Presentations • Tuesday (11/27) - Andrew, Chelsea, Kay, Luke • Thursday (11/29) - Beth, Jeff, Joey, Kevin • Tuesday (12/4) - Autumn, Christian, Mike • Final Project due Thursday, December 6
Ethics in AI Lecture 16 CS 484 – Artificial Intelligence
"According to the United Nations Economic Commission for Europe's World Robotics Survey, in 2002 the number of domestic and service robots more than tripled, nearly outstripping their industrial counterparts. ... So what exactly is being done to protect us from these mechanical menaces? 'Not enough,' says Blay Whitby, an artificial-intelligence expert at the University of Sussex in England. ... Robot safety is likely to surface in the civil courts as a matter of product liability. 'When the first robot carpet-sweeper sucks up a baby, who will be to blame?' asks John Hallam, a professor at the University of Southern Denmark in Odense. If a robot is autonomous and capable of learning, can its designer be held responsible for all its actions? Today the answer to these questions is generally 'yes'. But as robots grow in complexity it will become a lot less clear cut, he says."
Intelligent Highway California is working on an “intelligent highway” system that would allow computer-controlled automobiles to travel faster and closer together on freeways than today’s human-controlled cars. What kinds of safety devices would have to be in such a system in order for you to feel comfortable using an intelligent highway?
What is the best course of action? A start-up company, Medick, has been developing an exciting new product for handheld computers that will revolutionize the way nurses keep track of their hospitalized patients. The device will save nurses a great deal of time doing routine paperwork, reduce their stress levels, and enable them to spend more time with their patients. Medick’s sales force has led hospital administrators to believe the product will be available next week as originally scheduled. Unfortunately, the package still contains quite a few bugs. All of the known bugs appear to be minor, but some of the planned tests have not yet been performed. Because of the fierce competition in the medical software industry, it is critical that this company be first to market. It appears a well-established company will release a similar product in a few weeks. If its product appears first, Medick will probably go out of business.
Ethical Theories • Allow proponents to • examine moral problems • reach conclusions • defend conclusions • Examples
Two perspectives • Relativism • No universal moral norms of right and wrong • Different groups with opposite views can both be right • Objectivism • Morality has an existence outside the human mind • Ethical decision-making is a rational process • People can discover objective moral principles with the use of logical reasoning
Kantianism • Categorical Imperative • Act only from moral rules that can at the same time be universal moral laws • Act so that you always treat both yourself and other people as ends in themselves, and never only as means to an end • Focus on what one ought to do - “dutifulness” • Do what is always good without qualification • Good will is the only thing that is universally good
Act Utilitarianism • An action is right (or wrong) to the extent that it increases (or decreases) the total happiness of affected parties • Happiness = advantage, benefit, good, or pleasure • Focus on consequence of actions • No such thing as good or bad motives
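To make the calculation concrete, here is a minimal sketch of the act-utilitarian test, assuming happiness can be scored numerically for each affected party. The action names, parties, and scores are hypothetical illustrations, not part of the lecture material.

```python
# Minimal sketch of the act-utilitarian test, assuming happiness can be
# scored numerically. All action names, parties, and scores are hypothetical.

def total_happiness_change(effects):
    """Sum the happiness change (positive or negative) over every affected party."""
    return sum(effects.values())

def act_utilitarian_choice(actions):
    """Return the action whose consequences maximize total happiness."""
    return max(actions, key=lambda name: total_happiness_change(actions[name]))

# Hypothetical scenario: two possible actions and their effects on three parties.
actions = {
    "ship_now":   {"nurses": +3, "patients": -2, "company": +5},
    "delay_ship": {"nurses": -1, "patients": +4, "company": -3},
}

print(act_utilitarian_choice(actions))  # the action with the greatest net happiness
```

Note that the choice depends only on the summed consequences; on this view the agent's motives never enter the calculation.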
Rule Utilitarianism • One ought to adopt those moral rules which, if followed by everyone, will lead to the greatest increase in total happiness • Apply Principle of Utility to rules rather than actions • Rules should be followed without exception
Social Contract Theory • Morality consists in the set of rules, governing how people are to treat one another, that rational people will agree to accept, for their mutual benefit, on the condition that others follow those rules as well • By living in a civil society, a person’s actions have a moral quality • Close correspondence to rights and duties • Right to life: others have duty not to kill you
Technology is heading here [the singularity]. It will predictably get to the point of making artificial intelligence. The mere fact that you cannot predict exactly when it will happen down to the day is no excuse for closing your eyes and refusing to think about it. Eliezer Yudkowsky, “The Singularity Summit: AI and the Future of Humanity,” September 8, 2007
Question • If you could have a robot that would do any task you like, a companion to do all the work that you prefer not to do, would you want one? And if so, how do you think this might affect you as a person? • Responses
Question • Are there any kinds of robots that shouldn't be created? Or that you wouldn't want to see created? Why? • Responses
Ethical Robots • Utilitarian Machine • Perform a computation on possible actions and choose the action that produces the greatest good • Kantian Machine • Map action plans into categories (forbidden, permissible, obligatory) by a simple consistency test on the plans • For every maxim m, a machine could tell whether it is an element of F (forbidden maxims) • What kind of machine should we create?
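As a way to see the contrast between the two designs, here is a rough sketch in Python. The scoring function, the consistency ("universalizability") test, and all names are assumed placeholders; the slide does not specify how either would actually be computed.

```python
# Rough sketch of the two machine designs above. The scoring function and the
# consistency test are toy stand-ins for the genuinely hard parts.

def utilitarian_machine(actions, expected_good):
    """Choose the action whose computed consequences maximize the total good."""
    return max(actions, key=expected_good)

def kantian_machine(maxims, is_universalizable):
    """Sort each maxim m into forbidden (the set F), permissible, or obligatory
    using a simple consistency test on the maxim and on its negation."""
    categories = {"forbidden": [], "permissible": [], "obligatory": []}
    for m in maxims:
        if not is_universalizable(m):
            categories["forbidden"].append(m)        # m is an element of F
        elif not is_universalizable("do not " + m):
            categories["obligatory"].append(m)       # omitting m cannot be universalized
        else:
            categories["permissible"].append(m)
    return categories

# Hypothetical usage with a toy consistency test:
maxims = ["keep promises", "lie for personal gain"]
toy_test = lambda m: "lie" not in m
print(kantian_machine(maxims, toy_test))
```

The sketch only makes the question sharper: the utilitarian machine needs a defensible way to score consequences, and the Kantian machine needs a real consistency test, which is exactly where the philosophical work lies.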