This chapter delves into the applicability of morality and ethics to robots, addressing common concerns and debunking opposing arguments. It explores concepts such as GOFAI, Cartesian dualism, property dualism, determinism, and the intelligibility problem.
EECS 690 April 8 Notes
Purpose of Chapter 4 • The authors address the concern, which many might have, that the concepts of morality and ethics simply cannot be made to apply to (ro)bots. • They hint at some of the arguments made by opponents and at some answers. These arguments and answers can be made more explicit.
GOFAI • GOFAI, pronounced ‘Goofy’, is an acronym for Good Old-Fashioned AI, used chiefly by its detractors. Searle, though a detractor, prefers the term ‘Strong AI’. • Pages 57-58 allude to Searle’s objections; for more detail, see the extra reading.
Cartesian Dualism • Descartes famously argued for a position that has since been dubbed ‘substance dualism’. On this view, minds are composed of a fundamentally different substance than bodies and are amenable to entirely different means of investigation and understanding. • If Descartes is right, then an empirical science of the mind is impossible. However, due primarily to its inability to satisfactorily explain mind/body interaction, substance dualism is as thoroughly abandoned as any idea in philosophy. Despite this abandonment, many positions in the philosophy of mind turn out, on examination, to be clever restatements of substance dualism.
Property Dualism • Despite the similarity in name to the Cartesian position, property dualism is a horse of a different color. • Property dualists accept physicalism (the idea that a science of mind need posit only ordinary matter) but maintain that an adequate explanation of mental events does not bottom out in descriptions of physical states.
“A Special Property” • Some property dualists might use their position to argue that there is some special property that emerges from human brains that software or digital hardware systems cannot duplicate. There is nothing in the general idea of property dualism that requires one to take this further position, though some do (notably Searle).
Determinism and Ethics • Here is the intuitive idea that Wallach and Allen refer to in this section: • There are several things necessary for ethical behavior, and causal determinism prevents these things from obtaining: • A moral agent must be able to have chosen otherwise than she did. • The moral agent must be ultimately responsible for her decisions.
Determinism and (Ro)bots • Whether or not the previous is true, the fact that (ro)bots are deterministic systems is uncontroversial. Even systems that incorporate pseudo-randomness are in effect deterministic (see the sketch below). This is an assumption we make when designing and building them; indeed, the very idea of designing a machine requires determinism. • So, to maintain that the terms ‘moral’ or ‘ethical’ could apply to (ro)bots, we may (1) deny that morality requires what is specified on the previous slide, (2) deny that determinism makes impossible what is specified on the previous slide, or (3) deny that the alternative to determinism is coherent.
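The point about pseudo-randomness can be made concrete with a short Python sketch (an illustration of my own, not from Wallach and Allen): a pseudo-random "decision procedure" seeded with the same value produces exactly the same choices on every run, so its behavior is a deterministic function of its inputs.

    # Minimal sketch: pseudo-randomness does not escape determinism.
    # Re-seeding the generator with the same value reproduces the
    # "random" choices exactly. (Hypothetical example for illustration.)
    import random

    def simulated_decisions(seed, n=5):
        """Return n 'choices' made by a pseudo-random decision procedure."""
        rng = random.Random(seed)  # fixed seed -> fully determined sequence
        return [rng.choice(["act", "refrain"]) for _ in range(n)]

    run_1 = simulated_decisions(seed=42)
    run_2 = simulated_decisions(seed=42)
    assert run_1 == run_2  # identical every time: behavior is a function of the seed
    print(run_1)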
(1) These are not necessary for morality: A moral agent must be able to have chosen otherwise than she did (alternative possibilities, AP). The moral agent must be ultimately responsible for her decisions (ultimate responsibility, UR). - For AP: Consider Martin Luther: “Here I stand, I can do no other.” Whatever he is doing here, he is not trying to duck responsibility. Also consider what has come to be called a “Frankfurt-style” example: John sits in a room that is, unbeknownst to him, locked, and decides to stay in the room rather than leave it. His decision still seems morally relevant, even though no alternative possibility exists. - For UR: Consider Biff, who learns of morality by reading Mill’s “Utilitarianism” and lives strictly by that teaching. Has this necessarily made Biff amoral?
(2) UR and AP are not prohibited by determinism (properly understood) • AP: In order for people to anticipate consequences at all, a large amount of determinism is required, and the ability of persons to navigate the causal chain of events is what gives us what we call ‘choice’. • UR: To be responsible for a choice is to be situated in a causally appropriate way relative to some observed effect, so UR itself requires at least a large amount of determinism.
The Intelligibility Problem • It is a common intuition that determinism is incompatible with responsibility, but does indeterminism do any better? • Consider AP: In an indeterministic system, things could have been other than they are, but they are, ipso facto, outside anyone’s control. • Consider UR: In an indeterministic system, one thing follows another for no determinate reason at all. Is doing something for no reason at all what we call having responsibility? • Even in systems that posit only very isolated instances of indeterminism, the same two concerns recur in microcosm.