
Who Is A Combatant?


Presentation Transcript


  1. Who Is A Combatant? The Predator UAV pilots in Las Vegas, half a world away from the robots they operate, would seem to be classified as combatants and therefore as lawful targets of attack even when they are at home with family, after their shifts have ended and they have left the military base. Does this align with our intuitions? That is, does the 'part-time' character of this job challenge existing rules on who may be targeted, and when? This impacts society in that international law may allow enemies to attack the US in certain locations, such as Las Vegas, bringing the 'War on Terrorism' back to American soil. The use of military robots by the US is seen as cowardly and dishonorable by enemies in the Middle East, and this is expected to fuel resentment, anger, and hatred, and therefore the recruitment efforts of those enemies (Singer 2009; Shachtman 2009). This too appears counterproductive to the larger goal of lasting peace. Would enhanced warfighters be viewed the same way, and what effect would this have on mission success?

  2. Some Issues
  • Raise or lower the barrier for violence?
  • More or less humane than humans?
  • Complexity and unpredictability
  • Legal and criminal responsibility
  • Better or worse discrimination?
  • Moral agency?
  • Human/robot teams; effect on squad cohesion
  • Serious malfunction; robots gone wild
  • Possibility of capture, hacking, or use against us

  3. Lower Barrier For War? (Lin et al.)

  4. Lower Barrier For War?

  5. Opposing Views (Maine et al.)

  6. Surgeon General’s Report 2006. Prof. Ron Arkin (Georgia Tech) argues that the US should pursue autonomous lethal robots because they promise to lower the incidence of noncombatant suffering and death.

  7. Can Robots Perform Better?
  • Need not have self-preservation as a foremost drive; can be self-sacrificing
  • Can handle a broader range of sensory input than a human
  • Avoid the human problem of scenario fulfillment: no premature cognitive closure
  • Able to report the actions of humans and robots on the battlefield independently and dispassionately
  • Designed without emotions that cloud judgment
  • Don’t get tired

  8. ?

  9. Advanced Software
  • Sensor fusion
  • Attack decisions
  • Human supervision: Will a human be able to override? How much time will the human have to decide? Can a robot disobey an order?
  • Operational morality can be programmed, e.g. Rules of Engagement (ROE), but this is insufficient; functional morality is needed
  • Increased autonomy brings the prospect of unanticipated influences: complex environments and possible use in unplanned ways
  • Behavior in complex circumstances cannot be predicted in advance
  • Must have the capacity to assess and respond to moral considerations
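The distinction the slide draws between programmable "operational morality" (fixed ROE checks plus a human in the loop) and the "functional morality" it says is still needed can be made concrete with a toy sketch. Everything below is hypothetical and illustrative; it is not any real weapon system's API, and the rule set and names (`Target`, `roe_permits`, `attack_decision`) are invented for this example.

```python
from dataclasses import dataclass

# Hypothetical sketch of "operational morality": a fixed table of
# Rules of Engagement (ROE) checks gating an attack decision, with a
# human supervisor able to withhold approval. All names are invented.

@dataclass
class Target:
    is_combatant: bool
    near_protected_site: bool   # e.g. hospital, school

def roe_permits(target: Target) -> bool:
    """Top-down rule check: every hard-coded condition must pass."""
    if not target.is_combatant:
        return False
    if target.near_protected_site:
        return False
    return True

def attack_decision(target: Target, human_approves) -> bool:
    """Attack only if the ROE permit AND a human supervisor approves.

    The slide's open questions live here: how much time does the
    human have, and can an override arrive before the robot acts?
    """
    return roe_permits(target) and human_approves(target)
```

A fixed rule table like this is exactly why the slide calls operational morality insufficient: it cannot assess a moral consideration that was not anticipated in the rules, which is what "functional morality" would require.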

  10. Robots Run Amok
  • Very complex: adaptive, emergent behavior; millions of lines of code
  • A 0.001% error rate × 10 million lines = 100 expected errors
  • Tragedies have occurred:
  • South Africa, 2007: a semi-autonomous cannon malfunctioned, killing 9 and wounding 14 friendly soldiers
  • Iraq, 2008: TALON SWORDS units trained their guns on US soldiers; the outcome of the investigation was not reported
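The slide's defect arithmetic can be spelled out directly. The figures (a 0.001% residual error rate, a 10-million-line codebase) are the slide's own illustrative numbers, not measured data:

```python
# The slide's arithmetic: a residual error rate of 0.001% applied
# to a codebase of 10 million lines of code.
error_rate = 0.001 / 100            # 0.001% expressed as a fraction (1e-5)
lines_of_code = 10_000_000
expected_defects = round(error_rate * lines_of_code)
print(expected_defects)             # 100
```

Even an error rate that sounds negligible per line yields a hundred latent defects at this scale, which is the slide's point about emergent failures in very large systems.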

  11. Test of Moral Judgment and Responsibility
  • First, moral agents so conceived are justifiably and uncontroversially held responsible for what is intentional in their actions
  • Second, they may justifiably be held responsible for incidental aspects of those actions of which they should have been aware
  • Third, they may justifiably be held responsible for at least some of the reasonably predictable effects of their actions

  12. Can A Robot Be Responsible?

  13. Can A Robot Be Responsible? (2)

  14. Thoughts On Moral Agency

  15. Programming Morality (Top-Down or Bottom-Up Approaches)
  The challenge of building artificial moral agents (AMAs) might be understood as finding ways to implement abstract moral values within the control architecture of intelligent systems.
  • Top-down: takes a specified ethical theory and analyzes its computational requirements to guide the design of algorithms and subsystems capable of implementing that theory (but one cannot program infinitely many what-ifs)
  • Bottom-up: emphasis is placed on creating an environment where an agent explores courses of action and is rewarded for behavior that is morally praiseworthy (but is accelerated learning possible?)
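The bottom-up approach described above can be illustrated with a toy reward-driven learner: the agent explores actions at random and incrementally raises its estimate of actions a trainer marks as praiseworthy. This is a minimal sketch under invented assumptions; the action names, the reward function, and the bandit-style update are all illustrative, not a real AMA architecture.

```python
import random

# Toy bottom-up learner: explore actions, get trainer feedback,
# update a running value estimate per action. All names are invented.

ACTIONS = ["warn", "withdraw", "fire"]

def trainer_reward(action: str) -> int:
    """Illustrative feedback: +1 for praiseworthy conduct, -1 otherwise."""
    return 1 if action in ("warn", "withdraw") else -1

def train(episodes: int = 1000, seed: int = 0) -> dict:
    rng = random.Random(seed)
    value = {a: 0.0 for a in ACTIONS}        # learned estimate per action
    for _ in range(episodes):
        a = rng.choice(ACTIONS)              # pure exploration
        # Incremental update: nudge the estimate toward the reward.
        value[a] += 0.1 * (trainer_reward(a) - value[a])
    return value
```

After training, the estimates for "warn" and "withdraw" approach +1 while "fire" approaches -1. The slide's parenthetical worry applies directly: such learning needs many episodes, and it is unclear whether it can be accelerated enough to be trusted in a lethal system.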

  16. Supra-Rational Faculties
  • Supra-rational faculties, such as emotions, embodiment in the world, social skills, and consciousness, represent ways that humans gain access to essential information that must be factored into ethically significant choices. Sensory experience and social mechanisms contribute to behavior.
  • Given that morally intelligent behavior may require much more than being rational, the challenge of building artificial moral agents is, from this perspective, a problem of moral psychology, not moral calculation.
