Lecture 9: AI, magic and deception • Adaptive Robotics 2008
Assignment feedback • Read the question! • Try to answer the question – be explicit about the link between what you cover and how it relates to the question • Structure your argument – e.g. with an introduction and conclusion • Start by saying what you are going to say • Finish by reflecting back on what you have said.
Research – Wikipedia versus published sources • Include material from the course – show that you know and have understood it.
Mark range 45-78% • Abstract – summarises the whole essay or paper.
Abstract e.g. “There have been many different approaches to robotics, two of which are the more recent behaviour-based robotics and good old fashioned AI. The characteristics of each approach vary and both have advantages and disadvantages depending on the overall purpose of the robot. The characteristics of these two approaches will be explored and contrasted”
“Robots in the news” • Robot play at Osaka University • “Hataraku Watashi” (I, worker) • Robots speak lines, and share stage with humans • About a housekeeping robot that loses its motivation to work. • Wakamaru robot (Mitsubishi)
Lecture 9: AI, magic and deception • Human-Robot interaction • Attempts to make humanoid robots, or convincing robot pets • “Android science” (Karl MacDorman) • Robots creating the illusion of life and animacy • Factors to exploit: • Interest in technology • Human tendency to anthropomorphism • Human tendency to zoomorphism • “Darwinian buttons” • See early examples … up to recent examples • See some experiments on HRI, examining what affects our interactions • Should we do this? Class discussion …
Deception and AI • ELIZA – creating the illusion of understanding • Automata – creating the illusion of life • “Android Science” – creating robots with human appearance.
Vaucanson’s duck • Created 1739 by Jacques Vaucanson • Appeared to eat kernels of grain, digest them and defecate • (in fact, pre-made pellets had been inserted into the duck’s rear – the “digestion” was an illusion)
Chess playing automaton: The Turk • Constructed in 1769 by Baron Wolfgang von Kempelen for the Austrian Empress Maria Theresa • Played a strong game of chess against many human opponents over an 80-year period – including Benjamin Franklin and Napoleon • In fact operated by a human chess master hidden inside the cabinet
Gakutensoku (“learning from the laws of nature”) • Built in 1929 by Makoto Nishimura for celebrations of the ascension of Emperor Hirohito to the throne • Could smile, move its eyes, cheeks and chest, and move a pen • Worked by forcing compressed air through hidden rubber tubes • Seated behind a desk • People would remove their hats and pray to it.
Westinghouse robots • 1927 Roy James Wensley and Televox • New mechanism for controlling electrical substations • Previously – controller would phone worker in substation and tell them which switch to open. Worker would open switch and report back. • New idea – replacing worker with bank of relay switches that could be operated by calling them on the phone • 3 tones from tuning forks directed to phone • At receiving end, tones amplified to operate relay.
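The tone-to-relay scheme above can be sketched in code. This is a toy simulation of the idea only – a sequence of tuning-fork tones sent down the phone line selects and operates relays at the substation; the frequencies, command names and `RelayBank` class are invented for illustration, not the historical circuit.

```python
# Invented mapping from tuning-fork tone (Hz) to a relay command.
TONE_TO_COMMAND = {
    440: "select_next_switch",   # step to the next relay in the bank
    600: "open_switch",          # open the currently selected relay
    900: "report_status",        # send a confirmation back down the line
}

class RelayBank:
    """A bank of relay switches driven by tones decoded from the phone line."""

    def __init__(self, n_switches):
        self.switches = [False] * n_switches  # False = closed, True = open
        self.selected = 0

    def handle_tone(self, freq):
        command = TONE_TO_COMMAND.get(freq)  # unknown tones are ignored
        if command == "select_next_switch":
            self.selected = (self.selected + 1) % len(self.switches)
        elif command == "open_switch":
            self.switches[self.selected] = True
        elif command == "report_status":
            state = "open" if self.switches[self.selected] else "closed"
            return f"switch {self.selected}: {state}"
        return None

bank = RelayBank(3)
bank.handle_tone(440)          # select switch 1
bank.handle_tone(600)          # open it
print(bank.handle_tone(900))   # switch 1: open
```

The point of the sketch is how little machinery is needed: the "mechanical man" was a tone decoder and a relay bank, with everything else supplied by the observer.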
“Televox” • Wensley’s machine consisted of 2 boxes of electronics • Westinghouse publicity team – branded it a “mechanical man” • Wensley added head, body and limbs made from prop board • Story spread rapidly • “The club woman with Televox in her home may call up at five o’clock, inquire of Televox what the temperature in the living room is, have Televox turn up the furnace, light the oven in which she has left the roast, light the lamp in the living room, and do whatever else she may wish. Televox comes near to being a scientist’s realization of a dramatist’s fantasy.” (1928)
“American engineer H. J. Wensley of Westinghouse laboratories just created a robot which he named “Televox” because it follows directions remotely from voice commands or sounds of a musical instrument. The vibrations trigger an electric motor in the robot which makes it act according to the commands received.… This bewildering being is the most striking design of our mechanical time, whose creations, having neither sense nor brain, achieve a perfection that truly appears to approach the supernatural.” • Vu magazine, 1928
Other Westinghouse robots • Katrina Televox • Willie Vocalite – smoked cigarettes • Controlled by instructions spoken into a telephone – different responses triggered by the number of syllables • Elektro – a 7 ft walking robot, remote controlled by voice commands • Sparko – a dog for Elektro
In these examples, no attempt to model humans, or animals • No attempt to make the mechanisms underlying their behaviour the same as those of humans, or animals • Aim instead is to create an illusion • Of life • Of understanding • Also can serve to advertise a company • E.g. Westinghouse robots • E.g. Asimo and Honda
Factors making AI magic and deception easier • Humans have a natural tendency to anthropomorphise machines • E.g. talking to your car, or your computer as if it could understand you, and as if it could choose to behave well or not • E.g. seeing faces in inanimate objects
Anthropomorphism – attributing human characteristics to non-human creatures • Zoomorphism – attributing animal characteristics to non-animals
Factors making AI magic and deception easier • “willing suspension of disbelief” • Exploited by puppeteers • (see Heart robot) • See children with their favourite teddy bear
Heart Robot • Developed at the University of the West of England • Designed to encourage emotional responses • Part robot, part puppet – operated by an expert puppeteer • Robot appears to respond emotionally to human encounters • When hugged and treated gently, its limbs become limp, its eyelids lower, its breathing relaxes and its heartbeat slows down • If shaken or shouted at, it flinches, clenches its hand, its breathing and heart rate speed up and its eyes widen in dismay.
Factors making AI magic and deception easier Sherry Turkle (2006) talks of how robots, or toys, that seem to need nurturing and care “push our Darwinian buttons”
Paro and My Real Baby • Therapeutic seal and interactive doll • Turkle et al (2006) studied elderly care-home residents’ interactions with them • Method: observation and conversations with technology users
Turkle (1995) notes a tendency among both children and adults to treat computer artifacts that are minimally responsive as more intelligent than they really are: “Very small amounts of interactivity cause us to project our own complexity onto the undeserving object” – e.g. the Tamagotchi phenomenon.
Kismet • Cynthia Breazeal, MIT • Sherry Turkle (2006) looked at children interacting with Kismet and Cog • Found they preferred to see Kismet as something with which they could have a relationship • They would develop elaborate explanations for Kismet’s failures to understand, or to respond appropriately • E.g. “Kismet is too shy”, “Kismet is not feeling well”
Human-Robot interaction • What creates an illusion of intelligence? • What kind of robot do people prefer to interact with? • Humanoid? Furry? Friendly? • Eye contact, turn taking
Uncanny valley • Japanese roboticist Masahiro Mori wrote about the uncanny valley in 1970. • Mori's hypothesis states that as a robot is made more humanlike in its appearance and motion, the emotional response from a human being to the robot will become increasingly positive and empathic, until a point is reached beyond which the response quickly becomes that of strong repulsion.
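Mori never gave a formula for this curve, so the piecewise function below is purely a schematic illustration of the hypothesis: affinity rises with human likeness, drops sharply in the "valley" for near-but-not-quite-human figures, then recovers for a fully human appearance. The breakpoints (0.7 and 0.85) and slopes are invented for the sketch.

```python
def affinity(likeness):
    """Map human likeness in [0, 1] to a schematic affinity score.

    Purely illustrative shape for Mori's uncanny-valley hypothesis;
    the breakpoints and slopes are assumptions, not measured values.
    """
    if likeness < 0.7:
        # Industrial robot -> stylised humanoid: affinity rises steadily.
        return likeness / 0.7
    if likeness < 0.85:
        # Entering the valley: almost-human appearance, sharp drop.
        return 1.0 - 12.0 * (likeness - 0.7)
    # Climbing out of the valley toward a healthy human.
    return -0.8 + (likeness - 0.85) * (1.8 / 0.15)

for x in [0.0, 0.5, 0.7, 0.85, 1.0]:
    print(f"likeness {x:.2f} -> affinity {affinity(x):+.2f}")
```

The shape captures Mori's key claim: a moderately humanlike robot (0.5) is rated better than a nearly human one (0.8), even though the latter is "closer" to us.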
Total Turing Test? • Ishiguro, 2006 • Factors affecting our acceptance of robots as social partners • Android in a booth, viewed for 1 or 2 seconds: • Static android • Moving android (micro movements) • Real human • Task – check the colour of a cloth • 80% realised it was an android in the static condition • 76.9% did not realise it was an android in the moving condition
Factors encouraging human-robot interaction • Appearance • Movement • Emotional expression
Factors encouraging human-robot interaction • Conversation – e.g. turn taking, nodding encouragingly • Eye contact • Contingency – responding quickly enough • Sometimes Wizard of Oz approach used • Recognizing and responding to your emotion • Face? Voice? Body language?
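The contingency factor above – responding quickly enough that the robot's behaviour feels caused by what the human just did – can be sketched as a simple timing test. The one-second window below is an assumption chosen for illustration, not a figure from the lecture.

```python
# Assumed threshold: a response arriving later than this no longer
# feels connected to the human's action (illustrative value only).
CONTINGENCY_WINDOW = 1.0  # seconds

def seems_contingent(event_time, response_time):
    """True if the robot's response is fast enough to feel caused by the event."""
    delay = response_time - event_time
    return 0.0 <= delay <= CONTINGENCY_WINDOW

print(seems_contingent(10.0, 10.4))  # quick giggle after a pat -> True
print(seems_contingent(10.0, 13.0))  # too late to feel connected -> False
```

This is also why Wizard-of-Oz setups work: a hidden human operator can hit this timing window reliably, while fully autonomous perception often cannot.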
Rubi/Qrio project • UCSD (University of California, San Diego) • Studied interactions between children and QRIO robot in day care centre over 5 months
Measured the interactions between toddlers and robots – the toddlers interacted with QRIO more than with a toy robot or a teddy bear • The robot responded contingently – e.g. giggling when patted on the head • Partly Wizard of Oz (hidden operator) • Their interactions decreased in a middle period when the robot performed a preset, but elaborate, dance.
“Results indicate that current robot technology is surprisingly close to achieving autonomous bonding and socialization with human toddlers for sustained periods of time.” (Tanaka et al, 2007) • But the toddlers’ interactions were supervised and guided by adults • Also interaction times were limited (1-hour sessions) • And there was some remote control
Social implications? • Making robots appear intelligent? • Making them seem to care (“I love you”)? • Using them as companions?
Hello Kitty robot Website claims: "This is a perfect robot for whoever does not have a lot time to stay with their child. Hello Kitty Robot can help you to stay with your child to keep them from being lonely."
Robot companions and carers for the elderly, or the very young- are they a good thing? • Discussion in pairs, then 4s etc, then report back advantages and disadvantages.
The March of the Robot Dogs – Sparrow (2002) • Robot “pets” suggested as companions for the elderly • There are some demonstrable benefits • But “For an individual to benefit significantly from ownership of a robot pet they must systematically delude themselves regarding the real nature of their relation with the animal.” (Sparrow, 2002)
Living animal pets can share experiences with us • Sparrow argues that it is right to value our relationships with them • But a robot is not something we can have a relationship with • To think otherwise is to be deluded • It is morally wrong to delude old people into thinking they can have a relationship with a robot pet • Also, old people need human contact – the more robots are used in their care, the less human contact they will receive.