Humanoids References: • Ch. 56, Springer Handbook of Robotics, Charles C. Kemp, Paul Fitzpatrick, Hirohisa Hirukawa, Kazuhito Yokoi, Kensuke Harada, Yoshio Matsumoto • Rob's Robot: Current and Future Challenges for Humanoid Robots, by B. Duran and S. Thill • Humanoids survey from the WTEC (World Technology Evaluation Center) International Study of Robotics Research, 2004 (Ambrose, Zheng, Wilcox) • iCub robot slide presentation
Why Humanoids? • We are moving away from fixed industrial robots with a single purpose/task (e.g. factory automation) • New generation of adaptive robots that can operate in a variety of scenarios • Robots are showing increasing levels of cognition and intelligence • Research on Humans (mind, body, behavior, learning) is being ported to robots • This is all coming together to build new Humanoids
Why Humanoids? • The Human Example: versatile, adaptable. • What to borrow from humans? Is it always necessary to replicate the human? (e.g. a dishwasher washes dishes without looking human) • The Pleasing Mirror: humans are obsessed with humans (People Magazine!). • Humans are social animals – they interact with each other in many different modes of communication. Humanoids will also have to learn this – gesturing, facial expressions, eye gaze, etc. • Understanding Intelligence: mind/body relationships, development issues • Interacting with the Human World: tools, doors, desks are human sized. • Legged mobility allows easy access to many small areas, use of stairs, etc. • Allows for human-robot collaboration • Interacting with People: emotions, grounding, learning tasks from humans • Entertainment, Culture, and Surrogates: robots can buy our clothes, test ergonomics, or serve as stand-ins (avatars). • Prosthetics: parts of humanoids may be replicated for prosthetics
Applications • Military & Security Search and rescue, mine/improvised explosive device (IED) handling, logistics, and direct weapons use • Medical Search and rescue, patient transfer, nursing, elder care, friendship • Home Service Cleaning, food preparation, shopping, inventory, home security • Space Working safely with space-walking astronauts, caretakers between crews • Dangerous Jobs Operating construction equipment, handling cargo, firefighting, security • Manufacturing Small parts assembly, inventory control, delivery, customer support
History of Humanoids • Early History: Leonardo da Vinci, mechanical dolls, Disney animatronics • WABOT, Waseda University • Honda ASIMO • MIT Cog
Humanoids • Different Forms: arms, hands, legs, wheels, heads • Different DOFs: varies widely by design; with a large number of DOFs there is no closed-form solution for inverse kinematics – the kinematics are redundant (see the sketch below) • Different Sensor packages: vision, range, speech, audition, touch, force, etc. • Different physical sizes, actuation methods, and degrees of anthropomorphism
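To make the redundancy point concrete, here is a minimal sketch (not from the referenced texts) of damped least-squares inverse kinematics for an illustrative 4-DOF planar arm: with more joints than task coordinates there is no unique closed-form solution, so an iterative Jacobian-based method converges to one of many valid configurations. The link lengths, gains, and target below are made-up values.

```python
import numpy as np

def fk_planar(q, link_lengths):
    """Forward kinematics of a planar serial arm: end-effector (x, y)."""
    angles = np.cumsum(q)
    return np.array([np.sum(link_lengths * np.cos(angles)),
                     np.sum(link_lengths * np.sin(angles))])

def jacobian_planar(q, link_lengths):
    """2 x n Jacobian of the planar arm."""
    angles = np.cumsum(q)
    J = np.zeros((2, len(q)))
    for i in range(len(q)):
        # Joint i moves every link from i onward.
        J[0, i] = -np.sum(link_lengths[i:] * np.sin(angles[i:]))
        J[1, i] =  np.sum(link_lengths[i:] * np.cos(angles[i:]))
    return J

def ik_damped_least_squares(q0, target, link_lengths, damping=0.05,
                            step=0.5, iters=200):
    """Iterative IK: the arm is redundant (n joints > 2 task DOFs),
    so we converge to *one* of many possible joint configurations."""
    q = np.array(q0, dtype=float)
    for _ in range(iters):
        err = target - fk_planar(q, link_lengths)
        if np.linalg.norm(err) < 1e-4:
            break
        J = jacobian_planar(q, link_lengths)
        # Damped pseudoinverse step: dq = J^T (J J^T + lambda^2 I)^-1 err
        JJt = J @ J.T + (damping ** 2) * np.eye(2)
        q += step * (J.T @ np.linalg.solve(JJt, err))
    return q

# Illustrative 4-DOF planar arm reaching a point in the plane.
links = np.array([0.3, 0.3, 0.2, 0.1])
q_sol = ik_damped_least_squares([0.1, 0.2, 0.1, 0.1], np.array([0.5, 0.4]), links)
print(q_sol, fk_planar(q_sol, links))
```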
Current State of the Art • Gakutensoku? • Honda ASIMO: 34 DOF, 54 kg, 130 cm height • Mobility and locomotion: walk, run, climb stairs • Sensing: object detection, tracking, localization, obstacle avoidance • Hubo-2: 40 DOF, 45 kg, 130 cm height • Can walk, run, and perform sensorimotor tasks • iCub: 53 DOF, 22 kg, 104 cm height • Open-source hardware and software, easy to replicate • Robonaut 2: 42 DOF, 150 kg, 100 cm height (waist to head) • Willow Garage PR2
Locomotion and Walking • Bipedal locomotion • ZMP (Zero Moment Point) Based Control • Falling Down, Getting Up: gyros, accelerometers, force feedback needed • Localization and Obstacle Detection: need to perceive the environment • Real-time constraints
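As a rough illustration of what ZMP-based control monitors, the sketch below evaluates the Zero Moment Point under the common linear inverted pendulum approximation (constant CoM height, p = x − (z_c / g)·ẍ) and checks whether it stays inside an assumed support region. The CoM height, foot bounds, and trajectory are illustrative values, not taken from any specific robot.

```python
import numpy as np

G = 9.81          # gravity (m/s^2)
Z_COM = 0.8       # assumed constant CoM height (m), linear inverted pendulum model
FOOT_X = (-0.10, 0.15)   # illustrative support-polygon bounds along x (m)

def zmp_from_com(com_x, dt):
    """ZMP under the linear inverted pendulum model: p = x - (z_c / g) * x_ddot."""
    com_acc = np.gradient(np.gradient(com_x, dt), dt)   # numerical x_ddot
    return com_x - (Z_COM / G) * com_acc

def zmp_inside_support(zmp_x):
    """True where the ZMP stays inside the (1-D) support region --
    the basic condition ZMP-based controllers try to enforce."""
    return (zmp_x >= FOOT_X[0]) & (zmp_x <= FOOT_X[1])

# Illustrative CoM sway during one step, sampled at 100 Hz.
t = np.arange(0.0, 1.0, 0.01)
com_x = 0.02 * np.sin(2 * np.pi * 1.0 * t)
zmp = zmp_from_com(com_x, dt=0.01)
print("ZMP always inside support:", bool(np.all(zmp_inside_support(zmp))))
```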
Manipulation • Hands are the least developed area of humanoids • Dexterous manipulation is a difficult problem • Bi-manual manipulation is not well understood • Low payloads and restricted DOFs for grasping are the rule – parallel-jaw grippers are still common • Vision, force, and tactile sensing all need to be fused for intelligent manipulation
Sensing for Manipulation • Model-Based computer vision • Feature-Based computer vision • Active perception • Force, Torque and Tactile Sensing • Compliance
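To give a flavor of the feature-based vision approach listed above, here is a hedged sketch using OpenCV's ORB features to locate a stored object model in the current camera frame via descriptor matching and a RANSAC homography. The image file names and thresholds are placeholders; a real manipulation pipeline would go on to recover a 3-D grasp pose.

```python
import cv2
import numpy as np

def match_object(model_img, scene_img, min_matches=12):
    """Feature-based recognition: match ORB keypoints between an object
    model image and the current frame, then estimate a homography that
    locates the object in the scene."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_m, des_m = orb.detectAndCompute(model_img, None)
    kp_s, des_s = orb.detectAndCompute(scene_img, None)
    if des_m is None or des_s is None:
        return None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_m, des_s), key=lambda m: m.distance)
    if len(matches) < min_matches:
        return None

    src = np.float32([kp_m[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_s[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H   # 3x3 homography mapping model pixels into the scene

# Usage (file names are placeholders for a camera frame and a stored model):
# model = cv2.imread("mug_model.png", cv2.IMREAD_GRAYSCALE)
# scene = cv2.imread("table_view.png", cv2.IMREAD_GRAYSCALE)
# print(match_object(model, scene))
```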
Cooperative Manipulation • Interacting with Humans • Shared control: force and position • Needed to move large objects cooperatively
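One simple way to realize shared force/position control is an admittance-style law: the robot tracks its own position reference but also yields in proportion to the force applied by the human partner, so a large object can be guided cooperatively. The sketch below is a 1-D toy with illustrative gains, not a controller taken from the handbook.

```python
def admittance_step(x, x_des, f_ext, dt, kp=2.0, cf=0.02):
    """One step of a simple admittance-style shared controller:
    kp pulls the end-effector toward its own reference, while cf lets
    it yield in the direction of the externally applied human force."""
    v_cmd = kp * (x_des - x) + cf * f_ext
    return x + v_cmd * dt

# Illustrative 1-D run: the human pushes with 10 N for the first 0.5 s.
x, x_des = 0.0, 0.0
for k in range(100):
    f_human = 10.0 if k < 50 else 0.0
    x = admittance_step(x, x_des, f_human, dt=0.01)
print("displacement caused by the human's push: %.3f m" % x)
```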
Learning and Development • Learning by Demonstration (LbD) is an emerging paradigm • It is futile to try to model the world exactly • Humanoids can learn about their environment, and also about their own actions
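In its simplest form, learning by demonstration reduces to regressing actions on the states a teacher visited (behavioral cloning). The toy sketch below fits a linear policy to synthetic demonstrations with plain least squares; real LbD systems use richer representations (dynamic movement primitives, mixture models, neural networks), so this is only meant to show the shape of the problem.

```python
import numpy as np

# Toy "demonstrations": states visited by a teacher and the actions taken.
rng = np.random.default_rng(0)
states = rng.uniform(-1, 1, size=(200, 3))        # e.g. object-pose features
teacher_w = np.array([[0.5], [-1.0], [0.2]])
actions = states @ teacher_w + 0.01 * rng.standard_normal((200, 1))

# Behavioral cloning in its simplest form: regress actions on states.
w_hat, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The learned policy can now act in states that were never demonstrated.
new_state = np.array([[0.3, -0.2, 0.7]])
print("policy action:", (new_state @ w_hat).ravel())
```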
Whole Body Motion (WBM) Activities • Lifting and carrying large heavy items • 4 ways to generate “coarse” WBM • Motion capture system • GUI offline (graphical simulator) • Teleoperation interface • Automatic motion planning • Need to incorporate dynamic constraints – can be done as a post-processing step
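A minimal example of such a post-processing step: uniformly time-scale a coarse trajectory (e.g. from motion capture or a GUI) so that no joint exceeds its velocity limit. Full dynamic filtering (ZMP consistency, torque limits) is considerably more involved; the limits and trajectory below are illustrative.

```python
import numpy as np

def time_scale_trajectory(q_traj, dt, v_max):
    """Post-process a coarse whole-body trajectory by slowing it down
    uniformly, just enough that every joint respects its velocity limit."""
    dq = np.diff(q_traj, axis=0) / dt        # joint velocities
    worst = np.max(np.abs(dq) / v_max)       # worst ratio over limits
    scale = max(1.0, worst)                  # slow down only if needed
    return dt * scale                        # new (uniform) sample period

# Illustrative 2-joint trajectory sampled at 100 Hz with 2 rad/s limits.
q = np.cumsum(np.tile([0.05, 0.01], (50, 1)), axis=0)
print("new dt:", time_scale_trajectory(q, dt=0.01, v_max=np.array([2.0, 2.0])))
```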
Generating Motion when in Contact • ZMP and other balancing algorithms no longer apply • Picking up objects, pushing them, holding a support surface with the hands changes the game!
Interacting with Humans • Expressive morphology and behavior • Interpreting human expressions • Speech Recognition • Visual Contact • Gesturing, pointing
Communication • Expressive Morphology and Behavior • Heads are important! • Orient directional sensors needed for perception • Strike expressive poses • Locus of attention
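Orienting the head's directional sensors toward a point of interest is, at its core, a small geometry problem. The sketch below computes pan/tilt angles for a head-mounted sensor given a 3-D target in the robot's base frame; the head height and frame convention are assumptions made for illustration.

```python
import math

def gaze_angles(target, head_origin=(0.0, 0.0, 1.4)):
    """Pan/tilt angles that point a head-mounted sensor at a 3-D target,
    expressed in the robot's base frame (x forward, y left, z up)."""
    dx = target[0] - head_origin[0]
    dy = target[1] - head_origin[1]
    dz = target[2] - head_origin[2]
    pan = math.atan2(dy, dx)                       # rotation about z
    tilt = math.atan2(dz, math.hypot(dx, dy))      # elevation toward the target
    return pan, tilt

# Look at an object on a table 1 m ahead, slightly to the left.
print([round(a, 3) for a in gaze_angles((1.0, 0.2, 0.8))])
```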
Current Challenges • Design, Packaging, Power • Bipedal Walking • Wheeled Lower Bodies • Dexterous Limbs • Mobile Manipulation • Human-Robot Interaction
Key technologies • Improved design and packaging of systems with new component technologies that are smaller, stronger, faster, and offer better resolution and accuracy • Dense and powerful energy storage for longer endurance, heavy lifting, and speed • Improved actuators that have higher power densities, including auxiliary subsystems such as power supplies, signal conditioning, drive trains, and cabling • Improved speed reduction and mechanisms for transferring power to the humanoid’s extremities • Improved force control for whole body dynamics • Better tactile skins for sensing contact, touch and proximity to objects in the environment • Advanced navigation that perceives and selects footfalls with 1 cm scale accuracy at high body speed • Vestibular systems for coordinating upper limbs and head-mounted sensors on dynamic bodies • Dexterous feet for dynamic running and jumping • Dexterous hands for tool use and handling of general objects
Fundamental Research Challenges • What are the best leg, spine and upper limb arrangements, in both mechanisms and sensors, to enable energy-efficient walking? • What are the algorithms for using upper body momentum management in driving lower body legs and wheeled balancers? • How should robots represent knowledge about objects perceived, avoided and handled in the environment? • How can a mobile manipulation robot place its body to facilitate inspection and manipulation in a complex workspace, where a small footprint and high reach requirements collide? • How should vision/laser based perception be combined with tactile/haptic perception to grasp objects? • What roles do motion and appearance have in making people accept and work with robots? • How can people interact with humanoids to form effective and safe teams?
Engineering Challenges • Vision: unlimited stream of data changing in time, space, color, intensity etc. • Computationally demanding (cycles and power) • Different formats: Cartesian, Log-polar, hemispheric, range • FOV, night/day sensing
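As an example of a non-Cartesian image format, the sketch below resamples a frame into log-polar coordinates with OpenCV's warpPolar, concentrating resolution near the image center (the "fovea") and thinning it in the periphery, one way to cut the data rate of humanoid vision. The output size and synthetic test image are arbitrary choices.

```python
import cv2
import numpy as np

def to_log_polar(frame, out_size=(128, 128)):
    """Resample a Cartesian camera frame into log-polar coordinates:
    fine sampling near the center, coarse sampling in the periphery."""
    h, w = frame.shape[:2]
    center = (w / 2.0, h / 2.0)
    max_radius = min(w, h) / 2.0
    return cv2.warpPolar(frame, out_size, center, max_radius,
                         cv2.WARP_POLAR_LOG | cv2.INTER_LINEAR)

# Usage with a synthetic image (a real system would use the camera stream).
frame = np.zeros((480, 640, 3), dtype=np.uint8)
cv2.circle(frame, (320, 240), 60, (255, 255, 255), -1)
print(to_log_polar(frame).shape)   # (128, 128, 3)
```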
Tactile Sensing • Tactile sensors are currently used on hands and feet • Need: skin-like sensors all over the body • Need both proprioceptive and exteroceptive sensing • Technologies: capacitance, resistance, optical, ultrasonic, magnetic, piezoresistive • Needed: robustness, spatial resolution, dynamic range • Question: how anthropomorphic? Humans have four different kinds of tactile sensor cells: slow or fast response, shear or normal forces
Human Tactile Mechanoreceptors A. In the human hand the submodalities of touch are sensed by four types of mechanoreceptors. Specific tactile sensations occur when distinct types of receptors are activated. Firing of all four receptors produces the sensation of contact with an object. Selective activation of Merkel cells and Ruffini endings produces sensations of steady pressure on the skin above the receptor. When the same patterns of firing occur only in Meissner's and Pacinian corpuscles, the tingling sensation of vibration is perceived. B. Location and other spatial properties of a stimulus are encoded by the spatial distribution of the population of activated receptors. Each receptor fires action potentials only when the skin close to its sensory terminals is touched, i.e., when a stimulus impinges on the receptor's receptive field. The receptive fields of mechanoreceptors (shown as red areas on the fingertip) differ in size and response to touch. Merkel cells and Meissner's corpuscles provide the most precise localization of touch, as they have the smallest receptive fields and are also more sensitive to pressure applied by a small probe. Ruffini's end organs detect tension deep in the skin. Meissner's corpuscles detect changes in texture (vibrations around 50 Hz) and adapt rapidly. Pacinian corpuscles detect rapid vibrations (about 200–300 Hz). Merkel's discs detect sustained touch and pressure. Hair follicle receptors are located in hair follicles and sense position changes of hairs.
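One way an artificial skin can mimic this split between slow and fast receptors is to band-split each taxel signal. The sketch below uses SciPy filters to produce a Merkel-like sustained-pressure channel, a Meissner-like ~50 Hz flutter channel, and a Pacinian-like 200–300 Hz vibration channel. The sampling rate, cutoff frequencies, and test signal are illustrative assumptions, not values from any real sensor.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0   # assumed tactile sampling rate (Hz)

def lowpass(x, cutoff, fs=FS, order=4):
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, x)

def bandpass(x, low, high, fs=FS, order=4):
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def split_channels(raw):
    """Split a raw taxel signal into channels loosely mirroring the human
    mechanoreceptors above: sustained pressure (Merkel-like), ~50 Hz
    flutter (Meissner-like), and 200-300 Hz vibration (Pacinian-like)."""
    return lowpass(raw, 5.0), bandpass(raw, 30.0, 70.0), bandpass(raw, 200.0, 300.0)

# Synthetic contact: steady pressure plus 50 Hz and 250 Hz components.
t = np.arange(0, 1.0, 1 / FS)
raw = 1.0 + 0.3 * np.sin(2 * np.pi * 50 * t) + 0.1 * np.sin(2 * np.pi * 250 * t)
for name, ch in zip(("pressure", "flutter", "vibration"), split_channels(raw)):
    print(name, round(float(np.std(ch)), 3))
```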
Sound • Sound-source localization from audio • Humans are able to filter out noise: the cocktail party effect! • Merging audition with vision is needed • Multisensor fusion: humans do it seamlessly – can robots? • Ruesch et al., 2008
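A classic building block for audio localization is the interaural time difference: cross-correlate the two microphone signals, take the lag of best alignment, and convert it to an azimuth. The sketch below does this for a synthetic click; the microphone spacing and sampling rate are assumptions chosen for illustration.

```python
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
MIC_SPACING = 0.18       # distance between the robot's two "ears" (m), illustrative

def estimate_azimuth(left, right, fs):
    """Estimate source direction from the interaural time difference (ITD):
    positive angles mean the source is to the robot's right, negative left."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)        # > 0 when the right mic leads
    itd = lag / fs                                   # seconds
    s = np.clip(itd * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return np.degrees(np.arcsin(s))                  # 0 deg = straight ahead

# Synthetic test: the same click reaches the right mic 5 samples later,
# so the source is to the robot's left and the angle comes out negative.
fs = 16000
click = np.zeros(400)
click[100] = 1.0
left, right = click, np.roll(click, 5)
print(round(estimate_azimuth(left, right, fs), 1), "degrees")
```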
Odor and Taste? • It seems bizarre – robots don't eat or smell… • But to interact with humans, robots need to replicate human sensing • Dialogue: "Do you smell that?"; "What flavor is that?" • Needed to establish common grounding in language between human and robot
Actuators • Traditional geared motors have limitations, but are easy to control • Non-linear elastic elements (like muscle) may be better – but are more difficult to control • Pneumatic and hydraulic actuation • Issues: energy consumption, power-to-weight ratio, payload, etc.
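As one concrete example of a compliant, muscle-like element, a series-elastic actuator places a spring between the motor and the joint, so joint torque can be estimated from the spring's deflection instead of a dedicated torque sensor. The stiffness value below is purely illustrative.

```python
def sea_torque(theta_motor, theta_joint, k_spring=300.0):
    """Series-elastic actuation in one line: the elastic element between
    motor and joint deflects under load, so torque = stiffness * deflection.
    k_spring is an illustrative stiffness in N*m/rad."""
    return k_spring * (theta_motor - theta_joint)

# Motor output shaft at 0.52 rad, joint at 0.50 rad -> 0.02 rad of winding.
print(round(sea_torque(0.52, 0.50), 2), "N*m")
```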
Whole Body Motion • Need to create a variety of behaviors: walking, running, crawling, skipping, etc. • It is unclear how to transition between them • Uneven terrain (e.g. outdoors) adds difficulty, and some assumptions (flat support surface, sufficient friction to prevent slipping) no longer hold • Integrating sensing (e.g. vision) with locomotion in real time
Cognitive Architectures: Symbol-Based vs. Embodied Approach (Emergent Systems) • Symbols are manipulated through a set of rules • Traditional AI approach • Scope is entirely dependent on the objects and rules provided by the programmer • Can use probabilistic reasoning to choose rules • A humanoid walks (rules provided by the programmer) but "trips" and falls over – there is no rule in the base for this event (see the toy example below) • Difficult to use in unconstrained environments
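A toy caricature of that brittleness: the rule set below covers obstacles and goals but has no rule for a fall, so the symbolic controller keeps choosing "walk_forward" even in a state it was never designed for. This is only an illustration of the point, not an actual architecture from the references.

```python
# A toy symbol-based controller: behavior is limited to the rules the
# programmer wrote down. Nothing here covers "the robot has fallen over".
RULES = [
    (lambda s: s["obstacle_ahead"], "turn_left"),
    (lambda s: s["at_goal"],        "stop"),
    (lambda s: True,                "walk_forward"),   # default rule
]

def decide(state):
    for condition, action in RULES:
        if condition(state):
            return action

print(decide({"obstacle_ahead": False, "at_goal": False}))   # walk_forward
# A state that also reports a fall still just walks forward:
print(decide({"obstacle_ahead": False, "at_goal": False, "fallen_over": True}))
```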
Embodied Cognition • Premise: the body shapes the mind • Research suggests body activities affect reasoning • People rate cartoons as funnier if forced to smile while viewing them • Body and mind are intertwined • What this means: reasoning for a humanoid cannot be separated from the robot's hardware embodiment – unlike in symbol-based systems, where reasoning is independent of the body