Robot Intelligence Kevin Warwick
Reactive Architectures I: Subsumption
• Perhaps the best known reactive architecture, developed in the 1980s by Rodney Brooks
• Each behaviour is defined in a layer which
  • takes sensory input
  • produces the required robot motor output
• Each layer has a defined level of competence and hence an associated priority
• Examples:
  • collision avoidance (low competence, high priority)
  • path following (higher competence, lower priority)
  • wandering aimlessly
  • map building (high competence, low priority)
  • looking for changes
Reactive Architectures I: Subsumption
• Each layer is integrated into a subsumption architecture whereby, for each orthogonal mode, there is only one actual output
  • the position of the robot is orthogonal to (independent of), say, the position of a pan/tilt camera platform
• A lower-competence layer can always subsume (or suppress) the output of a higher-competence layer (sketched below)
• The "default behaviour" is always the lowest-competence one
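To make the arbitration concrete, here is a minimal Python sketch. It is an illustration only, not Brooks' original implementation (which wired together networks of augmented finite state machines); all behaviour names, thresholds and commands are invented for the example.

```python
# Minimal subsumption-style arbiter (illustrative only). Layers are
# listed in priority order, highest first; the first layer that wants
# to act suppresses the output of every layer below it.

def collision_avoidance(sensors):
    """Low competence, high priority: back off if an obstacle is close."""
    if sensors["range"] < 0.3:           # 0.3 m threshold is assumed
        return {"v": -0.1, "w": 0.5}     # reverse and turn away
    return None                          # no obstacle: defer to lower layers

def path_follow(sensors):
    """Higher competence, lower priority: steer back onto the path."""
    if sensors.get("path_error") is not None:
        return {"v": 0.3, "w": -0.8 * sensors["path_error"]}
    return None

def wander(sensors):
    """Fallback: always produces some output."""
    return {"v": 0.2, "w": 0.0}

LAYERS = [collision_avoidance, path_follow, wander]   # priority order

def arbitrate(sensors):
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:          # this layer suppresses those below
            return command

print(arbitrate({"range": 1.0, "path_error": 0.2}))   # path_follow wins
print(arbitrate({"range": 0.1, "path_error": 0.2}))   # collision_avoidance wins
```

Note that only one layer's output ever reaches the motors per cycle; this winner-takes-all property is what distinguishes subsumption from the motor schema approach that follows.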
Reactive Architectures II: Motor Schema
• Individual motor behaviours (schemas) are defined based on sensory input, much as in subsumption
• The output of each schema is a velocity vector representing direction and speed
• The difference from subsumption is that a number of schemas may be active at any one time
• The emergent behaviour is a combination of the group of active motor schemas, so there is an element of co-operation between schemas (sketched below)
• Disadvantages of motor schemas:
  • How can groups of motor schemas be combined, other than by the designer?
  • How can changes in the active group be effected?
• Arbitration of behaviours can be carried out through sequencing
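Unlike the winner-takes-all arbiter above, a motor schema system can be illustrated as a gain-weighted vector sum over all active schemas (in the spirit of Arkin's motor schemas). The particular schema forms and gains below are assumptions for the example:

```python
import numpy as np

def move_to_goal(robot_pos, goal_pos):
    """Attractive schema: unit velocity vector toward the goal."""
    to_goal = goal_pos - robot_pos
    return to_goal / (np.linalg.norm(to_goal) + 1e-9)

def avoid_obstacle(robot_pos, obstacle_pos):
    """Repulsive schema: points away from the obstacle, stronger when closer."""
    away = robot_pos - obstacle_pos
    dist = np.linalg.norm(away)
    return away / (dist ** 2 + 1e-9)

def emergent_velocity(robot_pos, goal_pos, obstacles, gains):
    """Several schemas are active at once; the emergent behaviour is
    the gain-weighted vector sum of their velocity outputs."""
    v = gains["goal"] * move_to_goal(robot_pos, goal_pos)
    for ob in obstacles:
        v += gains["avoid"] * avoid_obstacle(robot_pos, ob)
    return v

pos, goal = np.array([0.0, 0.0]), np.array([5.0, 0.0])
obstacles = [np.array([2.0, 0.3])]
print(emergent_velocity(pos, goal, obstacles, {"goal": 1.0, "avoid": 0.5}))
```

The slide's first disadvantage is visible here: the gains that blend the schemas are fixed by the designer, which motivates the ego-behaviour approach on the next slides.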
Reactive Architectures III: Ego Behaviour
• Both subsumption and motor schema architectures rely on each behaviour operating without any feedback on the emergent behaviour of the system
• Each behaviour is also fixed in terms of its input-to-output mapping
• An alternative approach employs a strategy for changing the way a behaviour contributes to the emergent behaviour, based on
  • knowledge of the emergent behaviour (feedback)
  • self-awareness of the behaviour itself
• This is effected by giving each behaviour an Ego
Ego Behaviour
• The Ego itself is defined here through a simple variable-gain PD controller, where the gains are updated using fuzzy logic to either
  • strengthen the contribution of the behaviour, or
  • withdraw the behaviour from contributing (a rough sketch follows)
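The slides do not give the controller equations or the fuzzy rule base, so the following is entirely an assumption about what such an ego might look like: a PD controller whose gain is scaled up or down according to how far the emergent behaviour is from this behaviour's own target.

```python
class EgoBehaviour:
    """Sketch of an 'ego' as a variable-gain PD controller. The real
    system adapts the gains with fuzzy logic; the simple scaling rule
    in adapt() is a stand-in (assumption)."""

    def __init__(self, target, kp=1.0, kd=0.1, strong=True):
        self.target = target        # where this behaviour wants the system to go
        self.kp, self.kd = kp, kd   # PD gains, adapted online
        self.strong = strong        # a strong ego pushes, a weak one yields
        self.prev_error = 0.0

    def output(self, state, dt):
        """PD control action toward this behaviour's own target."""
        error = self.target - state
        d_error = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.kd * d_error

    def adapt(self, emergent_state):
        """Feedback on the emergent behaviour: if the system is being
        pulled away from our target, push harder or withdraw."""
        losing = abs(self.target - emergent_state) > 0.5   # threshold assumed
        if losing:
            self.kp *= 1.05 if self.strong else 0.9        # strengthen / yield

# The emergent behaviour is then driven by the sum of all ego outputs, e.g.
# Cs = EgoBehaviour(-1.5, strong=True), Cw = EgoBehaviour(+1.0, strong=False),
# matching the competing behaviours in the experiments below.
```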
Ego-Behaviour Experiments: 1
• Two behaviours are present:
  • Cs is a strong ego-behaviour and wants to get to –1.5
  • Cw is a weak ego-behaviour and wants to get to +1
• After 1 second Cw realises that it cannot compete and withdraws
Ego-Behaviour Experiments: 2
• Three behaviours are present:
  • Cs is a strong ego-behaviour and wants to get to –1.5
  • Cm1 is a medium ego-behaviour and wants to get to +1
  • Cm2 is a medium ego-behaviour and wants to get to +2
Ego-Behaviour Experiments: 2 (continued)
• After 0.6 seconds the stronger ego-behaviour is overcome by the two medium behaviours acting in co-operation
• The emergent behaviour swings in favour of Cm2, and Cm1 drops out after 1 second
Ego-Behaviour Experiments: 3
• Tele-assisted viewing example
• The "hot spot" is a camera view centred on a tool rack
• In this scenario the operator moves the slave manipulator towards the tool rack
• The emergent behaviour of the automated camera view tracks the end of the slave until it enters the hot spot
• The ego-behaviour associated with fixating the camera on the centre of the hot spot then becomes dominant, stabilising the camera on the centre
• After the slave has moved away from the hot spot, the camera resumes tracking the slave tip
Evolutionary Robotics
• Evolutionary robotics falls under the category of artificial life
• Artificial Life is the study of "life as it could be"
  • based on understanding the principles, and simulating the mechanisms, of real biological life forms
• Evolutionary robotics, as the name suggests, borrows from our knowledge of the principles of biological evolution to evolve robot controllers, sensors and/or physical morphology from the bottom up
Artificial Evolution
• Extended genetic algorithms are used to evolve controllers, bodies, sensors and/or actuators
• Simulation is used extensively, both to evaluate agent behaviours without damage to real robots and to evaluate, in a reasonable amount of time, the vast number of generations that evolution requires
• Typically, only then is a final behaviour tested on a real robot
How and what to evolve?
• Highly recurrent free-form neural networks are usually used to control robot behaviours; their distributed structure makes them well suited to evolution (a sketch follows)
• Typically, a fixed robot body is used
• However, the genetic description can also define sensor morphology and complete body shape
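As an illustration of why such networks mesh well with a genetic encoding (the sizes, activation function and layout below are all assumptions), a small fully recurrent controller can be parameterised by a single flat weight vector, which then serves directly as the genome:

```python
import numpy as np

class RecurrentController:
    """Sketch of a small fully recurrent network controller. Its
    flattened weights form the genome that the GA manipulates."""

    def __init__(self, genome, n_sensors=4, n_hidden=6, n_motors=2):
        # input->hidden, hidden->hidden (recurrent), hidden->motor weights
        self.shapes = [(n_hidden, n_sensors), (n_hidden, n_hidden),
                       (n_motors, n_hidden)]
        self.W, i = [], 0
        for rows, cols in self.shapes:
            self.W.append(genome[i:i + rows * cols].reshape(rows, cols))
            i += rows * cols
        self.state = np.zeros(n_hidden)      # internal (recurrent) state

    def step(self, sensors):
        """One control step: update recurrent state, emit motor commands."""
        W_in, W_rec, W_out = self.W
        self.state = np.tanh(W_in @ sensors + W_rec @ self.state)
        return np.tanh(W_out @ self.state)   # motor outputs in [-1, 1]

GENOME_LEN = 6 * 4 + 6 * 6 + 2 * 6           # must match the shapes above
ctrl = RecurrentController(np.random.randn(GENOME_LEN))
print(ctrl.step(np.array([0.1, 0.0, 0.4, 0.2])))   # two motor commands
```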
How a behaviour is evolved
• A task we wish to solve has to be defined
• A suitable simulation is required to test, quantitatively, the ability of agents to solve the given task
• This quantitative measure, or "fitness", is used by the genetic algorithm to produce successive generations of agents until a suitable level of proficiency has been acquired (the loop is sketched below)
• The proficient behaviour can then be transferred to a real robot
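A bare-bones version of that loop might look as follows. The selection and mutation scheme is a deliberately simple assumption (real evolutionary-robotics runs use richer operators such as crossover), and fitness_fn stands in for whatever simulation evaluates the task:

```python
import random
import numpy as np

def evolve(genome_len, fitness_fn, pop_size=50, generations=100,
           sigma=0.1, n_elite=5):
    """Bare-bones GA loop (sketch). fitness_fn runs a controller built
    from the genome in simulation and returns a scalar score; higher
    is better."""
    pop = [np.random.randn(genome_len) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness_fn, reverse=True)   # evaluate and rank
        elite = pop[:n_elite]                    # keep the best unchanged
        pop = elite + [random.choice(elite) + sigma * np.random.randn(genome_len)
                       for _ in range(pop_size - n_elite)]  # mutated offspring
    return max(pop, key=fitness_fn)              # best evolved genome
```

Here fitness_fn would, for example, build a controller like the RecurrentController above from the genome, run it on the simulated task, and score the result; only the final genome is then tried on a real robot.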
Artificial Evolution
• Complex behaviours and structures can be evolved in simulation
• Even for simple tasks, evolution can produce surprisingly complex and life-like solutions
• If a suitable simulation is used, these behaviours and structures are transferable to real-world robots
Robot Sensing – Key points
• Cost
• Weight
• Reliability
• Functionality
• Simplicity
• Power requirements/weight
• Computing requirements – on board?
• Application driven – what is required?
Vision
• Is this needed?
• Can be expensive – computationally and financially
• Can take time
• Human-like sensing – suited to a human-built world?
Machine Vision
• Image transformation – camera/CCD array
• Image analysis – filtering, edge detection, line finding – colour, texture? (see the toy example below)
• Image understanding – AI methods, segmentation, blocks world
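As a toy example of the image analysis step, here is the textbook Sobel edge detector written out naively for clarity (the toy image and loop structure are illustrative only; real systems use optimised convolution routines):

```python
import numpy as np

def sobel_edges(img):
    """Minimal Sobel edge detector: convolve with horizontal and
    vertical gradient kernels and combine the magnitudes."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    mag = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx, gy = (patch * kx).sum(), (patch * ky).sum()
            mag[i, j] = np.hypot(gx, gy)   # gradient magnitude
    return mag

# Toy image: dark left half, bright right half -> strong vertical edge
img = np.zeros((5, 6))
img[:, 3:] = 1.0
print(sobel_edges(img))
```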
Range Finding/Triangulation
• Passive triangulation – the correspondence problem
• Active triangulation – spot sensing
• Time-of-flight ranging – sonar/laser (worked example below)
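A worked example of time-of-flight ranging, with illustrative numbers: the pulse travels out and back, so the range is half the round-trip distance, and the propagation speed dictates the timing precision required.

```python
def tof_range(round_trip_time_s, speed_m_per_s):
    """Time-of-flight range: the pulse travels out and back,
    so distance = speed * time / 2."""
    return speed_m_per_s * round_trip_time_s / 2.0

# Sonar: speed of sound in air is ~343 m/s at 20 C
print(tof_range(0.01, 343.0))      # 10 ms round trip -> ~1.7 m

# Laser: speed of light is ~3e8 m/s, so timing must resolve nanoseconds
print(tof_range(20e-9, 3.0e8))     # 20 ns round trip -> 3.0 m
```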
Proximity Sensing
• Mechanical switch
• Inductive/capacitive sensors – C = εA/d – one plate on the robot, one on the object – a change in plate overlap area produces a change in capacitance (worked example below)
• Magnetic sensors – reed switch/Hall effect
• Optical position – phototransistor, optical interrupter, optical reflector
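A quick numerical illustration of the C = εA/d relation for the capacitive sensor (plate sizes and gap are invented for the example):

```python
EPS0 = 8.854e-12   # permittivity of free space, F/m

def plate_capacitance(area_m2, gap_m, eps_r=1.0):
    """Parallel-plate capacitance, C = epsilon * A / d."""
    return eps_r * EPS0 * area_m2 / gap_m

# Illustrative numbers: 5 mm gap, overlap area halving as the object
# slides past -> capacitance halves in proportion
print(plate_capacitance(10e-4, 5e-3))   # 10 cm^2 overlap: ~1.77 pF
print(plate_capacitance(5e-4, 5e-3))    # 5 cm^2 overlap:  ~0.885 pF
```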
Tactile Sensing
• Probably not necessary for a typical industrial mobile robot
• Needed when a robot performs delicate assembly
  • sense force in the joints
  • sense touch
  • sense slip
Robot Intelligence
• The required intelligence will depend on the sensor/actuator arrangements
• Intellectual capabilities will depend on sensor/actuator capabilities
• Sensors, actuators and brain (computer) will all be different to their human/animal counterparts
• Robot intelligence is evolving at technological rates, not biological rates
• So where will it be in 2035?