
Mental Development and Representation Building through Motivated Learning

Presentation Transcript


  1. Mental Development and Representation Building through Motivated Learning. Janusz A. Starzyk, Ohio University, USA; Pawel Raif, Silesian University of Technology, Poland; Ah-Hwee Tan, Nanyang Technological University, Singapore. 2010 International Joint Conference on Neural Networks, Barcelona.

  2. Outline • Embodied Intelligence (EI) • Embodiment of Mind • Computational Approaches to Machine Learning • How to Motivate a Machine • Motivated Learning (ML) • Building Representation through Motivated Learning • ML Agent in "Normal" vs. "Graded" Environment • ML Agent vs. RL Agent in "Graded" Environment • Future Work

  3. Traditional AI vs. Embodied Intelligence • Traditional AI: abstract intelligence; attempts to simulate the "highest" human faculties: language, discursive reason, mathematics, abstract problem solving; requires an environment model as a condition for solving problems in an abstract way ("brain in a vat"). • Embodied Intelligence: knowledge is implicit in the fact that we have a body; embodiment supports brain development; intelligence develops through interaction with the environment; it is situated in the environment, and the environment is its best model.

  4. Embodied Intelligence • Definition: Embodied Intelligence (EI) is a mechanism that learns how to minimize the hostility of its environment. • Mechanism: a biological, mechanical, or virtual agent with embodied sensors and actuators. • EI acts on the environment and perceives its own actions. • Environment hostility is persistent and stimulates EI to act; hostility includes direct aggression, pain, scarce resources, etc. • EI learns, so it must have an associative, self-organizing memory. • Knowledge is acquired by the EI itself.

  5. Intelligence • An intelligent agent learns how to survive in a hostile environment.

  6. Embodiment of a Mind • Embodiment is the part of the environment under the control of the mind. • It contains the intelligence core and sensory-motor interfaces to interact with the environment. • It is necessary for the development of intelligence. • It is not necessarily constant.

  7. Embodiment of Mind • Changes in embodiment modify the brain's self-determination. • The brain learns its own body's dynamics. • Self-awareness is a result of identification with one's own embodiment. • Embodiment can be extended by using tools and machines. • Successful operation is a function of correct perception of the environment and of one's own embodiment.

  8. Computational Approaches to Machine Learning • Machine learning: supervised, unsupervised, reinforcement learning. • Problems: complex environments, lack of motivation. • Motivated Learning: definition, and the need for benchmarks.

  9. How to Motivate a Machine? • A fundamental question is how to motivate an agent to do anything, and in particular, to enhance its own complexity. • What drives an agent to explore the environment, build representations, and learn effective actions? • What makes it a successful learner in changing environments?

  10. How to Motivate a Machine? • Although artificial curiosity helps to explore the environment, it leads to learning without a specific purpose. • We suggest that the hostility of the environment, required for EI, is the most effective motivational factor. • Both are needed: the hostility of the environment and intelligence that learns how to reduce the pain.

  11. Motivated Learning • Definition*: Motivated learning (ML) is pain-based motivation, goal creation, and learning in an embodied agent. • It uses externally defined pain signals. • The machine is rewarded for minimizing the primitive pain signals. • The machine creates abstract goals based on the primitive pain signals. • It receives internal rewards for satisfying its abstract goals. • ML applies to EI working in a hostile environment. *J. A. Starzyk, Motivation in Embodied Intelligence, Frontiers in Robotics, Automation and Control, I-Tech Education and Publishing, Oct. 2008, pp. 83-110.

  12. Neural Self-Organizing Structures in ML • Motivations and selection of a goal: a WTA competition selects the motivation, and another WTA selects its implementation; a primitive pain is directly sensed, and a thresholded, curiosity-based pain is also used. • Goal creation scheme: an abstract pain is introduced by solving a lower-level pain. A minimal sketch of this scheme is given below.
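
A minimal sketch of the goal-creation scheme above, assuming a simple dictionary of pain levels: one winner-take-all (WTA) step picks the dominant pain as the current motivation, a second WTA picks the action believed to reduce it, and a new abstract pain center is created when satisfying a lower-level pain reveals a missing resource. The class and method names are illustrative, not the authors' implementation.

```python
class GoalCreationSketch:
    """Illustrative pain-based goal creation (not the authors' code)."""

    def __init__(self, n_actions):
        self.pains = {"primitive": 1.0}   # the primitive pain is sensed directly
        self.action_values = {}           # (pain, action) -> learned usefulness
        self.n_actions = n_actions

    def wta(self, scores):
        """Winner-take-all: return the key with the largest score."""
        return max(scores, key=scores.get)

    def select_goal_and_action(self):
        pain = self.wta(self.pains)       # WTA competition selects the motivation
        scores = {a: self.action_values.get((pain, a), 0.0)
                  for a in range(self.n_actions)}
        return pain, self.wta(scores)     # another WTA selects the implementation

    def create_abstract_pain(self, missing_resource):
        """Introduce an abstract pain for a resource needed to solve a lower-level pain."""
        self.pains.setdefault(missing_resource, 0.5)   # 0.5 is an arbitrary starting level
```

For example, after `create_abstract_pain("Grocery")`, later calls to `select_goal_and_action()` can return "Grocery" as the dominant motivation once its pain level exceeds the others.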

  13. Building Representation through Motivated Learning: Experiments

  14. Base Task Specification • Environment: the environment consists of six different categories of resources, ordered on the slide from the least abstract to the most abstract: Food, Grocery, Bank, Office, School, Sandbox. • Five of them have limited availability. • One, the most abstract resource, is inexhaustible.

  15. Base Experiment: Task Specification • The agent uses resources by performing proper actions. • There are 36 possible actions, but only six of them are meaningful, and in a given situation (the environment's and agent's state) there is usually one best action to perform. • The problem: determine which action should be performed to renew, in time, the most needed resource. • The slide's table lists the meaningful sensory-motor pairs and their effect on the environment; a sketch of such an environment is given below.
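
A hedged sketch of how the base task could be set up: six resources ordered from least to most abstract, 6 x 6 = 36 sensory-motor pairs of which only a few are meaningful, and a primitive pain tied to the Food level. The resource names and counts come from the slides; the class, the "meaningful pair" rule, and the pain formula are simplifying assumptions made only for illustration.

```python
RESOURCES = ["Food", "Grocery", "Bank", "Office", "School", "Sandbox"]  # least -> most abstract

class BaseTaskSketch:
    """Illustrative base-task environment, not the authors' simulator."""

    def __init__(self, init_level=5.0):
        self.levels = {r: init_level for r in RESOURCES}
        self.levels["Sandbox"] = float("inf")      # the most abstract resource is inexhaustible

    def step(self, sensor, motor):
        """Apply one of the 36 (sensor, motor) pairs.

        Assumed, simplified rule: a meaningful pair consumes the next more
        abstract resource in order to renew the less abstract one below it.
        """
        if motor == sensor + 1 and motor < len(RESOURCES):
            used, renewed = RESOURCES[motor], RESOURCES[sensor]
            if self.levels[used] > 0:
                self.levels[used] -= 1             # consume the more abstract resource
                self.levels[renewed] += 1          # renew the less abstract one
        return self.primitive_pain()

    def primitive_pain(self):
        """Primitive pain grows as Food runs out (assumed formula)."""
        return max(0.0, 1.0 - self.levels["Food"] / 5.0)
```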

  16. How to Simulate Complexity and Hostility of the Environment • Complexity: different resources are available in the environment, and the agent should learn the dependencies between resources and its actions to operate properly (figure 1 on the slide shows the resource chain: Feast, School, Office, Bank, Grocery, Food). • Hostility: a function that describes the probability of finding resources in the environment (figure 2 on the slide compares a harsh and a mild environment); one possible form is sketched below.
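
The slide describes hostility only as a function giving the probability of finding a resource, with harsh and mild variants; the exact function is not shown. The sketch below assumes one plausible form, an exponential decay in the amount already consumed, with a steeper decay modelling a harsher environment.

```python
import math

def resource_probability(consumed, harshness=1.0):
    """Probability of finding a resource after `consumed` units have been used.

    Assumed form: exponential decay. Larger `harshness` makes the probability
    drop faster (a harsher environment); a small value models a mild one.
    """
    return math.exp(-harshness * consumed)

# Example: mild vs. harsh environment after 3 units have been consumed.
print(resource_probability(3, harshness=0.2))   # mild,  ~0.55
print(resource_probability(3, harshness=1.0))   # harsh, ~0.05
```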

  17. Base Experiment Results • The RL agent (left side of figure 1) can learn dependencies between only a few basic resources; in contrast, the ML agent is able to learn dependencies between all resources. • In a harsh environment (figure 2), the ML agent is able to control its environment (and limit its 'primitive pain'), but the RL agent cannot.

  18. ML Agent in "Normal" vs. "Graded" Environment • Two kinds of environments: "normal" (figure 1) and "graded" (figure 2); both figures plot resources against time. • A "graded" environment corresponds to gradual development and representation building: resources become available gradually rather than all at once. • Simulations in four environments with 6, 10, 14, and 18 hierarchy levels, each level representing a different resource. A sketch of such a schedule is given below.
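
A sketch of what a "graded" availability schedule might look like, assuming hierarchy levels (resources) are introduced one at a time at evenly spaced moments, while a "normal" environment makes all levels available from the start. The spacing and function signature are illustrative; the paper's actual schedule is not shown on the slide.

```python
def availability_schedule(n_levels, total_time, graded=True):
    """Return the time step at which each hierarchy level becomes available.

    graded=True : levels appear one by one, evenly spaced (gradual complexity).
    graded=False: all levels are available from time 0 (the "normal" case).
    """
    if not graded:
        return [0] * n_levels
    spacing = total_time // (2 * n_levels)      # assumed: introduce all within the first half
    return [i * spacing for i in range(n_levels)]

# Example for the four simulated sizes: 6, 10, 14, and 18 hierarchy levels.
for n in (6, 10, 14, 18):
    print(n, availability_schedule(n, total_time=10000)[:4], "...")
```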

  19. ML Agent in "Normal" vs. "Graded" Environment • The ML agent learns more effectively in "graded" environments with gradually increasing complexity. • In a complex environment this difference becomes more significant. • "Gradual" learning is beneficial to mental development.

  20. ML Agent vs. RL Agent in "Graded" Environment • The second group of experiments compares the effectiveness of ML-based and RL-based agents. • In this simulation we used "graded" environments with gradually increasing complexity. • We simulated environments with 6, 10, 14, and 18 levels of hierarchy. A skeleton of the comparison loop is given below.
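
A skeleton of the comparison loop, assuming both agents expose the same act/learn interface and that performance is logged as the primitive pain signal Pp over time. `make_env` and the agent objects are placeholders, not the actual experiment code.

```python
def run_comparison(make_env, agents, n_levels_list=(6, 10, 14, 18), steps=10000):
    """Run each agent in a graded environment of each size and log primitive pain Pp.

    `make_env(n_levels)` and the agent objects (with .act() and .learn()) are
    assumed interfaces; the paper's simulator and agents are not shown here.
    """
    results = {}
    for n_levels in n_levels_list:
        for name, agent in agents.items():
            env = make_env(n_levels)
            pain_trace = []
            for t in range(steps):
                action = agent.act(env.observe())
                pp = env.step(action)            # environment returns primitive pain Pp
                agent.learn(pp)
                pain_trace.append(pp)
            results[(name, n_levels)] = pain_trace
    return results
```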

  21. ML Agent vs. RL Agent in "Graded" Environment • 6 levels of hierarchy: initially the ML agent experiences a primitive pain signal Pp similar to that of the RL agent; the ML agent converges quickly to stable performance. • 10 levels of hierarchy: initially the RL agent experiences a lower primitive pain signal Pp than the ML agent; the RL agent's pain increases when the environment becomes more hostile.

  22. ML Agent vs. RL Agent in "Graded" Environment • 14 levels of hierarchy: the ML agent keeps learning, while the RL agent exploits its early knowledge; in effect, RL does not learn all the dependencies in time to survive. • 18 levels of hierarchy: results similar to the 10- and 14-level cases.

  23. Future Work • Block diagram: combine a reinforcement learning (RL) module with a goal creation (GC) module; both receive the state and reward signals, the GC module generates goals (motivations), and the RL module selects actions. A sketch of one way to wire the two blocks is given below.
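
A rough sketch of how the blocks in the diagram could be wired, assuming the goal-creation (GC) module watches the state and external reward, maintains goals (motivations), and supplies an internal reward that is added to the external reward before it reaches the RL learner. The classes and interfaces below are guesses made for illustration, not the authors' design.

```python
class GoalCreationModule:
    """Watches the state and external reward, creates goals (motivations),
    and produces an internal reward for progress toward them (sketch only)."""

    def __init__(self):
        self.goals = {}                              # goal -> strength

    def internal_reward(self, state, external_reward):
        if external_reward < 0:                      # external pain: create or strengthen a goal
            self.goals[state] = self.goals.get(state, 0.0) + 1.0
        return 0.1 * self.goals.get(state, 0.0)      # assumed shaping term


class MotivatedRLAgentSketch:
    """RL learner whose reward is shaped by the GC module (future-work sketch)."""

    def __init__(self, rl_learner):
        self.rl = rl_learner                         # any RL algorithm exposing act(state, reward)
        self.gc = GoalCreationModule()

    def step(self, state, external_reward):
        reward = external_reward + self.gc.internal_reward(state, external_reward)
        return self.rl.act(state, reward)            # assumed RL interface
```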

  24. References • Starzyk J.A., Raif P., Tan A.-H., Motivated Learning as an Extension of Reinforcement Learning, Fourth International Conference on Cognitive Systems (CogSys 2010), ETH Zurich, January 2010. • Starzyk J.A., Raif P., Motivated Learning Based on Goal Creation in Cognitive Systems, Thirteenth International Conference on Cognitive and Neural Systems, Boston University, May 2009. • Starzyk J.A., Motivation in Embodied Intelligence, Frontiers in Robotics, Automation and Control, I-Tech Education and Publishing, Oct. 2008, pp. 83-110.

  25. Questions?
