Situation decomposition: a method that extracts partial data containing rules. Hiroshi Yamakawa (FUJITSU LABORATORIES LTD.)
Abstract • In infantile development, fundamental knowledge about the external world is acquired through learning without explicit goals, and adults are considered to reuse that fundamental knowledge for various tasks. The internal models acquired at these early stages may underlie the flexible higher-order functions of the human brain. However, research on such learning technology has made little progress to date. • A system can improve its prediction ability and reusability in later tasks by using the results of learning without explicit goals. We therefore propose situation decomposition, a technology that selects the partial information emphasizing the relation "if one attribute value changes, another attribute value also changes." • Situation decomposition performs attribute selection and case selection simultaneously on data in which each case is an attribute vector. The newly introduced Matchability criterion is an evaluation that grows when the selected partial information explains a large range of the data and a strong relation holds inside it. Situation decomposition extracts plural partial situations (results of attribute and case selection) corresponding to the local maxima of this evaluation. • Furthermore, by extending situation decomposition in the temporal direction, partial problem spaces (based on the Markov decision process) can be extracted. In action-decision tasks such as robot control, each partial problem space can be assigned to a module of a multi-module architecture; the system can then adapt efficiently to an unknown problem space by combining the extracted partial problem spaces.
My strategy for Brain-like processing • The brain has a very flexible learning ability. • Intelligent processes with more flexible learning abilities are closer to real brain processes. • I want to introduce as much learning ability into my system as possible.
Contents • Development and Autonomous Learning • SOIS (Self-organizing Information Selection) as Pre-task Learning • Deriving the Matchable Principle • Situation Decomposition using the Matchability Criterion • Applications of Situation Decomposition • Conclusions & Future Work
Outline of this talk • Cognitive Development → Autonomous Learning (framework: pre-task learning and task learning) • Pre-task learning → Self-organizing Information Selection • Matchable Principle → Matchability Criterion • Situation Decomposition using the Matchability Criterion
Two aspects of Development • "Environmental knowledge acquired without particular goals helps problem solving for particular goals" → "Pre-task Learning" in Autonomous Learning • "A calculation process that increases the predictable and/or operable objects in the world" → Enhancing prediction ability
Autonomous Learning: AL • Two-phase learning (research in RWC): • Pre-task learning (the environment is given): acquiring environmental knowledge from general facts used for design, e.g. "no reaching over the wall" → acquiring movable paths. • Task learning (the goal is given): acquiring a solution for the goal from existing knowledge, e.g. generating a path to the goal. • Development ≈ pre-task learning (today's topic).
Pre-task Learning helps Task Learning • Autonomous Learning (AL): • Pre-task learning: acquiring environmental knowledge without a particular goal. • Task learning: environmental knowledge speeds up acquiring a solution for the goal. • In humans: adults can solve a given task quickly using environmental knowledge acquired for other goals or without any particular goal. • Development ≈ Pre-task Learning (today's topic).
Research topics for AL • Pre-task learning (how to acquire environmental knowledge): • Situation Decomposition using the Matchability criterion. • Situation Decomposition is a kind of Self-organizing Information Selection technology. • Task learning (how to use environmental knowledge): • CITTA (Cognition-based Intelligent Transaction Architecture): a multi-module architecture that can combine environmental knowledge acquired during pre-task learning. • Cognitive Distance Learning: a goal-driven problem solver for each piece of environmental knowledge.
Overview of the Approach to AL • Pre-task learning — Situation Decomposition: acquiring environmental knowledge. • Task learning — CITTA: an architecture for combining environmental knowledge; Cognitive Distance Learning: a learning algorithm serving as a problem solver for each piece of environmental knowledge.
SOIS (Self-organizing Information Selection) as Pre-task Learning
SOIS: Self-organizing Information Selection • Process: selecting plural pieces of partial information from data → "Situation Decomposition". • Criterion: an evaluation for each piece of partial information → Matchability Criterion. • Knowledge = a set of structures; partial information = one kind of structure. ※ SOIS could be a kind of knowledge-acquiring process in development.
Situation Decomposition is a kind of SOIS • For situation decomposition, partial information = situation. • Extracting plural situations, each a combination of selected attributes and cases, from a spreadsheet. (Figure: matchable situations MS1–MS4 shown as submatrices over attributes × cases.)
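The selection described above can be sketched as extracting a submatrix from a case-by-attribute table. This is a minimal illustration, not the authors' implementation; the data values and masks are invented for the example.

```python
import numpy as np

# A "situation" is the submatrix obtained by simultaneously selecting a
# subset of cases (rows) and attributes (columns) from the spreadsheet.
data = np.array([
    [1.0, 0.0, 3.2],
    [0.9, 0.1, 7.5],
    [0.2, 0.8, 1.1],
    [0.1, 0.9, 4.4],
])

case_mask = np.array([True, True, False, False])    # selected cases
attribute_mask = np.array([True, True, False])      # selected attributes

# np.ix_ crosses the two boolean masks into a submatrix index.
situation = data[np.ix_(case_mask, attribute_mask)]
```

Here `situation` is the 2×2 block covering the first two cases and first two attributes; a decomposition would extract several such blocks (MS1, MS2, …).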
Two aspects of Development • "Environmental knowledge acquired without particular goals helps solving problems for particular goals" → "Pre-task Learning" in Autonomous Learning • "A calculation process that increases the predictable and/or operable objects in the world" → Enhancing prediction ability
How to enhance prediction ability • We need a criterion for selecting situations. • We want to extract local structures. (Figure: multiple local structures MS1–MS4 mixed in real-world data are separated by situation decomposition.)
Deriving the Matchable Principle • Extracting structure (knowledge) without particular goals. • Prediction is based on matching a case with experiences. • What is the criterion for selecting each situation? → The Matchable Principle: "structures with a large matching opportunity are selected."
Factors in the Matchable Principle • To increase matching opportunity: • Simplicity of structure — Ockham's razor (MDL, AIC). • Consistency for data — accuracy, minimizing error (cf. association rules). • Relation in structure and coverage for data (case-increasing, attribute-increasing) — emphasized by our proposed Matchability criterion.
Situation Decomposition • Extracting plural situations, each a combination of selected attributes and cases, from a spreadsheet. • Matchability: a criterion that evaluates matching opportunity. • Matchable situation = a local maximum of Matchability. (Figure: matchable situations MS1–MS4 as submatrices over attributes × cases.)
Formalization: Whole situation and partial situations • Whole situation J = (D, N): contains D attributes and N cases. • Attribute selection vector: d = (d1, d2, …, dD). • Case selection vector: n = (n1, n2, …, nN). • Each element di, ni is a binary indicator of selection/unselection. • Number of selected attributes: |d|; number of selected cases: |n|. • Situation decomposition extracts matchable situations from the whole situation J = (D, N), which potentially contains 2^(D+N) partial situations.
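The formalization above can be written down directly; the toy sizes below are invented for illustration.

```python
# Binary selection vectors for a toy whole situation J = (D, N):
D, N = 3, 4            # D attributes, N cases

d = [1, 0, 1]          # attribute selection vector (length D)
n = [1, 1, 0, 1]       # case selection vector (length N)

num_selected_attributes = sum(d)        # |d| = 2
num_selected_cases = sum(n)             # |n| = 3

# Every combination of the two binary vectors is a candidate partial
# situation, hence 2**(D+N) candidates in total.
num_partial_situations = 2 ** (D + N)   # 128 for this toy size
```

The exponential candidate count is what makes a search criterion (Matchability) and a local-maximum search necessary rather than exhaustive evaluation of all situations.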
Case selection using segment space • The segment space is the product of the partitions of each selected attribute (two-dimensional example: Sd = s1 × s2). • n: number of selected cases; Sd: number of total segments; rd: number of selected segments. ※ Cases inside the chosen segments are automatically chosen.
Matchability criterion from the Matchable Principle • Number of selected segments rd → make smaller (simplicity of structure). • Number of selected cases n → make larger (coverage for data). • Number of total segments Sd → make larger. • The criterion combines n, rd, and Sd with positive constants C1, C2, C3, where N is the total number of cases.
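The slide text preserves only the qualitative direction of each term, not the published formula. The score below is therefore an assumed illustrative form, NOT the authors' Matchability criterion; it merely rewards the three directions the slide prescribes.

```python
import math

def toy_matchability(n, r_d, S_d, N, c1=1.0, c2=1.0, c3=1.0):
    """Assumed illustrative score (not the published formula): larger n
    (coverage), larger S_d and smaller r_d (simplicity) all raise it."""
    return c1 * (n / N) + c2 * math.log(S_d) - c3 * math.log(r_d)

# A situation covering more cases with fewer, finer segments scores higher:
low  = toy_matchability(n=10, r_d=4, S_d=4, N=100)
high = toy_matchability(n=50, r_d=2, S_d=8, N=100)
```

Any monotone combination with the same signs would serve the illustration; the real criterion's functional form and the roles of C1–C3 should be taken from the original paper.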
Matchability focuses on covariance • Types of relations: • Coincidence — the relation in which two events happen simultaneously. • Covariance — the relation in which, if one attribute value changes, another attribute value also changes. • Matchability estimates covariance in the selected data, even for categorical attributes.
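For categorical attributes, "covariance" in the sense above can be probed with a contingency table: under independence the table matches its expected counts, and covarying attributes deviate from them. This is a generic illustration, not the paper's estimator; the data is invented.

```python
import numpy as np

# Two binary categorical attributes; b mostly follows a (covariance).
a = np.array([0, 0, 1, 1, 0, 1, 0, 1])
b = np.array([0, 0, 1, 1, 0, 1, 1, 0])

# Contingency table of joint occurrences.
table = np.zeros((2, 2))
for x, y in zip(a, b):
    table[x, y] += 1

# Expected counts if a and b were independent; deviation measures the
# strength of the "if a changes, b changes" relation.
expected = np.outer(table.sum(1), table.sum(0)) / table.sum()
dependence = float(np.abs(table - expected).sum())  # 0 under independence
```

Here the table is [[3, 1], [1, 3]] against an expected [[2, 2], [2, 2]], so the deviation is nonzero, flagging the covarying pair.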
How to find situations • The algorithm searches for local maxima of the Matchability criterion. • Algorithm overview: foreach subset d of the D attributes: search for local maxima; reject saddle points. • Time complexity ∝ 2^D.
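The outer loop above can be sketched as an exhaustive enumeration of attribute subsets, keeping those whose score beats all one-attribute neighbours (which is where the 2^D cost comes from). The scoring function is a stand-in for Matchability, and the saddle-point rejection step is omitted; both are assumptions for this sketch.

```python
from itertools import combinations

def find_matchable_subsets(attributes, score):
    """Enumerate all non-empty attribute subsets (2**D of them) and keep
    the local maxima of `score` w.r.t. adding/removing one attribute."""
    subsets = [frozenset(c) for r in range(1, len(attributes) + 1)
               for c in combinations(attributes, r)]
    values = {s: score(s) for s in subsets}
    maxima = []
    for s in subsets:
        # Neighbours differ from s by exactly one attribute.
        neighbours = [t for t in subsets if len(s ^ t) == 1]
        if all(values[s] >= values[t] for t in neighbours):
            maxima.append(s)
    return maxima

# Toy score that peaks at the subset {"x", "z"}:
peaks = find_matchable_subsets(
    ["x", "y", "z"],
    lambda s: len(s & {"x", "z"}) - len(s & {"y"}),
)
```

Because the subset lattice is enumerated in full, this only scales to small D, matching the "exponential time complexity" concern raised in the future-work slide.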
Simple example • Input situation: a mixture of cases on two planes — situation A: x + z = 1; situation B: y + z = 1. • Extracted situations: MS1 = input situation A; MS2 = input situation B; MS3 = a new situation, the line x = y, x + z = 1.
Generalization ability • Multi-valued function φ: (x, y) → z. • Even if input situation A (x + z = 1) lacks half of its data, so that no data exists in the range y > 0.5, our method outputs φMS1(0, 1) = 1.0.
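The generalization claim can be reproduced with a toy model: once the data is restricted to situation A, z depends only on x, so a fit to the half-plane y ≤ 0.5 extrapolates into the missing region. The linear fit below is a stand-in for the situation's predictor, not the paper's model.

```python
import numpy as np

# Train only on situation A (x + z = 1), with the region y > 0.5 removed.
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 200)
y = rng.uniform(0.0, 0.5, 200)   # y > 0.5 deliberately absent
z = 1.0 - x                       # situation A: x + z = 1

# A predictor restricted to situation A ignores y entirely, so a simple
# linear fit z ~ x extrapolates cleanly to the unseen region.
coef = np.polyfit(x, z, 1)
prediction = np.polyval(coef, 0.0)   # plays the role of phi_MS1(0, 1)
```

The prediction at (x, y) = (0, 1) is 1.0 even though no training case had y > 0.5, because the selected situation dropped y as an attribute.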
Multi-module Prediction System (Figure: extracted situations used as prediction modules mapping input to output.)
Training cases and test cases • Training cases: 500 cases are scattered on each plane with a uniform distribution over x ∈ [0.0, 1.0] and y ∈ [0.0, 1.0]. • Test cases: 11 × 11 cases are arranged on a grid at regular intervals of 0.1 on each plane. • q: sampling rate.
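The experimental setup above is straightforward to reproduce; the random seed and the helper function name are choices made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(42)

def plane_cases(m, plane):
    """m uniform (x, y) samples in [0, 1]^2 with z = plane(x, y)."""
    xy = rng.uniform(0.0, 1.0, size=(m, 2))
    z = plane(xy[:, 0], xy[:, 1])
    return np.column_stack([xy, z])

# 500 training cases on each plane.
train_a = plane_cases(500, lambda x, y: 1.0 - x)   # x + z = 1
train_b = plane_cases(500, lambda x, y: 1.0 - y)   # y + z = 1

# 11 x 11 test grid at 0.1 intervals.
grid = np.linspace(0.0, 1.0, 11)
test_xy = np.array([(x, y) for x in grid for y in grid])
```

Stacking `train_a` and `train_b` reproduces the mixed input situation from which the three matchable situations (MS1–MS3) are extracted.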
Prediction result (Figure: predicted surfaces without vs. with matchable situations.)
Autonomous Learning: AL • Two-step learning (research in RWC): • Pre-task learning (the environment is given): acquiring environmental knowledge from general facts used for design, e.g. "no reaching over the wall" → acquiring movable paths. • Task learning (the goal is given): acquiring a solution for the goal from existing knowledge, e.g. generating a path to the goal. • Development ≈ pre-task learning (today's topic).
Demonstration of Autonomous Learning • Door & Key task with CITTA. • The agent acquires knowledge as situations. • The door can be opened with the key. • Objects: key, telephone, mobile agent, door; the agent moves from start to goal.
Each situation is used as a module • Pre-task learning: extracting matchable situations from the environment through the mobile agent's input/output (position, action, object, belongings, …). • Task learning: combining matchable situations, e.g. "open door by key", "open door by telephone", "go by wall", "go straight".
Situation Decomposition in AL • SD in pre-task learning: situation decomposition handles input/output vectors of two time steps in order to extract a Markov process. • Advantages of SD in task learning: • Adaptation by combining situations is possible. • Learning data can be reduced, because the learning space for each module is reduced.
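The two-time-step construction can be sketched by concatenating consecutive observation vectors into one case per transition; the trajectory values are invented for the example.

```python
import numpy as np

# A toy trajectory of 6 time steps with a 2-dimensional input/output
# vector at each step.
trajectory = np.arange(12.0).reshape(6, 2)

# Each case concatenates two consecutive steps, (v_t, v_{t+1}), so that
# situation decomposition over these rows can pick out Markov structure:
# attribute subsets relating time t to time t+1.
two_step_cases = np.hstack([trajectory[:-1], trajectory[1:]])
```

Each extracted situation over these 4-attribute rows then corresponds to a partial transition model, which is what gets assigned to a module in the multi-module architecture.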
Conclusions • Cognitive Development → Autonomous Learning (pre-task learning and task learning) • Pre-task learning → Self-organizing Information Selection • Matchable Principle → Matchability Criterion • Situation Decomposition using the Matchability Criterion
Conclusions & Future work: Situation decomposition • Matchability is a new model-selection criterion that maximizes matching opportunity and emphasizes coverage for the data; in contrast, Ockham's razor emphasizes consistency with the data. • Situations decomposed by the Matchability criterion have powerful prediction ability. • The situation decomposition method can be applied to pre-processing for data analysis, self-organization, pattern recognition, and so on.
Future work • Situation decomposition: • Theoretical research on the Matchability criterion is needed — this intuitively derived criterion is affected by unbalanced data. • Speed-up is needed for large-scale problems — the exponential time complexity in the number of attributes is prohibitive. • Advanced Self-organizing Information Selection: • The situation decomposition method only selects sets of attributes and cases. • Autonomous Learning: • Relate it to knowledge from cognitive science.