CS 785 Fall 2004
Knowledge Acquisition and Problem Solving
Mixed-initiative Problem Solving and Knowledge Base Refinement
Gheorghe Tecuci, tecuci@gmu.edu, http://lac.gmu.edu/
Learning Agents Center and Computer Science Department, George Mason University
Overview
• The Rule Refinement Problem and Method: Illustration
• General Presentation of the Rule Refinement Method
• Another Illustration of the Rule Refinement Method
• Integrated Modeling, Learning, and Problem Solving
• Characterization of the Disciple Rule Learning Method
• Demo: Problem Solving and Rule Refinement
• Recommended Reading
The rule refinement problem (definition)
GIVEN:
• a plausible version space rule;
• a positive or a negative example of the rule (i.e., a correct or an incorrect problem solving episode);
• a knowledge base that includes an object ontology and a set of problem solving rules;
• an expert who understands why the example is positive or negative and can answer the agent's questions.
DETERMINE:
• an improved rule that covers the example if it is positive, or does not cover it if it is negative;
• an extended object ontology (if needed for rule refinement).
Initial example from which the rule was learned
I need to: Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943
Which is a member of Allied_Forces_1943? US_1943
Therefore I need to: Identify and test a strategic COG candidate for US_1943
This is an example of a problem solving step from which the agent will learn a general problem solving rule.
Learned rule to be refined

INFORMAL STRUCTURE OF THE RULE
IF Identify and test a strategic COG candidate corresponding to a member of the ?O1
Question: Which is a member of ?O1?
Answer: ?O2
THEN Identify and test a strategic COG candidate for ?O2

FORMAL STRUCTURE OF THE RULE
IF Identify and test a strategic COG candidate corresponding to a member of a force
  The force is ?O1
explanation: ?O1 has_as_member ?O2
Plausible Upper Bound Condition:
  ?O1 is multi_member_force, has_as_member ?O2
  ?O2 is force
Plausible Lower Bound Condition:
  ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2
  ?O2 is single_state_force
THEN Identify and test a strategic COG candidate for a force
  The force is ?O2
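The plausible version space rule above can be pictured as a small data structure. The following Python sketch is purely illustrative: the names `Condition`, `PVSRule`, and the dictionary encoding are assumptions for exposition, not Disciple's actual representation.

```python
# Hypothetical encoding of a plausible version space (PVS) rule:
# an IF task pattern, an explanation, two condition bounds, and a
# THEN task pattern. Each condition maps variables to ontology
# concepts and lists relations that must hold between them.
from dataclasses import dataclass, field

@dataclass
class Condition:
    # variable -> concept it must be an instance of
    var_concepts: dict
    # relations between variables, e.g. ("?O1", "has_as_member", "?O2")
    relations: list = field(default_factory=list)

@dataclass
class PVSRule:
    if_task: str
    explanation: list
    upper_bound: Condition   # plausible upper bound condition
    lower_bound: Condition   # plausible lower bound condition
    then_task: str

rule_15 = PVSRule(
    if_task="Identify and test a strategic COG candidate "
            "corresponding to a member of ?O1",
    explanation=[("?O1", "has_as_member", "?O2")],
    upper_bound=Condition({"?O1": "multi_member_force", "?O2": "force"},
                          [("?O1", "has_as_member", "?O2")]),
    lower_bound=Condition({"?O1": "equal_partners_multi_state_alliance",
                           "?O2": "single_state_force"},
                          [("?O1", "has_as_member", "?O2")]),
    then_task="Identify and test a strategic COG candidate for ?O2",
)
```

The two bounds share the same relational structure and differ only in how general the concepts attached to each variable are, which is what the refinement operations below manipulate.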
The agent uses the partially learned rules in problem solving. The solutions it generates when using the plausible upper bound condition have to be confirmed or rejected by the expert. We will now present how the agent improves (refines) its rules based on these examples. In essence, the plausible lower bound condition is generalized and the plausible upper bound condition is specialized, the two conditions converging toward one another.

The next slide illustrates the rule refinement process. Initially the agent's knowledge base contains no tasks or rules. The expert teaches the agent to reduce the task

Identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943

to the task

Identify and test a strategic COG candidate for US_1943

From this example the agent learns a plausible version space task reduction rule, as illustrated before. Now the agent can use this rule in problem solving, proposing to reduce the task

Identify and test a strategic COG candidate corresponding to a member of the European_Axis_1943

to the task

Identify and test a strategic COG candidate for Germany_1943

The expert accepts this reduction as correct, and the agent refines the rule. In the following we show the internal reasoning of the agent that corresponds to this behavior.
Rule refinement method (diagram): starting from a PVS rule in the knowledge base, the agent generates examples of task reductions by analogy and experimentation. A correct example triggers learning from examples; an incorrect example, together with a failure explanation, triggers learning from explanations. Both refine the rule in the knowledge base.
Version space rule learning and refinement
Let E1 be the first task reduction, from which the rule is learned. The agent learns a rule with a very specific lower bound condition (LB) and a very general upper bound condition (UB); E1 is covered by both bounds.
Let E2 be a new task reduction generated by the agent and accepted as correct by the expert. The agent then generalizes LB as little as possible to cover it.
Let E3 be a new task reduction generated by the agent and rejected by the expert. The agent then specializes UB as little as possible to uncover it, while keeping UB more general than LB.
After several iterations of this process LB may become identical to UB, and a rule with an exact condition is learned.
1. The expert provides an example: I need to identify and test a strategic COG candidate corresponding to a member of the Allied_Forces_1943. Which is a member of Allied_Forces_1943? US_1943. Therefore I need to identify and test a strategic COG candidate for US_1943.
2. The agent learns Rule_15 from this example.
3. The agent applies Rule_15 to a new case: I need to identify and test a strategic COG candidate corresponding to a member of the European_Axis_1943. Which is a member of European_Axis_1943? Germany_1943. Therefore I need to identify and test a strategic COG candidate for Germany_1943.
4. The expert accepts the example.
5. The agent refines Rule_15.
Rule refinement with a positive example

Positive example that satisfies the upper bound:
I need to: Identify and test a strategic COG candidate corresponding to a member of the European_Axis_1943
Therefore I need to: Identify and test a strategic COG candidate for Germany_1943
explanation: European_Axis_1943 has_as_member Germany_1943

Condition satisfied by the positive example (less general than the plausible upper bound):
  ?O1 is European_Axis_1943, has_as_member ?O2
  ?O2 is Germany_1943

The rule being refined:
IF Identify and test a strategic COG candidate corresponding to a member of a force
  The force is ?O1
explanation: ?O1 has_as_member ?O2
Plausible Upper Bound Condition:
  ?O1 is multi_member_force, has_as_member ?O2
  ?O2 is force
Plausible Lower Bound Condition:
  ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2
  ?O2 is single_state_force
THEN Identify and test a strategic COG candidate for a force
  The force is ?O2
The upper right side of this slide shows an example generated by the agent. This example is generated because it satisfies the plausible upper bound condition of the rule (as shown by the red arrows). This example is accepted as correct by the expert. Therefore the plausible lower bound condition is generalized to cover it as shown in the following.
Minimal generalization of the plausible lower bound

Plausible Lower Bound Condition (from rule):
  ?O1 is equal_partners_multi_state_alliance, has_as_member ?O2
  ?O2 is single_state_force

Condition satisfied by the positive example:
  ?O1 is European_Axis_1943, has_as_member ?O2
  ?O2 is Germany_1943

Their minimal generalization becomes the new plausible lower bound condition:
  ?O1 is multi_state_alliance, has_as_member ?O2
  ?O2 is single_state_force

This new lower bound remains less general than (or at most as general as) the plausible upper bound condition:
  ?O1 is multi_member_force, has_as_member ?O2
  ?O2 is force
The lower left side of this slide shows the plausible lower bound condition of the rule. The lower right side shows the condition corresponding to the generated positive example. These two conditions are generalized, as shown in the middle of the slide, by using the climbing generalization hierarchy rule. Notice, for instance, that equal_partners_multi_state_alliance and European_Axis_1943 are generalized to multi_state_alliance. This generalization is based on the object ontology, as illustrated in the following slide. Indeed, multi_state_alliance is the minimal generalization of equal_partners_multi_state_alliance that covers European_Axis_1943.
Fragment of the ontology of forces:

force
  single_member_force
    single_state_force (instances: US_1943, Germany_1943)
    single_group_force
  multi_member_force
    multi_state_force
      multi_state_alliance
        dominant_partner_multi_state_alliance (instance: European_Axis_1943)
        equal_partners_multi_state_alliance (instance: Allied_Forces_1943)
      multi_state_coalition
        dominant_partner_multi_state_coalition
        equal_partners_multi_state_coalition
    multi_group_force

multi_state_alliance is the minimal generalization of equal_partners_multi_state_alliance that covers European_Axis_1943.
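The minimal generalization step can be sketched in code. In the sketch below, the ontology fragment is encoded as hypothetical `PARENT` and `INSTANCE_OF` tables (placing European_Axis_1943 under dominant_partner_multi_state_alliance is an assumption consistent with the surrounding text, since equal_partners_multi_state_alliance does not cover it), and `minimal_generalization` implements the climbing-the-generalization-hierarchy idea.

```python
# Hypothetical ontology tables: each concept points to its parent
# concept; each instance points to its most specific concept.
PARENT = {
    "single_state_force": "single_member_force",
    "single_group_force": "single_member_force",
    "single_member_force": "force",
    "multi_member_force": "force",
    "multi_state_force": "multi_member_force",
    "multi_group_force": "multi_member_force",
    "multi_state_alliance": "multi_state_force",
    "multi_state_coalition": "multi_state_force",
    "equal_partners_multi_state_alliance": "multi_state_alliance",
    "dominant_partner_multi_state_alliance": "multi_state_alliance",
}
INSTANCE_OF = {
    "US_1943": "single_state_force",
    "Germany_1943": "single_state_force",
    "Allied_Forces_1943": "equal_partners_multi_state_alliance",
    "European_Axis_1943": "dominant_partner_multi_state_alliance",
}

def ancestors(concept):
    """The concept together with all its superconcepts, most specific first."""
    chain = [concept]
    while concept in PARENT:
        concept = PARENT[concept]
        chain.append(concept)
    return chain

def covers(concept, instance):
    """Does `concept` cover `instance` in the generalization hierarchy?"""
    return concept in ancestors(INSTANCE_OF[instance])

def minimal_generalization(concept, instance):
    """Climb from `concept` to the first superconcept that covers `instance`."""
    for c in ancestors(concept):
        if covers(c, instance):
            return c
    return None
```

With these tables, climbing from equal_partners_multi_state_alliance yields multi_state_alliance as the first superconcept covering European_Axis_1943, matching the slide.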
Refined rule
IF Identify and test a strategic COG candidate corresponding to a member of a force
  The force is ?O1
explanation: ?O1 has_as_member ?O2
Plausible Upper Bound Condition (unchanged):
  ?O1 is multi_member_force, has_as_member ?O2
  ?O2 is force
Plausible Lower Bound Condition (generalized from equal_partners_multi_state_alliance to multi_state_alliance):
  ?O1 is multi_state_alliance, has_as_member ?O2
  ?O2 is single_state_force
THEN Identify and test a strategic COG candidate for a force
  The force is ?O2
Overview
• The Rule Refinement Problem and Method: Illustration
• General Presentation of the Rule Refinement Method
• Another Illustration of the Rule Refinement Method
• Integrated Modeling, Learning, and Problem Solving
• Characterization of the Disciple Rule Learning Method
• Demo: Problem Solving and Rule Refinement
• Recommended Reading
The rule refinement method: general presentation
Let R be a plausible version space rule, U its plausible upper bound condition, L its plausible lower bound condition, and E a new example of the rule.
1. If E is covered by U but not by L then
• If E is a positive example then L needs to be generalized as little as possible to cover it, while remaining less general than or at most as general as U.
• If E is a negative example then U needs to be specialized as little as possible to no longer cover it, while remaining more general than or at least as general as L. Alternatively, both bounds need to be specialized.
2. If E is covered by L then
• If E is a positive example then R need not be refined.
• If E is a negative example then both U and L need to be specialized as little as possible to no longer cover this example, while still covering the known positive examples of the rule. If this is not possible, then E represents a negative exception to the rule.
3. If E is not covered by U then
• If E is a positive example then it represents a positive exception to the rule.
• If E is a negative example then no refinement is necessary.
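The three cases above can be sketched as a dispatch function. All predicates and refinement operations are passed in as hypothetical callables; this is a sketch of the control flow only, not Disciple's implementation.

```python
# Case analysis for refining a PVS rule with a new example E.
# covered_by_upper / covered_by_lower test E against bounds U and L;
# generalize_lower / specialize_upper / specialize_both stand in for
# the minimal generalization and specialization operations.
def refine(rule, example, positive, covered_by_upper, covered_by_lower,
           generalize_lower, specialize_upper, specialize_both):
    in_upper = covered_by_upper(rule, example)
    in_lower = covered_by_lower(rule, example)
    if in_upper and not in_lower:                 # case 1
        if positive:
            generalize_lower(rule, example)       # minimally, staying within U
        else:
            specialize_upper(rule, example)       # minimally, staying above L
        return "refined"
    if in_lower:                                  # case 2
        if positive:
            return "no refinement needed"
        ok = specialize_both(rule, example)       # must keep known positives
        return "refined" if ok else "negative exception"
    # case 3: E is not covered by the upper bound
    return "positive exception" if positive else "no refinement needed"
```

Note that a positive exception and a negative exception both leave the rule unchanged but are recorded, since they signal that the representation language (the object ontology) may need to be extended.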
The rule refinement method: general presentation
1. If E is covered by U but not by L then
• If E is a positive example then L needs to be generalized as little as possible to cover it, while remaining less general than or at most as general as U.
The rule refinement method: general presentation
1. If E is covered by U but not by L then
• If E is a negative example then U needs to be specialized as little as possible to no longer cover it, while remaining more general than or at least as general as L. Alternatively, both bounds need to be specialized.
Strategy 1: Specialize U by using a specialization rule (e.g., descending the generalization hierarchy, or specializing a numeric interval).
The rule refinement method: general presentation
Strategy 2: Find a failure explanation EXw of why E is a wrong problem solving episode. EXw identifies the features that make E wrong; the inductive hypothesis is that correct problem solving episodes should not have these features. EXw is therefore taken as an example of a condition that correct problem solving episodes should not satisfy: an Except-When condition. Based on EXw, an initial Except-When plausible version space condition is generated; this condition also needs to be learned, based on additional examples.
The rule refinement method: general presentation
Strategy 3: Find an additional explanation EXw of the correct problem solving episodes which is not satisfied by the current wrong problem solving episode. Specialize both bounds of the plausible version space condition by:
- adding to the upper bound the most general generalization of EXw corresponding to the examples encountered so far;
- adding to the lower bound the least general generalization of EXw corresponding to the examples encountered so far.
The rule refinement method: general presentation
2. If E is covered by L then
• If E is a positive example then R need not be refined.
The rule refinement method: general presentation
2. If E is covered by L then
• If E is a negative example then both U and L need to be specialized as little as possible to no longer cover this example, while still covering the known positive examples of the rule. If this is not possible, then E represents a negative exception to the rule.
Strategy 1: Find a failure explanation EXw of why E is a wrong problem solving episode and create an Except-When plausible version space condition, as indicated before.
The rule refinement method: general presentation
3. If E is not covered by U then
• If E is a positive example then it represents a positive exception to the rule.
• If E is a negative example then no refinement is necessary.
Overview
• The Rule Refinement Problem and Method: Illustration
• General Presentation of the Rule Refinement Method
• Another Illustration of the Rule Refinement Method
• Integrated Modeling, Learning, and Problem Solving
• Characterization of the Disciple Rule Learning Method
• Demo: Problem Solving and Rule Refinement
• Recommended Reading
Initial example from which a rule was learned
IF the task to accomplish is: Identify the strategic COG candidates with respect to the industrial civilization of US_1943
Question: Who or what is a strategically critical industrial civilization element in US_1943?
Answer: Industrial_capacity_of_US_1943
THEN: Industrial_capacity_of_US_1943 is a strategic COG candidate for US_1943
Learned PVS rule to be refined

INFORMAL STRUCTURE OF THE RULE
IF Identify the strategic COG candidates with respect to the industrial civilization of ?O1
Question: Who or what is a strategically critical industrial civilization element in ?O1?
Answer: ?O2
THEN ?O2 is a strategic COG candidate for ?O1

FORMAL STRUCTURE OF THE RULE
IF Identify the strategic COG candidates with respect to the industrial civilization of a force
  The force is ?O1
explanation: ?O1 has_as_industrial_factor ?O2; ?O2 is_a_major_generator_of ?O3
Plausible Upper Bound Condition:
  ?O1 IS Force, has_as_industrial_factor ?O2
  ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
  ?O3 IS Product
Plausible Lower Bound Condition:
  ?O1 IS US_1943, has_as_industrial_factor ?O2
  ?O2 IS Industrial_capacity_of_US_1943, is_a_major_generator_of ?O3
  ?O3 IS War_materiel_and_transports_of_US_1943
THEN A strategic COG relevant factor is a strategic COG candidate for a force
  The force is ?O1
  The strategic COG relevant factor is ?O2
Positive example covered by the upper bound

Positive example that satisfies the upper bound:
IF the task to accomplish is: Identify the strategic COG candidates with respect to the industrial civilization of a force
  The force is Germany_1943
THEN accomplish the task: A strategic COG relevant factor is a strategic COG candidate for a force
  The force is Germany_1943
  The strategic COG relevant factor is Industrial_capacity_of_Germany_1943
explanation: Germany_1943 has_as_industrial_factor Industrial_capacity_of_Germany_1943; Industrial_capacity_of_Germany_1943 is_a_major_generator_of War_materiel_and_fuel_of_Germany_1943

Condition satisfied by the positive example (less general than the plausible upper bound):
  ?O1 IS Germany_1943, has_as_industrial_factor ?O2
  ?O2 IS Industrial_capacity_of_Germany_1943, is_a_major_generator_of ?O3
  ?O3 IS War_materiel_and_fuel_of_Germany_1943
Minimal generalization of the plausible lower bound

Plausible Lower Bound Condition (from rule):
  ?O1 IS US_1943, has_as_industrial_factor ?O2
  ?O2 IS Industrial_capacity_of_US_1943, is_a_major_generator_of ?O3
  ?O3 IS War_materiel_and_transports_of_US_1943

Condition satisfied by the positive example:
  ?O1 IS Germany_1943, has_as_industrial_factor ?O2
  ?O2 IS Industrial_capacity_of_Germany_1943, is_a_major_generator_of ?O3
  ?O3 IS War_materiel_and_fuel_of_Germany_1943

Their minimal generalization becomes the new plausible lower bound condition:
  ?O1 IS Single_state_force, has_as_industrial_factor ?O2
  ?O2 IS Industrial_capacity, is_a_major_generator_of ?O3
  ?O3 IS Strategically_essential_goods_or_materiel

This new lower bound remains less general than (or at most as general as) the plausible upper bound condition:
  ?O1 IS Force, has_as_industrial_factor ?O2
  ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
  ?O3 IS Product
Generalization hierarchy of forces (fragment):

<object>
  Force
    Opposing_force
    Multi_state_force (instances: Anglo_allies_1943, European_axis_1943)
    Single_state_force (instances: US_1943, Britain_1943, Germany_1943, Italy_1943)
    Multi_group_force
    Single_group_force
  Group

Anglo_allies_1943 has component_state US_1943 and Britain_1943; European_axis_1943 has component_state Germany_1943 and Italy_1943.
Generalized rule
IF Identify the strategic COG candidates with respect to the industrial civilization of a force
  The force is ?O1
explanation: ?O1 has_as_industrial_factor ?O2; ?O2 is_a_major_generator_of ?O3
Plausible Upper Bound Condition (unchanged):
  ?O1 IS Force, has_as_industrial_factor ?O2
  ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
  ?O3 IS Product
Plausible Lower Bound Condition (generalized):
  ?O1 IS Single_state_force, has_as_industrial_factor ?O2
  ?O2 IS Industrial_capacity, is_a_major_generator_of ?O3
  ?O3 IS Strategically_essential_goods_or_materiel
THEN A strategic COG relevant factor is a strategic COG candidate for a force
  The force is ?O1
  The strategic COG relevant factor is ?O2
A negative example covered by the upper bound

Negative example that satisfies the upper bound:
IF the task to accomplish is: Identify the strategic COG candidates with respect to the industrial civilization of a force
  The force is Italy_1943
THEN accomplish the task: A strategic COG relevant factor is a strategic COG candidate for a force
  The force is Italy_1943
  The strategic COG relevant factor is Farm_implement_industry_of_Italy_1943
explanation: Italy_1943 has_as_industrial_factor Farm_implement_industry_of_Italy_1943; Farm_implement_industry_of_Italy_1943 is_a_major_generator_of Farm_implements_of_Italy_1943

Condition satisfied by the negative example (less general than the plausible upper bound, but not covered by the lower bound):
  ?O1 IS Italy_1943, has_as_industrial_factor ?O2
  ?O2 IS Farm_implement_industry_of_Italy_1943, is_a_major_generator_of ?O3
  ?O3 IS Farm_implements_of_Italy_1943
Automatic generation of plausible explanations

The agent proposed: IF the task is Identify the strategic COG candidates with respect to the industrial civilization of Italy_1943; Question: Who or what is a strategically critical industrial civilization element in Italy_1943?; THEN Farm_implement_industry_of_Italy_1943 is a strategic COG candidate for Italy_1943. The expert rejects this reduction (No!); a strategically critical element would instead be Industrial_capacity_of_Italy_1943.

The agent generates a list of plausible explanations of the failure, from which the expert has to select the correct one:
  Farm_implement_industry_of_Italy_1943 IS_NOT Industrial_capacity
  Farm_implements_of_Italy_1943 IS_NOT Strategically_essential_goods_or_materiel
Minimal specialization of the plausible upper bound

Plausible Upper Bound Condition (from rule):
  ?O1 IS Force, has_as_industrial_factor ?O2
  ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
  ?O3 IS Product

Condition satisfied by the negative example:
  ?O1 IS Italy_1943, has_as_industrial_factor ?O2
  ?O2 IS Farm_implement_industry_of_Italy_1943, is_a_major_generator_of ?O3
  ?O3 IS Farm_implements_of_Italy_1943

New Plausible Upper Bound Condition (specialized to exclude the negative example):
  ?O1 IS Force, has_as_industrial_factor ?O2
  ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
  ?O3 IS Strategically_essential_goods_or_materiel

This new upper bound remains more general than (or at least as general as) the new plausible lower bound condition:
  ?O1 IS Single_state_force, has_as_industrial_factor ?O2
  ?O2 IS Industrial_capacity, is_a_major_generator_of ?O3
  ?O3 IS Strategically_essential_goods_or_materiel
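The minimal specialization step can be sketched as a one-step descent in the generalization hierarchy: replace the upper-bound concept with the most general subconcept that still covers the known positive examples but excludes the negative one. The `CHILDREN`, `PARENT`, and `INSTANCE_OF` tables below are hypothetical stand-ins for the object ontology (classifying Farm_implements_of_Italy_1943 under Non-strategically_essential_goods_or_services is an assumption based on the hierarchy fragment on the next slide).

```python
# Hypothetical ontology tables for the ?O3 variable's hierarchy.
CHILDREN = {
    "Product": ["Strategically_essential_goods_or_materiel",
                "Non-strategically_essential_goods_or_services"],
}
PARENT = {
    "Strategically_essential_goods_or_materiel": "Product",
    "Non-strategically_essential_goods_or_services": "Product",
}
INSTANCE_OF = {
    "War_materiel_and_transports_of_US_1943":
        "Strategically_essential_goods_or_materiel",
    "War_materiel_and_fuel_of_Germany_1943":
        "Strategically_essential_goods_or_materiel",
    "Farm_implements_of_Italy_1943":
        "Non-strategically_essential_goods_or_services",
}

def covers(concept, instance):
    """Climb from the instance's concept; True if we reach `concept`."""
    c = INSTANCE_OF[instance]
    while True:
        if c == concept:
            return True
        c = PARENT.get(c)
        if c is None:
            return False

def minimal_specialization(concept, positives, negative, children):
    """Most general subconcept of `concept` that keeps every positive
    example covered and excludes the negative one (one-step descent)."""
    for child in children.get(concept, []):
        if all(covers(child, p) for p in positives) and not covers(child, negative):
            return child
    return None
```

Descending from Product to Strategically_essential_goods_or_materiel keeps the US and Germany positives covered while excluding the Italy negative, which is exactly the specialization shown on this slide.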
Fragment of the generalization hierarchy:

<object>
  Product (the old upper bound for ?O3)
    Non-strategically_essential_goods_or_services
    Strategically_essential_goods_or_materiel (the new, specialized upper bound; also in the lower bound)
      instances: War_materiel_and_transports (+), War_materiel_and_fuel (+)
  Resource_or_infrastructure_element
    Raw_material
      Strategic_raw_material
    Strategically_essential_resource_or_infrastructure_element
      Strategically_essential_infrastructure_element
        Main_airport, Main_seaport, Sole_airport, Sole_seaport

Farm_implements_of_Italy_1943 (−) is not an instance of Strategically_essential_goods_or_materiel, so descending from Product to Strategically_essential_goods_or_materiel excludes the negative example while still covering the positive ones.
Specialized rule
IF Identify the strategic COG candidates with respect to the industrial civilization of a force
  The force is ?O1
explanation: ?O1 has_as_industrial_factor ?O2; ?O2 is_a_major_generator_of ?O3
Plausible Upper Bound Condition (?O3 specialized from Product to Strategically_essential_goods_or_materiel):
  ?O1 IS Force, has_as_industrial_factor ?O2
  ?O2 IS Industrial_factor, is_a_major_generator_of ?O3
  ?O3 IS Strategically_essential_goods_or_materiel
Plausible Lower Bound Condition (unchanged):
  ?O1 IS Single_state_force, has_as_industrial_factor ?O2
  ?O2 IS Industrial_capacity, is_a_major_generator_of ?O3
  ?O3 IS Strategically_essential_goods_or_materiel
THEN A strategic COG relevant factor is a strategic COG candidate for a force
  The force is ?O1
  The strategic COG relevant factor is ?O2
Overview
• The Rule Refinement Problem and Method: Illustration
• General Presentation of the Rule Refinement Method
• Another Illustration of the Rule Refinement Method
• Integrated Modeling, Learning, and Problem Solving
• Characterization of the Disciple Rule Learning Method
• Demo: Problem Solving and Rule Refinement
• Recommended Reading
Control of modeling, learning and problem solving (diagram): the expert's input task enters the mixed-initiative problem solver, which uses the ontology and the rules to generate a reduction. An accepted reduction leads to rule refinement and contributes to the solution; a rejected reduction also triggers rule refinement (specialization); a new reduction provided by the expert goes through modeling, formalization, and learning, together with task refinement.
This slide shows the interaction between the expert and the agent when the agent has already learned some rules.
• This interaction is governed by the mixed-initiative problem solver.
• The expert formulates the initial task.
• The agent then attempts to reduce this task by using the previously learned rules. Let us assume that the agent succeeded in proposing a reduction of the current task.
• The expert has to accept the reduction if it is correct, or reject it if it is incorrect.
• If the reduction proposed by the agent is accepted, the rule that generated it and its component tasks are generalized. The process then resumes, with the agent attempting to reduce the new task.
• If the reduction is rejected, the agent will have to specialize the rule, and possibly its component tasks. In this case the expert will have to indicate the correct reduction, going through the normal steps of modeling, formalization, and learning.
• Similarly, when the agent cannot propose a reduction of the current task, the expert will have to indicate it, again going through the steps of modeling, formalization, and learning.
A systematic approach to agent teaching (diagram): the task Identify and test a strategic COG candidate for the Sicily_1943 scenario is reduced for the two opposing forces. For Allied_Forces_1943: the individual states US_1943 (branches #1–#5: government, people, military, economy, other factors) and Britain_1943 (branches #6–#10), and the alliance as a whole (branch #13). For European_Axis_1943: the individual states Germany_1943 (branch #11) and Italy_1943 (branch #12), and the alliance as a whole (branch #14).
This slide shows a recommended order of operations for teaching the agent:
• Modeling for branches #1 through #5.
• Rule Learning for branches #1 through #5.
• Problem solving, Rule refinement, Modeling, and Rule Learning for branches #6 through #10. You will notice that several of the rules learned from branch #1 will apply to generate branch #6. One only needs to model and teach Disciple for those steps where the previously learned rules do not apply (i.e., for the aspects where there are significant differences between US_1943 and Britain_1943 with respect to their governments). Similarly, several of the rules learned from branch #2 will apply to generate branch #7, and so on.
• Problem solving, Rule refinement, Modeling, and Rule Learning for branches #11 and #12. Again, many of the rules learned from branches #1 through #10 will apply for branches #11 and #12.
• Modeling for branch #13.
• Rule Learning for branch #13.
• Problem solving, Rule refinement, Modeling, and Rule Learning for branch #14.
Overview
• The Rule Refinement Problem and Method: Illustration
• General Presentation of the Rule Refinement Method
• Another Illustration of the Rule Refinement Method
• Integrated Modeling, Learning, and Problem Solving
• Characterization of the Disciple Rule Learning Method
• Demo: Problem Solving and Rule Refinement
• Recommended Reading
This slide shows the relationship between the plausible lower bound condition, the plausible upper bound condition, and the exact (hypothetical) condition that the agent is attempting to learn.

During rule learning, both the upper bound and the lower bound are generalized and specialized to converge toward one another and toward the hypothetical exact condition. This is different from the classical version space method, where the upper bound is only specialized and the lower bound is only generalized.

Notice also that, as opposed to the classical version space method (where the exact condition is always between the upper and the lower bound conditions), in Disciple the exact condition may not include part of the plausible lower bound condition, and may include a part that is outside the plausible upper bound condition. We say that the plausible lower bound is, AS AN APPROXIMATION, less general than the hypothetical exact condition. Similarly, the plausible upper bound is, AS AN APPROXIMATION, more general than the hypothetical exact condition.

These characteristics are a consequence of the incompleteness of the representation language (i.e., the incompleteness of the object ontology), of the heuristic strategies used to learn the rule, and of the fact that the object ontology may evolve during learning.
Characterization of the rule learning method
• Uses the explanation of the first positive example to generate a much smaller version space than the classical version space method.
• Conducts an efficient heuristic search of the version space, guided by explanations and by the maintenance of a single upper bound condition and a single lower bound condition.
• Will always learn a rule, even in the presence of exceptions.
• Learns from a few examples and an incomplete knowledge base.
• Uses a form of multistrategy learning that synergistically integrates learning from examples, learning from explanations, and learning by analogy, to compensate for the incomplete knowledge.
• Uses mixed-initiative reasoning to involve the expert in the learning process.
• Is applicable in complex real-world domains, being able to learn within a complex representation language.
Problem solving with PVS rules
• If the rule's main PVS condition is not satisfied, the rule's conclusion is not plausible.
• If the main PVS condition is satisfied but an Except-When PVS condition is also satisfied, the conclusion is (most likely) incorrect.
• If the lower bound of the main PVS condition is satisfied and no Except-When condition is satisfied, the conclusion is (most likely) correct.
• If only the upper bound of the main PVS condition is satisfied and no Except-When condition is satisfied, the conclusion is plausible.
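The cases above suggest the following decision logic, sketched here under the assumption that a conclusion is checked first against the main condition's bounds and then against the Except-When conditions; this is an interpretation for exposition, not Disciple's actual code.

```python
# Qualify a partially learned rule's conclusion from which PVS
# condition bounds the current problem solving episode satisfies.
def qualify_conclusion(main_lower, main_upper, except_when):
    """main_lower / main_upper: are the lower / upper bounds of the
    rule's main PVS condition satisfied? except_when: is any
    Except-When PVS condition satisfied?"""
    if not main_upper:
        return "not plausible"
    if except_when:
        return "(most likely) incorrect"
    if main_lower:
        return "(most likely) correct"
    return "plausible"   # upper bound only, no Except-When match
```

For example, an episode matching only the upper bound of the main condition (and no Except-When condition) yields a merely plausible conclusion, which is exactly the kind of solution the expert is asked to confirm or reject during refinement.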
Overview
• The Rule Refinement Problem and Method: Illustration
• General Presentation of the Rule Refinement Method
• Another Illustration of the Rule Refinement Method
• Integrated Modeling, Learning, and Problem Solving
• Characterization of the Disciple Rule Learning Method
• Demo: Problem Solving and Rule Refinement
• Recommended Reading
DISCIPLE-RKF
Disciple-RKF/COG: Integrated Modeling, Learning and Problem Solving
Disciple uses the partially learned rules in problem solving and refines them based on the expert's feedback. This is done in the Refining mode.
Disciple applies previously learned rules in other similar cases. The expert can expand the "More…" node to view the solution generated by the rule.