Template knowledge models
Reusing knowledge model elements
Lessons
• Knowledge models can be partially reused in new applications
• Type of task = main guide for reuse
• Catalog of task templates
  • small set in this book
  • see also other repositories
The need for reuse
• prevents "re-inventing the wheel"
• cost/time efficient
• decreases complexity
• quality assurance
Task template
• reusable combination of model elements
  • (provisional) inference structure
  • typical control structure
  • typical domain schema from the task point of view
• specific for a task type
• supports top-down knowledge modeling
A typology of tasks
• range of task types is limited
  • an advantage of KE compared to general SE
• background: cognitive science/psychology
• several task typologies have been proposed in the literature
• the typology is based on the notion of “system”
The term “system”
• abstract term for the object to which a task is applied
  • in technical diagnosis: the artifact or device being diagnosed
  • in elevator configuration: the elevator to be designed
• the system does not need to exist (yet)
Analytic versus synthetic tasks
• analytic tasks
  • system pre-exists, but is typically not completely "known"
  • input: some data about the system
  • output: some characterization of the system
• synthetic tasks
  • system does not yet exist
  • input: requirements for the system to be constructed
  • output: constructed system description
Structure of template description in catalog
• General characterization
  • typical features of the task
• Default method
  • roles, sub-functions, control structure, inference structure
• Typical variations
  • frequently occurring refinements/changes
• Typical domain-knowledge schema
  • assumptions about the underlying domain-knowledge structure
Classification
• establish the correct class for an object
• the object should be available for inspection
  • "natural" objects
  • examples: rock classification, apple classification
• terminology: object, class, attribute, feature
• one of the simplest analytic tasks; many methods exist
• other analytic tasks are sometimes reduced to a classification problem, especially diagnosis
Classification: pruning method
• generate all classes to which the object may belong
• specify an object attribute
• obtain the value of the attribute
• remove all classes that are inconsistent with this value
Classification: method control

while new-solution generate(object -> candidate) do
    candidate-classes := candidate union candidate-classes;
end while
while new-solution specify(candidate-classes -> attribute) and
      length candidate-classes > 1 do
    obtain(attribute -> new-feature);
    current-feature-set := new-feature union current-feature-set;
    for-each candidate in candidate-classes do
        match(candidate + current-feature-set -> truth-value);
        if truth-value = false
        then candidate-classes := candidate-classes subtract candidate;
    end for-each
end while
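A minimal Python sketch of this pruning loop may help; it is not part of the original catalog, and the CLASS_FEATURES table, the obtain callback, and the rock examples are invented for illustration:

# Hypothetical sketch of the pruning classification method.
# CLASS_FEATURES and the example findings are assumptions, not from the source.
CLASS_FEATURES = {
    "granite":   {"grain": "coarse", "origin": "igneous"},
    "basalt":    {"grain": "fine",   "origin": "igneous"},
    "sandstone": {"grain": "coarse", "origin": "sedimentary"},
}

def classify(obtain):
    """obtain(attribute) -> observed value; returns the remaining candidates."""
    candidates = set(CLASS_FEATURES)             # generate all possible classes
    attributes = {a for f in CLASS_FEATURES.values() for a in f}
    while len(candidates) > 1 and attributes:
        attribute = attributes.pop()             # specify an object attribute
        value = obtain(attribute)                # obtain its value
        # remove all classes inconsistent with the new feature
        candidates = {c for c in candidates
                      if CLASS_FEATURES[c].get(attribute) == value}
    return candidates

features = {"grain": "coarse", "origin": "sedimentary"}
print(classify(lambda a: features[a]))           # {'sandstone'}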
Classification: method variations
• limited candidate generation
• different forms of attribute selection
  • decision tree
  • information theory
  • user control
• hierarchical search through the class structure
Assessment
• find a decision category for a case, based on domain-specific norms
• typical domains: financial applications (e.g. loan applications), community service
• terminology: case, decision, norms
• some similarities with monitoring; differences:
  • timing: assessment is more static
  • output: decision versus discrepancy
Assessment: abstract & match method
• abstract the case data
• specify the norms applicable to the case
  • e.g. “rent-fits-income”, “correct-household-size”
• select a single norm
• compute a truth value for the norm with respect to the case
• see whether this leads to a decision
• repeat norm selection and evaluation until a decision is reached
Assessment: inference structure

[Inference-structure diagram: case -> abstract -> abstracted case; abstracted case -> specify -> norms; norms -> select -> norm; abstracted case + norm -> evaluate -> norm value; norm values -> match -> decision]
Assessment: method control

while new-solution abstract(case-description -> abstracted-case) do
    case-description := abstracted-case;
end while
specify(abstracted-case -> norms);
repeat
    select(norms -> norm);
    evaluate(abstracted-case + norm -> norm-value);
    evaluation-results := norm-value union evaluation-results;
until has-solution match(evaluation-results -> decision);
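As a hedged illustration (not from the original slides), here is a small Python sketch of abstract & match; the rent/income case, the two norms, and the reject-on-first-violation decision rule are all assumptions:

# Hypothetical sketch of the abstract & match assessment method.
# The case data, norms, and decision rule are invented for illustration.
case = {"rent": 520, "income": 1800, "household-size": 3, "bedrooms": 2}

def abstract(case):
    """Add one derived (abstracted) feature per call; None when done."""
    if "rent-to-income" not in case:
        return {**case, "rent-to-income": case["rent"] / case["income"]}
    return None

norms = {
    "rent-fits-income":       lambda c: c["rent-to-income"] <= 0.33,
    "correct-household-size": lambda c: c["household-size"] <= c["bedrooms"] + 2,
}

def assess(case):
    while (new := abstract(case)) is not None:   # abstract until nothing new
        case = new
    results = {}
    for name, norm in norms.items():             # select + evaluate
        results[name] = norm(case)
        if not results[name]:                    # match: a violated norm decides
            return "reject", results
    return "accept", results

print(assess(case))                              # ('accept', {...})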
Assessment control: UML notation

[Activity diagram: abstract loops while more abstractions remain; when no more abstractions, specify norms; then select norm, evaluate norm, match decision; loop back to select norm when match fails (no decision); end when match succeeds (decision found)]
Assessment: method variations
• norms might be case-specific
  • cf. housing application
• case abstraction may not be needed
• knowledge-intensive norm selection
  • random, heuristic, statistical
  • can be key to efficiency
  • sometimes dictated by human expertise
  • only acceptable if done in a way understandable to experts
Diagnosis
• find the fault that causes a system to malfunction
• example: diagnosis of a copier
• terminology: complaint/symptom, hypothesis, differential, finding(s)/evidence, fault
• nature of the fault varies: state, chain, component
• requires some model of system behavior
  • default method assumes a simple causal model
• sometimes reduced to a classification task
  • direct associations between symptoms and faults
• automation feasible in technical domains
Diagnosis: causal covering method
• find candidate causes (hypotheses) for the complaint using a causal network
• select a hypothesis
• specify an observable for this hypothesis and obtain its value
• verify each hypothesis to see whether it is consistent with the new finding
• continue until a single hypothesis is left or no more observables are available
Diagnosis: method control

while new-solution cover(complaint -> hypothesis) do
    differential := hypothesis add differential;
end while
repeat
    select(differential -> hypothesis);
    specify(hypothesis -> observable);
    obtain(observable -> finding);
    evidence := finding add evidence;
    for-each hypothesis in differential do
        verify(hypothesis + evidence -> result);
        if result = false
        then differential := differential subtract hypothesis;
    end for-each
until length differential =< 1 or "no observables left"
faults := differential;
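The following Python sketch shows the same cover/select/obtain/verify cycle under stated assumptions: the toy causal network for a copier, the observable table, and the findings are all invented for illustration.

# Hypothetical sketch of the causal covering method.
# CAUSAL_NET, OBSERVABLES, and the findings are assumptions, not from the source.
CAUSAL_NET = {                     # cause -> effects
    "empty-toner":  ["pale-copies"],
    "dirty-drum":   ["pale-copies", "streaks"],
    "worn-rollers": ["paper-jam"],
}
OBSERVABLES = {                    # hypothesis -> (observable, value if it holds)
    "empty-toner": ("toner-level", "low"),
    "dirty-drum":  ("drum-state", "dirty"),
}

def diagnose(complaint, obtain):
    # cover: all causes that can produce the complaint
    differential = {h for h, effects in CAUSAL_NET.items() if complaint in effects}
    evidence = {}
    while len(differential) > 1:
        # select a hypothesis with an observable we have not obtained yet
        candidates = [h for h in differential
                      if h in OBSERVABLES and OBSERVABLES[h][0] not in evidence]
        if not candidates:                       # no observables left
            break
        observable, _ = OBSERVABLES[candidates[0]]
        evidence[observable] = obtain(observable)            # obtain a finding
        # verify: drop hypotheses whose expected value contradicts the evidence
        differential = {h for h in differential
                        if h not in OBSERVABLES
                        or evidence.get(OBSERVABLES[h][0],
                                        OBSERVABLES[h][1]) == OBSERVABLES[h][1]}
    return differential

findings = {"toner-level": "ok", "drum-state": "dirty"}
print(diagnose("pale-copies", lambda o: findings[o]))        # {'dirty-drum'}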
Diagnosis: method variations
• inclusion of abstractions
• simulation methods
  • see the literature on model-based diagnosis
• see also the library of diagnosis methods by Benjamins
Monitoring
• analyze an ongoing process to find out whether it behaves according to expectations
• terminology: parameter, norm, discrepancy, historical data
• main features:
  • dynamic nature of the system
  • cyclic task execution
• output is "just" a discrepancy => no explanation
• often: coupling of monitoring and diagnosis
  • output of monitoring is input for diagnosis
Monitoring: data-driven method
• starts when new findings are received
• for each finding, a parameter and a norm value are specified
• comparison of the finding with the norm generates a difference description
• the difference is classified as a discrepancy using data from previous monitoring cycles
Monitoring: method control

receive(new-finding);
select(new-finding -> parameter);
specify(parameter -> norm);
compare(norm + new-finding -> difference);
classify(difference + historical-data -> discrepancy);
historical-data := new-finding add historical-data;
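A small Python sketch of one such data-driven cycle follows; the norms, the "large twice in a row" classification rule, and the finding stream are invented for illustration.

# Hypothetical sketch of the data-driven monitoring method.
# NORMS, the classification rule, and the stream are assumptions.
NORMS = {"temperature": 75.0, "pressure": 2.0}   # expected value per parameter

def classify(parameter, difference, history):
    """A difference counts as a discrepancy if it is large twice in a row."""
    previous = [d for p, d in history if p == parameter]
    if abs(difference) > 5.0 and previous and abs(previous[-1]) > 5.0:
        return "discrepancy"
    return "normal"

def monitor(findings):
    history = []                                  # historical data
    for parameter, value in findings:             # receive a new finding
        norm = NORMS[parameter]                   # specify the norm
        difference = value - norm                 # compare
        status = classify(parameter, difference, history)
        history.append((parameter, difference))   # update historical data
        yield parameter, difference, status

stream = [("temperature", 76.0), ("temperature", 82.0), ("temperature", 83.5)]
for result in monitor(stream):
    print(result)
# ('temperature', 1.0, 'normal')
# ('temperature', 7.0, 'normal')
# ('temperature', 8.5, 'discrepancy')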
Monitoring: method variations
• model-driven monitoring
  • the system has the initiative
  • typically executed at regular points in time
  • example: software project management
• classification function treated as a task in its own right
  • apply a classification method
  • add a data-abstraction inference
Prediction
• analytic task with some synthetic features
• analyzes current system behavior to construct a description of a system state at a future point in time
• example: weather forecasting
• often a sub-task in diagnosis
• also found in knowledge-intensive modules of teaching systems, e.g. for physics
• inverse task: retrodiction (e.g. big-bang theory)
Synthesis
• given a set of requirements, construct a system description that fulfills these requirements
“Ideal” synthesis method
• operationalize the requirements
  • preferences and constraints
• generate all possible system structures
• select the subset of valid system structures
  • those that obey the constraints
• order the valid system structures
  • based on the preferences
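A minimal Python sketch of this generate/select/order scheme follows; the component lists, the budget constraint, and the cheaper-is-better preference are invented for illustration (exhaustive generation is exactly why this method is only "ideal").

# Hypothetical sketch of the "ideal" synthesis method.
# CPUS, RAM, the constraint, and the preference are assumptions.
from itertools import product

CPUS = [("cpu-basic", 100), ("cpu-fast", 250)]
RAM = [("ram-8GB", 40), ("ram-16GB", 70), ("ram-32GB", 130)]

def constraint(structure):          # hard requirement: total cost <= 300
    return sum(price for _, price in structure) <= 300

def preference(structure):          # soft requirement: cheaper is better
    return sum(price for _, price in structure)

candidates = list(product(CPUS, RAM))            # generate all structures
valid = [s for s in candidates if constraint(s)] # select the valid ones
ordered = sorted(valid, key=preference)          # order by preference

for structure in ordered:
    print([name for name, _ in structure])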
Design
• synthetic task in which the system to be constructed is a physical artifact
• example: design of a car
• can include creative design of components
  • creative design is too hard a nut to crack for current knowledge technology
• sub-type of design that excludes creative design => configuration design
Configuration design
• given predefined components, find an assembly that satisfies the requirements and obeys the constraints
• example: configuration of an elevator or a PC
• terminology: component, parameter, constraint, preference, requirement (hard & soft)
• form of design that is well suited for automation, but computationally demanding
Configuration: propose & revise method
• simple basic loop:
  • propose a design extension
  • verify the new design
  • if verification fails, revise the design
• specific domain-knowledge requirements
  • revise strategies
• method can also be used for other synthetic tasks
  • assignment with backtracking
  • skeletal planning
Configuration: method control

operationalize(requirements -> hard-reqs + soft-reqs);
specify(requirements -> skeletal-design);
while new-solution propose(skeletal-design + design + soft-reqs -> extension) do
    design := extension union design;
    verify(design + hard-reqs -> truth-value + violation);
    if truth-value = false
    then critique(violation + design -> action-list);
         repeat
             select(action-list -> action);
             modify(design + action -> design);
             verify(design + hard-reqs -> truth-value + violation);
         until truth-value = true;
end while
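To make the propose/verify/revise cycle concrete, here is a hedged Python sketch; the parameters, choices, price table, budget constraint, and fix list are all invented for illustration, and a real method would need fixes for every possible violation.

# Hypothetical sketch of the propose & revise configuration method.
# PARAMETERS, CHOICES, PRICE, the constraint, and FIXES are assumptions.
PARAMETERS = ["cpu", "ram", "disk"]
CHOICES = {"cpu": ["basic", "fast"], "ram": [8, 16, 32], "disk": [256, 512]}
PRICE = {"basic": 100, "fast": 250, 8: 40, 16: 70, 32: 130, 256: 60, 512: 110}

def verify(design):
    """Hard constraint: total price <= 400. Returns (ok, violation)."""
    cost = sum(PRICE[v] for v in design.values())
    return (True, None) if cost <= 400 else (False, "over-budget")

# fix knowledge: per violation, cheaper values to try (revise strategies)
FIXES = {"over-budget": [("cpu", "basic"), ("ram", 8), ("disk", 256)]}

def configure():
    design = {}
    for parameter in PARAMETERS:                     # propose an extension
        design[parameter] = CHOICES[parameter][-1]   # prefer the best option
        ok, violation = verify(design)               # verify the new design
        if not ok:                                   # critique + modify
            for param, cheaper in FIXES[violation]:
                if param in design:
                    design[param] = cheaper
                    ok, violation = verify(design)
                    if ok:
                        break
    return design

print(configure())    # {'cpu': 'basic', 'ram': 32, 'disk': 512}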
Configuration: method variations
• perform verification plus revision only once a value has been proposed for all design elements
  • can have a large impact on the competence of the method
• avoid the use of fix knowledge
  • fixes are search heuristics for navigating the potentially extensive space of alternative designs
  • alternative: chronological backtracking