Software Agents: Completing Patterns and Constructing User Interfaces
• Jeffrey C. Schlimmer
• Leonard A. Hermens
• School of Electrical Engineering & Computer Science, Washington State University, Pullman
• Presented by Marko Puljic
• Motivation
• Learning
• Prediction
Introduction
• Note-taking system: actively predicts what the user is going to write, based on previously taken notes.
• Motivation for building the note-taking system:
  • People like to record information for later retrieval (fast data structures and algorithms).
  • It speeds up information entry and reduces errors.
  • Physical storage is abundant, and duplication and distribution are inexpensive (due to high-density devices and high-speed networks).
Motivation in General for the Pattern Recognition Problem
“Given some examples of complex signals and the correct decisions for them, make decisions automatically for a stream of future examples.”
Examples: identifying fingerprints, highlighting potential tumors on a mammogram, handwriting recognition, visual inspection of manufactured products for quality control, speech recognition.
User’s Perspective
+ The software has to improve speed and accuracy as the user enters notes about various domains of interest.
+ The agent continuously predicts a likely completion as the user writes.
+ There is a small completion button whose color ranges in saturation from 1 (green), when the agent is confident, to 0 (white), when the agent has no confidence (see the sketch below).
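A minimal sketch of how such a confidence-driven button color might be computed; the function name, RGB values, and linear interpolation are illustrative assumptions, not taken from the paper.

```python
def completion_button_color(confidence: float) -> tuple:
    """Map agent confidence in [0, 1] to an RGB color for the completion button.

    0.0 -> white (no confidence); 1.0 -> saturated green (full confidence).
    Intermediate values linearly interpolate the saturation (an assumption).
    """
    c = max(0.0, min(1.0, confidence))       # clamp to [0, 1]
    white = (255, 255, 255)                  # saturation 0
    green = (0, 200, 0)                      # saturation 1
    return tuple(round(w + c * (g - w)) for w, g in zip(white, green))

print(completion_button_color(0.0))   # (255, 255, 255) -- white
print(completion_button_color(1.0))   # (0, 200, 0)     -- green
```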
Agent
• Learns to assist users by watching them complete tasks.
• Helps to capture and organize the information.
• Goal and drives:
  • Predict the input that will be given by the user.
  • Learn the pattern.
  • Prompt the prediction based on the input.
• Environment: the hardware and the user’s input string.
• Perception: the input string.
• Behind the interface, the software acts on behalf of the user, helping to capture and organize the information.
Learning a Syntax
+ To characterize the syntax, the agent learns finite-state machines (FSMs).
+ To generate predictions, the agent learns decision-tree classifiers situated at states within the FSMs.
+ FSMs are well understood and relatively expressive; Angluin (1982) and Berwick and Pilato (1987) present a straightforward algorithm for learning a specific subclass of FSMs called k-reversible FSMs.
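The slide does not show the learning algorithm itself; below is a minimal sketch (an assumption, not the authors' code) of the first step such a learner might take: building a prefix-tree acceptor over tokenized notes, to which a k-reversible state-merging step would then be applied. The `State` class and function names are illustrative.

```python
class State:
    """One FSM state: outgoing transitions, token frequencies, and a final flag."""
    def __init__(self, sid: int):
        self.sid = sid
        self.transitions = {}   # token -> State
        self.counts = {}        # token -> how often that token left this state
        self.final = False      # True if some note terminated here

def build_prefix_tree(tokenized_notes):
    """Build a deterministic prefix-tree acceptor from token sequences.

    Angluin's k-reversible algorithm would merge states of this tree;
    that merging step is omitted from this sketch.
    """
    initial = State(0)
    states = [initial]
    for tokens in tokenized_notes:
        state = initial
        for tok in tokens:
            state.counts[tok] = state.counts.get(tok, 0) + 1
            if tok not in state.transitions:
                nxt = State(len(states))
                states.append(nxt)
                state.transitions[tok] = nxt
            state = state.transitions[tok]
        state.final = True
    return initial, states
```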
Learning (tokenization, merging, classifiers)
Tokenization example:
“4096 K PowerBook 170, 1.4MB and 120MB Int. Drives, FPU, 2400/9600 Baud”
:NULL “4096” “ K” “ PowerBook” “ 170,” …
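A minimal sketch of a tokenizer consistent with this example, under the assumption that leading whitespace stays attached to the token that follows it; the paper's actual tokenizer may split more finely (e.g. separating digit runs from letter runs).

```python
import re

def tokenize(note: str):
    """Split a note into tokens, keeping leading whitespace attached to each token."""
    return re.findall(r"\s*\S+", note)

print(tokenize("4096 K PowerBook 170, 1.4MB and 120MB Int. Drives"))
# ['4096', ' K', ' PowerBook', ' 170,', ' 1.4MB', ' and', ' 120MB', ' Int.', ' Drives']
```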
[Figure: notes merged into a single FSM, with classifiers embedded at states 1–7]
Learning Embedded Classifiers
+ A classifier is embedded at each FSM state that has more than one outgoing transition.
+ The classifier gives advice about which transition to take or whether to terminate: it must decide (1) whether to terminate or continue the prediction, and (2) which transition to predict.
+ The classifiers are updated incrementally after the user finishes each note.
+ A classifier predicts based on previous transitions and the frequency of the current state’s transitions.
Learning Embedded Classifiers
A decision tree is embedded in each classifier. Examples (see the sketch below):
Decision tree embedded in state 3:
  If state 1 exited with “2048” then predict “ 20”
  Else if with “4096” then predict “ 40”
  Else if with “6144” then predict “ 40”
  Else if with “8192” then predict “ 40”
Decision tree embedded in state 7:
  If state 7 has not been visited then predict “ FAX”
  Else if state 7 exited with “ Fax” then predict “ Modem”
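A minimal sketch of the two decision trees above, written as plain functions; the lookup-table form and function names are illustrative, not how the agent actually stores its incrementally learned trees.

```python
def predict_at_state_3(state_1_exit_token: str):
    """Decision tree embedded in state 3: branch on the token that exited state 1."""
    tree = {"2048": " 20", "4096": " 40", "6144": " 40", "8192": " 40"}
    return tree.get(state_1_exit_token)        # None if the tree has no branch

def predict_at_state_7(visited_before: bool, state_7_exit_token=None):
    """Decision tree embedded in state 7."""
    if not visited_before:
        return " FAX"
    if state_7_exit_token == " Fax":
        return " Modem"
    return None

print(predict_at_state_3("4096"))   # ' 40'
print(predict_at_state_7(False))    # ' FAX'
```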
Parsing (how to predict)
Example sequence: {:NULL, “12288”, “K”, “PB”}
+ Identify the state requiring the minimum number of insertions, omissions, and replacements necessary to parse the new sequence: “12288” is a novel token, “K” is matched, and “PowerBook” is replaced by “PB”.
+ The initial state has a transition for the first token.
+ State 1 does not have a transition for the next token, “12288”, so a greedy search is started to find a state that accepts “12288”, “K”, or “PB”. The state before state 2 accepts “K”.
+ Another greedy search starts from the state that accepts “K”, looking for “PB”. “PB” cannot be found, so the parser assumes it should skip to the next transition, “PowerBook”.
+ The system generates a prediction from state 2 to prompt the user (a simplified parsing sketch follows below).
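A simplified sketch of the parsing step, reusing the `State` class from the prefix-tree sketch above; it only skips unmatched input tokens greedily, whereas the paper's parser also searches over omitted transitions and replacements.

```python
def greedy_parse(initial, tokens):
    """Greedily parse a token sequence against the FSM.

    Follows a matching transition when one exists; otherwise skips the
    unmatched token and counts the skip.  Returns the state reached and
    the number of skipped tokens (used later for confidence and clustering).
    """
    state, skipped = initial, 0
    for tok in tokens:
        if tok in state.transitions:
            state = state.transitions[tok]
        else:
            skipped += 1        # novel token: skip it
    return state, skipped
```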
[Figure (repeated): notes merged into a single FSM, with classifiers embedded at states 1–7]
Contextual Prompting
• Calculation to compute the confidence:
  confidence = F(prediction) / (F(total) × (1 + skipped))
• F(prediction): frequency of the predicted arc (the number of times this choice was taken while parsing previously observed notes).
• F(total): total frequency of all arcs (and of terminating).
• skipped: number of tokens skipped during parsing.
• Stopping criterion (prompting stops when any of the following holds; see the sketch below):
  • The next prediction is to terminate.
  • At least one token has been predicted, and the confidence of the next prediction is lower than that of the previous one.
  • The next prediction is the same as the last prediction.
  • More than 10 tokens have already been predicted.
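A minimal sketch of the confidence formula and the four stopping rules listed above; the function names and argument conventions are assumptions.

```python
def prediction_confidence(freq_prediction: int, freq_total: int, skipped: int) -> float:
    """confidence = F(prediction) / (F(total) * (1 + skipped))."""
    if freq_total == 0:
        return 0.0
    return freq_prediction / (freq_total * (1 + skipped))

def should_stop(next_is_terminate: bool, tokens_predicted: int,
                confidence: float, last_confidence: float,
                next_token: str, last_token: str) -> bool:
    """Return True if any of the four stopping criteria on the slide holds."""
    return (next_is_terminate
            or (tokens_predicted >= 1 and confidence < last_confidence)
            or next_token == last_token
            or tokens_predicted > 10)
```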
Multiple FSMs
+ It may be necessary to learn a separate syntax for each domain, which raises the problem of deciding which notes should be clustered together to share an FSM.
+ Tactic: a new note is grouped with the FSM that skips the fewest of its tokens. A new FSM is constructed only if all existing FSMs skip more than half of the new note’s tokens (a sketch follows below).
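A minimal sketch of this clustering tactic, assuming a `count_skips(fsm, tokens)` helper (e.g. built on the greedy parse above) that reports how many of the note's tokens an FSM skips; returning `None` signals that a new FSM should be created.

```python
def choose_fsm(fsms, tokens, count_skips):
    """Group a new note with the FSM that skips the fewest of its tokens.

    If there are no FSMs yet, or every existing FSM skips more than half
    of the note's tokens, return None so the caller builds a new FSM.
    """
    if not fsms:
        return None
    best_fsm = min(fsms, key=lambda fsm: count_skips(fsm, tokens))
    if count_skips(best_fsm, tokens) > len(tokens) / 2:
        return None
    return best_fsm
```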