User-Initiated Learning (UIL) Kshitij Judah, Tom Dietterich, Alan Fern, Jed Irvine, Michael Slater, Prasad Tadepalli, Oliver Brdiczka, Jim Thornton, Jim Blythe, Christopher Ellwood, Melinda Gervasio, Bill Jarrold
CALO: Intelligent Assistant for the Desktop Knowledge Worker
• Learn to Understand Meetings
• Learn to Keep User Organized
• Learn to Manage Email
• Learn to Prepare Information Products
• Learn to Schedule and Execute
CALO: Learning to be an Intelligent Assistant. PAL Program Focus: “Learning in the Wild”
User-Initiated Learning
• All of CALO’s learning components can perform Learning In The Wild (LITW)
• But the learning tasks are all pre-defined by CALO’s engineers:
  • What to learn
  • What information is relevant for learning
  • How to acquire training examples
  • How to apply the learned knowledge
• UIL Goal: Make it possible for the user to define new learning tasks after the system is deployed
Motivating Scenario: Forgetting to Set Sensitivity
Timeline: a scientist collaborates with a research team on a classified project.
• Sends email to the team; sets sensitivity to confidential.
• Sends a “Lunch today?” email to a colleague; does not set sensitivity to confidential.
• Sends email to the team; forgets to set sensitivity to confidential.
Motivating Scenario: Forgetting to Set Sensitivity
Timeline (continued): the scientist tells CALO, “Please do not forget to set sensitivity when sending email.”
• The scientist teaches CALO to learn to predict whether the user has forgotten to set sensitivity.
• Later, when the scientist sends email to the team, CALO reminds the user to set sensitivity.
User-CALO Interaction: Teaching CALO to Predict Sensitivity
(Architecture diagram.) Main stages and components: procedure demonstration and learning task creation via Integrated Task Learning (instrumented Outlook events as the user composes a new email; the user then modifies the resulting SPARK procedure); feature guidance (a user interface through which the user selects features); a SAT-based reasoning system over the CALO ontology and knowledge base, which derives legal features, class labels, and training examples from the email and related objects; and a machine learner that produces the trained classifier.
Initiating Learning via Demonstration
• LAPDOG: transforms an observed sequence of instrumented events into a SPARK procedure.
• The SPARK representation generalizes the dataflow between the actions of the workflow.
Initiating Learning via Demonstration
• TAILOR: supports procedure editing.
• For UIL, it allows adding a condition to one or more steps in a procedure.
Initiating Learning via Demonstration • The condition becomes the new predicate to be learned
Inferring Feature Legality
(Subset of the ontology.) An EmailMessage has attributes HasToField, HasSubject, HasBody, HasAttachment, HasSensitivity, …, and is related to a Project (Description, StartDate, EndDate, …), to ToRecipient and CCRecipient contacts (FirstName, LastName, Address, Phone, …), and to a PrevEmailMessage with the same email attributes.
• Naively, the system would use all features reachable in the ontology, for example HasSensitivity.
• It is dangerous to use HasSensitivity: it has a one-to-one correlation with the target and is present at training time, but it is not present at test time.
• Feature filtering removes such features at training time (see the sketch below).
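Below is a simplified sketch (Python) of the feature-filtering idea. It is not the CALO/UIL implementation, which infers legality with SAT-based reasoning over the ontology; here a feature is simply treated as illegal if the only action that sets it occurs at or after the branch point being predicted, so its value would be unknown at prediction time. The feature-to-action mapping and names are hypothetical.

# Hypothetical mapping from ontology features to the instrumented action that sets them.
FEATURE_SET_BY = {
    "HasToField":     "changeEmailField:to",
    "HasSubject":     "changeEmailField:subject",
    "HasBody":        "changeEmailField:body",
    "HasAttachment":  "attachFile",
    "HasSensitivity": "changeEmailField:sensitivity",  # set only by the guarded branch
}

def legal_features(procedure_actions, branch_index, features):
    """Keep features whose setting action occurs strictly before the branch point."""
    before_branch = set(procedure_actions[:branch_index])
    return [f for f in features if FEATURE_SET_BY.get(f) in before_branch]

procedure = [
    "openComposeEmailWindow",
    "changeEmailField:to",
    "changeEmailField:subject",
    "changeEmailField:body",
    "changeEmailField:sensitivity",   # the step guarded by the learned condition
    "sendEmailInitial",
]
print(legal_features(procedure, branch_index=4, features=list(FEATURE_SET_BY)))
# -> ['HasToField', 'HasSubject', 'HasBody']; HasSensitivity is filtered out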
Training Instance Generation
{defprocedure do_rememberSensitivity
  ....
  [do: (openComposeEmailWindow $newEmail)]
  [do: (changeEmailField $newEmail "to")]
  [do: (changeEmailField $newEmail "subject")]
  [do: (changeEmailField $newEmail "body")]
  [if: (learnBranchPoint $newEmail)
    [do: (changeEmailField $newEmail "sensitivity")]]
  [do: (sendEmailInitial $newEmail)]
  ....
}
• Goal: autonomously generate labeled training instances for the learning component from stored user emails.
• Problem: the actions used to create those emails are not stored in the CALO knowledge base, so we need to infer how each email was created.
• Specifically, we want to know: Is the email an instance of the procedure? Which branch was taken when the email was created? Or can no such inference be drawn?
Training Instance Generation
(Diagram.) The knowledge base, domain axioms (relating concepts such as NewComposition, ReplyComposition, and HasAttachment to instrumented actions such as ComposeNewMail, ReplyToMail, AttachFile, and ForwardMail), the SPARK axioms derived from the demonstrated procedure, and a Label Analysis Formula (LAF) over the procedure instance (u1, u2, …, un) and its conditions (C1, C2, …, Cn) are given to the reasoning engine together with each stored email E and the forget assumption. The entailment outcome determines the label:
• If E ∧ forget ⊨ (ProcInstance ∧ Label): positive example.
• If E ∧ forget ⊨ (ProcInstance ∧ ¬Label): negative example.
• Otherwise: discard email E.
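A minimal sketch (Python) of the labeling loop this diagram describes. The entails() function is only a hypothetical stand-in for the SAT-based reasoning engine, and the query strings are schematic rather than a real logical encoding.

def entails(axioms, query):
    # Placeholder: would return True if the axioms entail the query.
    raise NotImplementedError("stand-in for the SAT-based reasoning engine")

def label_email(email_facts, domain_axioms, spark_axioms, forget_assumption):
    """Return 'positive', 'negative', or None (discard) for one stored email."""
    kb = domain_axioms + spark_axioms + email_facts + [forget_assumption]
    if entails(kb, "ProcInstance and Label"):
        return "positive"    # instance of the procedure, and the branch was taken
    if entails(kb, "ProcInstance and not Label"):
        return "negative"    # instance of the procedure, but the branch was not taken
    return None              # no inference can be drawn, so the email is discarded

def generate_training_set(stored_emails, domain_axioms, spark_axioms, forget_assumption):
    examples = []
    for email_facts in stored_emails:
        label = label_email(email_facts, domain_axioms, spark_axioms, forget_assumption)
        if label is not None:
            examples.append((email_facts, label))
    return examples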
The Learning Component
• Logistic regression is used as the core learning algorithm.
• Features: relational features extracted from the ontology.
• Incorporating user advice on features: apply a large prior variance to user-selected features; select the prior variance on the rest of the features through cross-validation.
• Automated model selection (see the sketch below): parameters are the prior variance on the weights and the classification threshold; the technique is maximization of a leave-one-out cross-validation estimate of kappa.
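A rough sketch (Python with scikit-learn, not the original CALO learner) of these ideas: user-selected feature columns are rescaled, which under a shared L2 penalty is equivalent to giving their weights a larger prior variance, and the remaining regularization strength and the classification threshold are chosen by maximizing a leave-one-out estimate of kappa. The boost value and parameter grids are illustrative.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import cohen_kappa_score

def rescale(X, user_cols, boost):
    # Under a shared L2 penalty, multiplying a column by `boost` is equivalent
    # to giving its weight a prior variance boost**2 times larger.
    X = np.asarray(X, dtype=float).copy()
    X[:, user_cols] *= boost
    return X

def loo_kappa(X, y, C, threshold):
    # Leave-one-out estimate of kappa for a given regularization C and threshold.
    preds = []
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf = LogisticRegression(C=C, max_iter=1000)
        clf.fit(X[train_idx], y[train_idx])
        p = clf.predict_proba(X[test_idx])[0, 1]
        preds.append(int(p >= threshold))
    return cohen_kappa_score(y, preds)

def select_model(X, y, user_cols, boost=10.0,
                 C_grid=(0.01, 0.1, 1.0, 10.0), thresholds=(0.3, 0.5, 0.7)):
    Xs = rescale(X, user_cols, boost)
    kappa, C, t = max((loo_kappa(Xs, y, C, t), C, t)
                      for C in C_grid for t in thresholds)
    clf = LogisticRegression(C=C, max_iter=1000).fit(Xs, y)
    return clf, C, t, kappa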
Empirical Evaluation
• Problems: Attachment Prediction; Importance Prediction.
• Learning configurations compared:
  • No User Advice + Fixed Model Parameters
  • User Advice + Fixed Model Parameters
  • No User Advice + Automatic Parameter Tuning
  • User Advice + Automatic Parameter Tuning
• User advice: 18 keywords in the body text for each problem.
Empirical Evaluation: Data Set
• Set of 340 emails obtained from a real desktop user: 256 training emails + 84 test emails.
• For each training-set size, compute mean kappa (κ) on the test set to generate learning curves.
• κ is a statistical measure of inter-rater agreement for discrete classes (defined below).
• κ is a common evaluation metric when the classes have a skewed distribution.
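For reference (standard definition, not stated on the slide), Cohen's kappa compares the observed agreement $p_o$ between the classifier's predictions and the true labels with the agreement $p_e$ expected by chance:

\[
\kappa = \frac{p_o - p_e}{1 - p_e}
\]

so $\kappa = 1$ indicates perfect agreement and $\kappa = 0$ indicates agreement no better than chance.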
Empirical Evaluation: Learning Curves (Attachment Prediction plots)
Empirical Evaluation: Learning Curves (Importance Prediction plots)
Empirical Evaluation: Robustness to Bad Advice
• We also tested the robustness of the system to bad advice.
• Bad advice was generated as follows (sketched below): use SVM-based feature selection in WEKA to produce a ranking of the user-provided keywords, then replace the top three words in the ranking with randomly selected words from the vocabulary.
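A small sketch (Python) of this corruption step, under the assumption that the keyword ranking comes from a standard SVM-based attribute ranking (e.g., WEKA's SVMAttributeEval); the function and parameter names are illustrative.

import random

def corrupt_advice(ranked_keywords, vocabulary, n_replace=3, seed=0):
    """Replace the top n_replace keywords in the ranking with random vocabulary words."""
    rng = random.Random(seed)
    corrupted = list(ranked_keywords)
    candidates = [w for w in vocabulary if w not in corrupted]
    for i in range(min(n_replace, len(corrupted))):
        corrupted[i] = rng.choice(candidates)   # overwrite one of the most useful keywords
    return corrupted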
Empirical Evaluation: Robustness to Bad Advice (Attachment Prediction plots)
Empirical Evaluation: Robustness to Bad Advice (Importance Prediction plots)
Empirical Evaluation: Prediction Utility
• We want to evaluate the utility of the system for the user.
• We use a new metric called the Critical Cost Ratio (CCR).
• Intuition: CCR measures how high the cost of forgetting must be, relative to the cost of interruption, for the system to be useful; hence, if CCR is low, the system is useful more often.
• For example, if CCR = 10, then the cost of forgetting must be more than 10 times the cost of interruption for a net benefit.
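The slide gives only the intuition; one way to make CCR precise, under the assumption that the system issues a reminder whenever it predicts the user forgot, is an expected-cost comparison. Let $C_f$ be the cost of an unreminded forget, $C_i$ the cost of an interruption, and let $TP$, $FP$, $FN$ be the classifier's true positives, false positives, and false negatives on forgotten emails. Without the system every forget costs $C_f$; with it, reminders cost $C_i(TP + FP)$ and misses cost $C_f \cdot FN$. The system yields a net benefit when

\[
C_f (TP + FN) > C_i (TP + FP) + C_f \, FN
\;\iff\;
\frac{C_f}{C_i} > \frac{TP + FP}{TP},
\]

so under this reading $CCR = (TP + FP)/TP$: the ratio that the cost of forgetting must exceed before the reminders pay for themselves.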
Empirical Evaluation: Prediction Utility
Attachment Prediction: at training-set size 256, the cost of forgetting must be at least 5 times the cost of interruption to gain a net benefit from the system.
Empirical Evaluation: Prediction Utility (Importance Prediction plot)
Lessons Learned
• User interfaces should support rich instrumentation, automation, and intervention.
• User interfaces should come with models of their behavior.
• User advice is helpful but not critical.
• Self-tuning learning algorithms are critical for success.
Beyond UIL: System-Initiated Learning
• CALO should notice when it could help the user by formulating and solving new learning tasks.
• Additional requirements:
  • Knowledge of the user’s goals, costs, and failure modes (e.g., forgetting, over-committing, typos)
  • Knowledge of what is likely to be learnable and what is not
  • Knowledge of how to formulate learning problems (classification, prediction, anomaly detection, etc.)