Recognizing Authority in Dialogue with an Integer Linear Programming Constrained Model Elijah Mayfield Computational Models of Discourse February 9, 2011
Outline • Goal of Negotiation Framework • Comparison to other NLP tasks • Our coding scheme for Negotiation • Computational modeling • Results and Conclusion
Goal • How can we measure how speakers position themselves as information givers or receivers in a discourse? • Several related questions: • Initiative/Control • Speaker Certainty • Dialogue Acts
Initiative and Control • Tightly related concepts from turn-taking research • Conveys who is being addressed and who is starting discourse segments • Does not account for authority over content, just over discourse structure
Speaker Certainty • Measures a speaker’s confidence in what they are talking about • Captures a speaker’s self-assessment of their knowledge and authority over content • Does not model interaction between speakers
Dialogue Acts • Sorts utterances into multiple categories based on discourse function • Covers concepts from both the content of the utterance and discourse structure • Overly general and difficult to separate into high/low authority tags
The Negotiation Framework • Labels moves in dialogue based on: • Authority (primary vs. secondary) • Focus (action vs. knowledge) • Interactions over time (delays and followups) • We must maintain as much insight as possible from Negotiation while making these analyses fully automatic.
The Negotiation Framework • In the original framework, lines of dialogue can be marked as:
The Negotiation Framework • With these codes, dialogue can be examined at a very fine-grained level…
The Negotiation Framework • But these codes are always applied by the researcher’s intuition. • Many interpretations exist, depending on the context and the researcher’s goals. • Quantitative measures of reproducibility between analysts are not highly valued.
Computationalizing Negotiation • We developed a consistent coding manual for a pared-down Negotiation. • Consulted with sociocultural researchers, education researchers, sociolinguists, computational linguists, computer scientists, interaction analysts, learning scientists, etc. • Also consulted with James Martin, the researcher most associated with this framework.
Computationalizing Negotiation • Our system has six codes:
Computationalizing Negotiation • These codes are more complex than “equivalent” surface structures such as statement/question/command:
Computationalizing Negotiation • Our coding also has a notion of sequences in discourse.
Computationalizing Negotiation • Thus our simplified model goes from over twenty codes to six • In parallel is a binary “same-new” segmentation problem at each line. • Inter-rater reliability for coding this by hand reached kappa above 0.7.
Results from Manual Coding • We first checked to see whether our simplified coding scheme is useful. • Defined the Authoritativeness Ratio as: (K1 + A2) / (K1 + K2 + A1 + A2) • Looked for correlation with other factors.
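The ratio above can be sketched in a few lines. This is our own illustration (the function name and the use of plain label strings are assumptions, not from the talk): authoritative moves are K1 (giving knowledge) and A2 (giving directives), counted against all four core move types.

```python
from collections import Counter

def authoritativeness_ratio(labels):
    """Fraction of a speaker's core Negotiation moves that are
    authoritative: (K1 + A2) / (K1 + K2 + A1 + A2)."""
    counts = Counter(labels)
    core = counts["K1"] + counts["K2"] + counts["A1"] + counts["A2"]
    if core == 0:
        return 0.0  # speaker made no core moves (only o/ch lines)
    return (counts["K1"] + counts["A2"]) / core

# A speaker who mostly gives information and directives scores high:
print(authoritativeness_ratio(["K1", "K1", "A2", "K2", "o"]))  # 0.75
```

Note that o and ch moves are excluded from both numerator and denominator, so the ratio reflects only the four core codes.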
Results from Manual Coding • First test: Cyber-bullying • Corpus: 36 conversations, each between two sixth-grade students • 18 pairs of students, each with two conversations over two days. • Result: • Bullies are more authoritative than non-bullies. (p < .05) • Non-bullies become less authoritative over time. (p < .05)
Results from Manual Coding • Second Test: Collaborative Learning • 54 conversations, each between 2 sophomore Engineering undergraduates. • Results: • Authoritativeness is correlated with learning gains from tutoring (r2 = 0.41, p < .05) • Authoritativeness has a significant interaction with self-efficacy (r2 = 0.12, p < .01)
Results from Manual Coding • We have evidence that our coding scheme tells us something useful. • Now, can we automate it?
Computational Modeling • 20 dialogues coded from MapTask corpus
Computational Modeling • Baseline model: Bag-of-words SVM • Advanced model adds features: • Bigrams & Part-of-Speech Bigrams • Cosine similarity with previous utterance • Previous utterance label (on-line prediction) • Separate segmentation models for short (1-3 words) and long (4+ word) utterances
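A minimal sketch of the feature groups named above, using plain bag-of-words dictionaries (part-of-speech bigrams are omitted since they need a tagger; all names here are our own, not the paper's):

```python
import math
from collections import Counter

def cosine_sim(utt_a, utt_b):
    """Cosine similarity between two utterances as bag-of-words vectors."""
    a, b = Counter(utt_a.lower().split()), Counter(utt_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def features(utterance, prev_utterance, prev_label):
    """Assemble unigram, bigram, similarity, and previous-label features
    for one utterance, as a sparse feature dictionary."""
    tokens = utterance.lower().split()
    feats = {f"uni={t}": 1 for t in tokens}
    feats.update({f"bi={a}_{b}": 1 for a, b in zip(tokens, tokens[1:])})
    feats["cos_prev"] = cosine_sim(utterance, prev_utterance)
    feats[f"prev_label={prev_label}"] = 1  # on-line prediction feature
    return feats
```

A dictionary like this can be vectorized and fed to any linear classifier; the previous-label feature is what makes prediction on-line (left to right through the dialogue).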
Computational Modeling • At each line of dialogue, we must select a label from: {K1, K2, A1, A2, o, ch} • We can also build a segmentation model to select from {new, same} • But how does this segmentation affect the classification task?
Constraint-Based Approach • Remember that our coding has been segmented into sequences based on rules in the coding manual • We can impose these expectations on our model’s output through Integer Linear Programming.
Constraint-Based Approach • We now jointly optimize the assignment of labels and segmentation boundaries. • When the most likely label is overruled, the model must choose to: • Back off to most likely allowed label, or • Start a new sequence, based on segmentation classifier.
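The choice above can be illustrated with a greedy sketch. To be clear about assumptions: the actual system optimizes labels and segmentation jointly with ILP, whereas this toy version resolves one utterance at a time, and the score comparison is our own invention.

```python
def resolve(label_scores, seq_labels, new_seq_score, satisfies):
    """Greedy sketch of the backoff-vs-new-sequence choice.
    label_scores: classifier scores per candidate label.
    seq_labels:   labels already in the current sequence.
    new_seq_score: segmentation classifier's score for starting anew.
    satisfies:    tests whether a candidate label sequence is legal."""
    ranked = sorted(label_scores, key=label_scores.get, reverse=True)
    top = ranked[0]
    if satisfies(seq_labels + [top]):
        return top, False          # top label fits the current sequence
    allowed = [l for l in ranked[1:] if satisfies(seq_labels + [l])]
    # back off if the best allowed label beats starting a new sequence
    if allowed and label_scores[allowed[0]] >= new_seq_score * label_scores[top]:
        return allowed[0], False
    return top, True               # start a new sequence instead
```

The returned boolean plays the role of the segmentation decision: True means the overruled label is rescued by opening a fresh sequence.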
Constraint-Based Approach • We use a toolkit that allows us to define constraints as boolean statements. • These constraints define things that must be true in a correctly labeled sequence. • These correspond to rules defined in our human coding manual.
Constraint-Based Approach • Constraints: • In a sequence, a primary move cannot occur before a secondary move. • Key: uᵢ is the ith utterance in the dialogue; s is the sequence containing uᵢ; uᵢˡ is the label assigned to uᵢ; uᵢˢ is the speaker of uᵢ.
Constraint-Based Approach • Constraints: • In a sequence, action moves and knowledge moves cannot both occur.
Constraint-Based Approach • Constraints: • Non-contiguous primary moves cannot occur in the same sequence.
Constraint-Based Approach • Constraints: • Speakers cannot answer their own questions or follow their own commands.
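The four constraints above, written out as boolean checks over a candidate sequence. This is only their boolean form for illustration; the real system encodes them as linear (in)equalities for an ILP solver, and the treatment of gaps between primary moves here is our own simplification.

```python
PRIMARY, SECONDARY = {"K1", "A1"}, {"K2", "A2"}

def satisfies_constraints(sequence):
    """sequence: (speaker, label) pairs forming one Negotiation sequence.
    Returns True iff all four slide constraints hold."""
    prim = [i for i, (_, l) in enumerate(sequence) if l in PRIMARY]
    sec = [i for i, (_, l) in enumerate(sequence) if l in SECONDARY]
    labels = {l for _, l in sequence}
    # 1. A primary move cannot occur before a secondary move.
    if prim and sec and min(prim) < max(sec):
        return False
    # 2. Action moves and knowledge moves cannot both occur.
    if labels & {"K1", "K2"} and labels & {"A1", "A2"}:
        return False
    # 3. Non-contiguous primary moves cannot occur in the same sequence.
    if prim and prim != list(range(prim[0], prim[-1] + 1)):
        return False
    # 4. Speakers cannot answer their own questions / follow their own commands.
    if any(sequence[j][0] == sequence[i][0] for i in sec for j in prim if j > i):
        return False
    return True
```

For example, a K2 question from speaker A answered by a K1 from speaker B is legal, while A answering its own K2 with a K1 violates constraint 4.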
Experiments • We measure our performance using three metrics: • Accuracy – % of correctly predicted labels • Kappa – Accuracy improvement over chance agreement • Ratio Prediction r2 – How well our model predicts speaker Authoritativeness Ratio. • All results given are from 20-fold leave-one-conversation-out cross-validation
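The kappa metric above ("accuracy improvement over chance agreement") is Cohen's kappa, which can be sketched directly from its definition (the function name is ours):

```python
from collections import Counter

def cohens_kappa(pred, gold):
    """Kappa = (observed agreement - chance agreement) / (1 - chance)."""
    n = len(gold)
    p_o = sum(p == g for p, g in zip(pred, gold)) / n   # observed agreement
    cp, cg = Counter(pred), Counter(gold)
    p_e = sum(cp[l] * cg[l] for l in cg) / (n * n)      # chance agreement
    return 1.0 if p_e == 1.0 else (p_o - p_e) / (1 - p_e)
```

Perfect agreement gives kappa = 1, agreement no better than chance gives roughly 0, which is why a hand-coding kappa above 0.7 counts as reliable.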
Experiments • [Results tables not shown] • Accuracy improved, p < 0.009; correlation improved, p < 0.0003 • Accuracy improved, p < 0.0001; correlation improved, p < 0.0001 • Accuracy improved, p < 0.005; correlation improved, p < 0.0001
Error Analysis • Biggest source of error is o vs. not-o: is there any content at all in the utterance? • Accuracy among the four core codes is high once content is identified, though. • An A2-A1 exchange often looks identical to K1-o.
Conclusion • We’ve formulated the Negotiation framework in a reliable, reproducible way. • Machine learning models can reproduce this coding with high accuracy. • Local context and structure, enforced through ILP, help in this classification.