Analyzing Argumentative Discourse Units in Online Interactions. Debanjan Ghosh, Smaranda Muresan, Nina Wacholder, Mark Aakhus and Matthew Mitsui. First Workshop on Argumentation Mining, ACL, June 26, 2014.
Example discussion thread:
User1: "when we first tried the iPhone it felt natural immediately,"
User2: "That's very true. With the iPhone, the sweet goodness part of the UI is immediately apparent. After a minute or two, you're feeling empowered and comfortable."
User3: "I disagree that the iPhone just 'felt natural immediately'… in my opinion it feels restrictive and over simplified, sometimes to the point of frustration."
Argumentative Discourse Units (ADU; Peldszus and Stede, 2013): Segmentation, Segment Classification, Relation Identification
Annotation Challenges • A complex annotation scheme seems infeasible • The problem of high *cognitive load* (annotators have to read all the threads) • High complexity demands two or more annotators • Use of expert annotators for all tasks is costly
Our Approach: Two-tiered Annotation Scheme • Coarse-grained annotation • Expert annotators (EAs) • Annotate entire thread • Fine-grained annotation • Novice annotators (Turkers) • Annotate only text labeled by EAs
Coarse-grained Expert Annotation [Figure: a thread of Posts 1-4 with arrows linking each Callout to its Target in a prior post] Pragmatic Argumentation Theory (PAT; Van Eemeren et al., 1993) based annotation
ADUs: Callout and Target • A Callout is a subsequent action that selects all or some part of a prior action (i.e., Target) and comments on it in some way. • A Target is a part of a prior action that has been called out by a subsequent action.
Target (User1): "when we first tried the iPhone it felt natural immediately,"
Callout (User2): "That's very true. With the iPhone, the sweet goodness part of the UI is immediately apparent. After a minute or two, you're feeling empowered and comfortable."
Callout (User3): "I disagree that the iPhone just 'felt natural immediately'… in my opinion it feels restrictive and over simplified, sometimes to the point of frustration."
More on Expert Annotations and Corpus • Five annotators were free to choose any text segment to represent an ADU • Four blogs and their first one hundred comments each make up our argumentative corpus • Android (iPhone vs. Android phones) • iPad (usability of iPad as a tablet) • Twitter (use of Twitter as a micro-blog platform) • Job Layoffs (layoffs and outsourcing)
Inter-Annotator Agreement (IAA) for Expert Annotations • P/R/F1-based IAA (Wiebe et al., 2005) • exact match (EM) • overlap match (OM) • Krippendorff's α (Krippendorff, 2004)
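To make the span-based agreement measures concrete, here is a minimal sketch (not the authors' code) of P/R/F1-style IAA in the spirit of Wiebe et al. (2005): one annotator's spans are treated as "gold" and the other's as "predicted", under both the exact-match and overlap-match criteria. The character offsets below are hypothetical.

```python
# Sketch of span-based IAA: exact-match (EM) vs. overlap-match (OM) P/R/F1.

def spans_match(a, b, mode="exact"):
    """Return True if two (start, end) spans match under the given criterion."""
    if mode == "exact":
        return a == b
    # overlap match: the spans share at least one character
    return a[0] < b[1] and b[0] < a[1]

def prf(pred_spans, gold_spans, mode="exact"):
    """Precision/recall/F1 of one annotator's spans against the other's."""
    tp_pred = sum(any(spans_match(p, g, mode) for g in gold_spans) for p in pred_spans)
    tp_gold = sum(any(spans_match(g, p, mode) for p in pred_spans) for g in gold_spans)
    precision = tp_pred / len(pred_spans) if pred_spans else 0.0
    recall = tp_gold / len(gold_spans) if gold_spans else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical Callout spans selected by two expert annotators on one thread.
annotator_a = [(10, 55), (120, 180)]
annotator_b = [(12, 55), (300, 340)]
print("EM:", prf(annotator_a, annotator_b, mode="exact"))
print("OM:", prf(annotator_a, annotator_b, mode="overlap"))
```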
Issues • Different IAA metrics have different outcomes • It is difficult to infer from IAA which segments of the text are easier or harder to annotate
Our Solution: Hierarchical Clustering We use a hierarchical clustering technique to cluster ADUs that are variants of the same Callout • Clusters with 5 or 4 annotators show Callouts that are plausibly easier to identify • Clusters selected by only one or two annotators are harder to identify
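As an illustration of this step, the sketch below groups Callout spans pooled from several annotators by hierarchical clustering over a span-overlap distance. The distance function (1 minus the Jaccard overlap of the character spans), the average-linkage setting, and the cut threshold are assumptions made for illustration, not the paper's exact procedure.

```python
# Sketch: cluster near-identical Callout spans selected by different annotators.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def span_distance(a, b):
    """1 - Jaccard overlap of two (start, end) character spans."""
    inter = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return 1.0 - inter / union if union else 1.0

# Hypothetical Callout spans pooled from five annotators.
spans = [(10, 55), (12, 55), (10, 60), (300, 340), (305, 345)]
n = len(spans)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist[i, j] = dist[j, i] = span_distance(spans[i], spans[j])

# Average-linkage clustering; cut the dendrogram at a distance threshold.
Z = linkage(squareform(dist), method="average")
labels = fcluster(Z, t=0.5, criterion="distance")
print(labels)  # e.g. the first three spans form one cluster, the last two another
```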
Motivation for a finer-grained annotation • What is the nature of the relation between a Callout and a Target? • Can we identify finer-grained ADUs in a Callout?
Our Approach: Two-tiered Annotation Scheme • Coarse-grained annotation • Expert annotators (EAs) • Annotate entire thread • Fine-grained annotation • Novice annotators (Turkers) • Annotate only text labeled by EAs
Novice Annotation: Task 1 [Figure: Target (T) / Callout (CO) pairs labeled Agree/Disagree/Other] This is related to research on annotating agreement/disagreement (Misra and Walker, 2013; Andreas et al., 2012).
More on the Agree/Disagree Relation Label • For each Target/Callout pair we employed five Turkers • Fleiss' Kappa shows moderate agreement between the Turkers • 143 Agree / 153 Disagree / 50 Other data instances • We ran preliminary experiments for predicting the relation label (rule-based, BoW, lexical features…) • Best results (F1): 66.9% (Agree), 62.9% (Disagree)
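For reference, Fleiss' Kappa over the five Turkers' Agree/Disagree/Other labels can be computed as in the sketch below; the vote counts are invented for illustration and are not the paper's data.

```python
# Sketch: Fleiss' kappa from an items x categories table of rater counts.
import numpy as np

def fleiss_kappa(counts):
    """counts[i][j] = number of raters who gave item i the j-th label."""
    counts = np.asarray(counts, dtype=float)
    n_items, _ = counts.shape
    n_raters = counts.sum(axis=1)[0]  # assumes the same number of raters per item
    p_j = counts.sum(axis=0) / (n_items * n_raters)           # marginal label proportions
    p_i = (np.square(counts).sum(axis=1) - n_raters) / (n_raters * (n_raters - 1))
    p_bar, p_e = p_i.mean(), np.square(p_j).sum()
    return (p_bar - p_e) / (1 - p_e)

# Each row: one Target/Callout pair; columns: [Agree, Disagree, Other] votes from 5 Turkers.
ratings = [[4, 1, 0],
           [1, 4, 0],
           [3, 1, 1],
           [0, 5, 0]]
print(round(fleiss_kappa(ratings), 3))
```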
Novice Annotation: Task 2 [Figure: a Callout (CO) split into Stance (S) and Rationale (R), with a task-difficulty rating] Identifying Stance vs. Rationale and the task difficulty. This is related to the justification identification task (Biran and Rambow, 2011).
Stance vs. Rationale example:
User2 — Stance: "That's very true." Rationale: "With the iPhone, the sweet goodness part of the UI is immediately apparent. After a minute or two, you're feeling empowered and comfortable."
User3 — Stance: "I disagree that the iPhone just 'felt natural immediately'". Rationale: "…in my opinion it feels restrictive and over simplified, sometimes to the point of frustration."
Examples of Callout/Target pairs with difficulty level (majority voting)
Conclusion • We propose a two-tiered annotation scheme for argument annotation in online discussion forums • Expert annotators detect Callout/Target pairs, while crowdsourcing is employed to discover finer units such as Stance/Rationale • Our study also helps detect which text is easy or hard to annotate • Preliminary experiments to predict agreement/disagreement among ADUs
Future Work • Qualitative analysis of the Callout phenomenon to support finer-grained analysis • Study the different uses of ADUs in different situations • Annotate different domains (e.g., healthcare forums) and adjust our annotation scheme • Predictive modeling of the Stance/Rationale phenomenon
Example from the discussion thread [Figure: the User2 and User3 Callouts with their Stance and Rationale segments highlighted]
Predicting the Agree/Disagree Relation Label • Training data (143 Agree / 153 Disagree) • Salient features for the experiments • Baseline: rule-based (`agree', `disagree') • Mutual Information (MI): MI is used to select words to represent each category • LexFeat: lexical features based on sentiment lexicons (Hu and Liu, 2004), lexical overlaps, initial words of the Callouts… • 10-fold CV using SVM
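A minimal sketch of this kind of setup, assuming a plain bag-of-words representation and a linear SVM from scikit-learn: the example Callout texts and labels are invented, and the tiny toy data forces fewer folds than the 10-fold CV used in the paper. The MI-selected words and sentiment-lexicon features are omitted here.

```python
# Sketch: bag-of-words + linear SVM for the Agree/Disagree relation label.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import classification_report

# Invented Callout texts; the real corpus has 143 Agree / 153 Disagree pairs.
callouts = [
    "That's very true, the UI is immediately apparent.",
    "Exactly, I had the same experience with the touchscreen.",
    "Agreed, it felt natural from the first minute.",
    "I disagree that the iPhone just felt natural immediately.",
    "No, it feels restrictive and over simplified to me.",
    "That is not true at all, the keyboard is frustrating.",
]
labels = ["agree", "agree", "agree", "disagree", "disagree", "disagree"]

clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())

# The paper reports 10-fold CV; 3 folds are used here only because the toy data is tiny.
preds = cross_val_predict(clf, callouts, labels, cv=3)
print(classification_report(labels, preds))  # per-class precision/recall/F1
```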
Predicting the Agree/Disagree Relation Label (preliminary results) • Lexical features result in F1 scores between 60-70% for the Agree/Disagree relations • Ablation tests show that the initial words of the Callout are the strongest feature • The rule-based system shows very low recall (7%), which indicates that many Target-Callout relations are *implicit* • Limitation – lack of data (we are currently annotating more data…)
# of Clusters for each Corpus • Clusters with 5 or 4 annotators show Callouts that are plausibly easier to identify • Clusters selected by only one or two annotators are harder to identify
[Figure: a Target in User1's post linked to Callout1 (User2) and Callout2 (User3)]
Fine-Grained Novice Annotation [Figure: Target (T) / Callout (CO) pairs] • Relation identification, e.g., Agree/Disagree/Other • Finer-grained annotation, e.g., Stance & Rationale
Motivation and Challenges [Figure: a thread of Posts 1-4 illustrating Segmentation, Segment Classification, and Relation Identification] Argumentative Discourse Units (ADU; Peldszus and Stede, 2013)
Why do we propose a two-layer annotation? • A two-layer annotation schema • Expert Annotation • Five annotators who received extensive training for the task • Primary task includes selecting discourse units from users' posts (argumentative discourse units: ADUs) • Peldszus and Stede (2013) • Novice Annotation • Use of the Amazon Mechanical Turk (AMT) platform to detect the nature and role of the ADUs selected by the experts
Annotation Schema for Expert Annotators • Callout • A Callout is a subsequent action that selects all or some part of a prior action (i.e., Target) and comments on it in some way. • Target • A Target is a part of a prior action that has been called out by a subsequent action.
Motivation and Challenges • User-generated conversational data provides a wealth of naturally generated arguments • Argument mining of such online interactions, however, is still in its infancy…
Detail on Corpora • Four blog posts and their responses (the first 100 comments each) from Technorati between 2008-2010. • We selected blog postings on the general topic of technology, which contain many disputes and arguments. • Together they are denoted as the argumentative corpus.
Motivation and Challenges (cont.) • A detailed single annotation scheme seems infeasible • The problem of high *cognitive load* (e.g. annotators have to read all the threads) • Use of expert annotators for all tasks is costly • We propose a scalable and principled two-tier scheme to annotate corpora for arguments
Annotation Schema(s) • A two-layer annotation schema • Expert Annotation • Five annotators who received extensive training for the task • Primary tasks include a) segmentation, b) segment classification, and c) relation identification, i.e., selecting discourse units (argumentative discourse units: ADUs) from users' posts • Novice Annotation • Use of the Amazon Mechanical Turk (AMT) platform to detect the nature and role of the ADUs selected by the experts
Motivation and Challenges • Argument annotation includes three tasks (Peldszus and Stede, 2013) • Segmentation • Segment Classification • Relation Identification
Summary of the Annotation Schema(s) • First stage of annotation • Annotators: expert (trained) annotators • A coarse-grained annotation scheme inspired by Pragmatic Argumentation Theory (PAT; Van Eemeren et al., 1993) • Segment, label, and link Callout and Target • Second stage of annotation • Annotators: novice (crowd) annotators • A finer-grained annotation to detect Stance and Rationale of an argument
Expert Annotation (coarse-grained annotation) • Expert Annotators: five expert (trained) annotators detect two types of ADUs (Callout and Target) • Tasks: Segmentation, Labeling, Linking (Peldszus and Stede, 2013)
The Argumentative Corpus [Figure: the four blogs, numbered 1-4] Blogs and comments extracted from Technorati (2008-2010)
Novice Annotations: Identifying Stance and Rationale (Crowdsourcing) • Identify the task difficulty (very difficult … very easy) • Identify the text segments (Stance and Rationale)
Novice Annotations: Identifying the Relation between ADUs (Crowdsourcing)
More on Expert Annotations • Annotators were free to choose any text segment to represent an ADU • [Figure: Splitters vs. Lumpers]
Novice Annotation: Task 1 • Identifying the relation (agree/disagree/other) • This is related to annotation of agreement/disagreement (Misra and Walker, 2013; Andreas et al., 2012) and classification of stances (Somasundaran and Wiebe, 2010) in online forums.