Hierarchical Text Categorization and its Application to Bioinformatics
Stan Matwin and Svetlana Kiritchenko
joint work with Fazel Famili (NRC) and Richard Nock (Université Antilles-Guyane)
School of Information Technology and Engineering, University of Ottawa
Outline • What is hierarchical text categorization (HTC) • Functional gene annotation requires HTC • Ensemble-based learning and AdaBoost • Multi-class multi-label AdaBoost • Generalized local hierarchical learning method • New global hierarchical learning algorithm • New hierarchical evaluation measure • Application to Bioinformatics
Text categorization
• Given: textual documents dj ∈ D and predefined categories C = {c1, …, c|C|}
• Task: decide, for every pair ⟨dj, ci⟩ ∈ D × C, whether the assignment is True or False
[Figure: TC — a flat set of categories c1 … c7]
Hierarchical text categorization
• Hierarchy of categories: a binary relation ≤ ⊆ C × C that is reflexive, anti-symmetric, and transitive (a partial order on C)
[Figure: HTC — the categories c1 … c7 arranged in a tree]
Advantages of HTC • Additional, potentially valuable information • Relationships between categories • Flexibility • High levels: general topics • Low levels: more detail
Text classification and bioinformatics
• Clustering and classification of gene expression data
• DNA chip time series supply the performance data
• Gene function, process, etc. constitute the genetic knowledge, captured in the Gene Ontology (GO)
• The biomedical literature connects the two, providing domain knowledge
• Goal: validate the results obtained from the performance data
From data to knowledge via literature • Functional annotation of genes from biomedical literature
Other applications • Web directories • Digital libraries • Patent databases • Biological ontologies • Email folders
Boosting
• not a learning technique on its own, but a method that combines a family of "weak" learners (simple classifiers)
• based on the observation that multiple classifiers that disagree with one another can together be more accurate than any of their component classifiers
• if there are L classifiers, each with an error rate < 1/2, and the errors are independent, then the probability that the majority vote is wrong is the tail of the binomial distribution over more than L/2 erring hypotheses (a numeric sketch follows below)
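To make the binomial argument concrete, here is a minimal sketch that evaluates that tail probability; the choice of 21 classifiers at a 30% individual error rate is an illustrative assumption, not a figure from the talk.

```python
from math import comb

def majority_error(L: int, eps: float) -> float:
    """Probability that a majority vote of L independent classifiers,
    each with error rate eps, is wrong: the upper binomial tail
    P[more than L/2 classifiers err]."""
    return sum(comb(L, k) * eps**k * (1 - eps)**(L - k)
               for k in range(L // 2 + 1, L + 1))

# With 21 classifiers at 30% individual error, the vote errs far less often:
print(majority_error(21, 0.30))  # ~0.026
```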
Boosting – the very idea
• Train an ensemble of classifiers sequentially
• Each successive classifier focuses more on the training instances on which the previous one made a mistake
• The "focusing" is done through weighting of the training instances
• To classify a new instance, make the ensemble vote
Boosting - properties
• If each weak hypothesis ht is even slightly better than chance, boosting can drive the training error arbitrarily low
• No need for new examples, additional knowledge, etc.
• The original AdaBoost works on single-labeled data
AdaBoost.MH [Schapire and Singer, 1999]
• Reduce each multi-labeled example to binary ones: (di, Ci) → ((di, l), Ci[l]) for every label l ∈ C, where Ci[l] = +1 if l ∈ Ci and −1 otherwise
• Initialize the distribution P1(i, l) = 1/(mk)
• For t = 1, …, T:
  • Train the weak learner using distribution Pt
  • Get a weak hypothesis ht: D × C → ℝ
  • Update: Pt+1(i, l) = Pt(i, l) exp(−αt Ci[l] ht(di, l)) / Zt, where Zt normalizes Pt+1 to a distribution
• The final hypothesis: f(d, l) = Σt αt ht(d, l)
BoosTexter [Schapire and Singer, 2000]
• "Weak" learner: a decision stump that tests a single word w, with one prediction branch for documents where w occurs and another for documents where it doesn't (see the sketch below)
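The following is a minimal, self-contained sketch of this setup, assuming binary word-occurrence features and {−1, +1} stump votes chosen by weighted majority on each branch; the real BoosTexter uses real-valued confidence-rated predictions and the analytic choice of αt from Schapire and Singer (1999), so treat this as an illustration of the loop, not the authors' implementation.

```python
import numpy as np

def train_adaboost_mh(X, Y, T=50):
    """AdaBoost.MH with word-occurrence decision stumps.
    X: (m, n) binary term-occurrence matrix; Y: (m, k) labels in {-1, +1}.
    Returns a list of weak hypotheses (word index, votes (2, k), alpha)."""
    m, n = X.shape
    k = Y.shape[1]
    P = np.full((m, k), 1.0 / (m * k))           # P1(i, l) = 1/(mk)
    ensemble = []
    for _ in range(T):
        best_err, best = np.inf, None
        for j in range(n):                       # try a stump on each word
            b = X[:, j].astype(int)              # 0 = absent, 1 = present
            votes = np.empty((2, k))
            for v in (0, 1):
                # vote with the weighted-majority label on each branch
                corr = (P[b == v] * Y[b == v]).sum(axis=0)
                votes[v] = np.where(corr >= 0, 1.0, -1.0)
            err = P[votes[b] != Y].sum()         # weighted Hamming error
            if err < best_err:
                best_err, best = err, (j, votes)
        j, votes = best
        alpha = 0.5 * np.log((1 - best_err) / max(best_err, 1e-12))
        pred = votes[X[:, j].astype(int)]
        P = P * np.exp(-alpha * Y * pred)        # focus on mistakes
        P /= P.sum()                             # normalize (the Z_t step)
        ensemble.append((j, votes, alpha))
    return ensemble

def predict(ensemble, X):
    """f(d, l) = sum_t alpha_t h_t(d, l); assign label l when f > 0."""
    f = sum(a * votes[X[:, j].astype(int)] for j, votes, a in ensemble)
    return np.where(f > 0, 1, -1)
```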
Thresholds for AdaBoost
• AdaBoost often underestimates its confidences
• Three approaches to selecting better thresholds (one possible realization is sketched below):
  • a single threshold for all classes
  • individual thresholds for each class
  • separate thresholds for each subtree rooted in the children of a top node (for tree hierarchies only)
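As one illustration, a per-class threshold (the second option) can be tuned on held-out data; the maximize-F1 criterion and the interface below are assumptions for the sketch, since the slide doesn't specify how the thresholds are chosen.

```python
import numpy as np

def per_class_thresholds(scores, Y, grid=None):
    """Pick, for each class, the threshold on the ensemble score f(d, l)
    that maximizes F1 on held-out data.
    scores: (m, k) real-valued outputs; Y: (m, k) labels in {0, 1}."""
    if grid is None:
        grid = np.unique(scores)       # candidate thresholds from the data
    th = np.zeros(scores.shape[1])
    for l in range(scores.shape[1]):
        best_f1 = -1.0
        for t in grid:
            pred = scores[:, l] > t
            tp = (pred & (Y[:, l] == 1)).sum()
            fp = (pred & (Y[:, l] == 0)).sum()
            fn = (~pred & (Y[:, l] == 1)).sum()
            f1 = 2 * tp / max(2 * tp + fp + fn, 1)
            if f1 > best_f1:
                best_f1, th[l] = f1, t
    return th
```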
Hierarchical consistency
• if ⟨dj, ci⟩ is labeled True, then ⟨dj, c⟩ must be True for every ancestor c of ci (a sketch of enforcing this follows below)
[Figure: two labelings of the hierarchy c1 … c7, one consistent, one inconsistent]
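Enforcing this on a label set amounts to taking the ancestor closure; a minimal sketch, assuming the hierarchy is given as a child → parent map (a hypothetical encoding, not the authors'):

```python
def make_consistent(labels, parent):
    """Expand a label set so that every assigned category also carries
    all of its ancestors (the hierarchical-consistency requirement).
    parent: child -> parent map; top-level categories map to None."""
    closed = set()
    for c in labels:
        while c is not None and c not in closed:
            closed.add(c)
            c = parent.get(c)
    return closed

# e.g. with parent = {"c4": "c2", "c2": "c1", "c1": None}:
# make_consistent({"c4"}, parent) -> {"c1", "c2", "c4"}
```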
Hierarchical local approach
[Figure, animated across several slides: a local classifier at each internal node routes the document top-down through the hierarchy c1 … c9; following only the selected branches yields a consistent classification]
Generalized hierarchical local approach
• stop the classification at an intermediate level if none of the child categories seems relevant
• a category node can be assigned only after all of its parent nodes have been assigned (see the sketch below)
[Figure: a top-down pass through the hierarchy c1 … c9 that stops before reaching the leaves]
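A minimal sketch of this top-down pass, assuming a hypothetical interface where clf[c](doc) returns a relevance score in [0, 1] for category c and the hierarchy is a parent → children map:

```python
def classify_top_down(doc, children, clf, threshold=0.5):
    """Generalized local (top-down) classification: starting at the root,
    descend into a child category only if its local classifier is
    confident enough; stop at an intermediate node when no child
    seems relevant. The result is consistent by construction, since a
    node is assigned only after its parent."""
    assigned, frontier = set(), ["root"]     # "root" is an assumed key
    while frontier:
        node = frontier.pop()
        for child in children.get(node, []):
            if clf[child](doc) >= threshold:
                assigned.add(child)          # parent assigned before children
                frontier.append(child)       # keep descending this branch
    return assigned
```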
New global hierarchical approach
• Make the dataset consistent with the class hierarchy
  • add the ancestor category labels to every example
• Apply a regular learning algorithm
  • AdaBoost
• Make the prediction results consistent with the class hierarchy
  • for an inconsistent labeling, make a consistent decision based on the confidences of all ancestor classes (one possible reading is sketched below)
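The slide leaves the exact combination rule open; the sketch below is one plausible reading, assuming real-valued per-class confidences and predicting a category when the average confidence along its ancestor path clears a threshold, which yields a consistent label set by construction.

```python
def consistent_prediction(score, parent, threshold=0.0):
    """score: dict category -> ensemble confidence f(d, l);
    parent: child -> parent map (absent key = top-level category).
    Predict c iff the mean confidence over c and all of its ancestors
    clears the threshold; ancestors of any predicted category are
    always included, so the result is hierarchically consistent."""
    def path(c):
        out = []
        while c is not None:
            out.append(c)
            c = parent.get(c)
        return out
    predicted = set()
    for c in score:
        p = path(c)
        if sum(score[x] for x in p) / len(p) >= threshold:
            predicted.update(p)
    return predicted

# e.g. score = {"c2": 0.9, "c4": -0.2}, parent = {"c4": "c2"}:
# the path average for c4 is 0.35 >= 0, so {"c2", "c4"} is predicted.
```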
New global hierarchical approach
• Hierarchical (shared) attributes:
  • sports: team, game, winner, etc.
  • hockey: NHL, Senators, goalkeeper, etc.
  • football: Super Bowl, Patriots, touchdown, etc.
Evaluation in TC
[Figure: the hierarchy c1 … c7 with the correct category and an incorrect predicted category highlighted]
Weaknesses of standard measures
• Three classifiers H1, H2, H3 make different errors in the hierarchy c1 … c7, yet the flat measures cannot tell them apart: P(H1) = P(H2) = P(H3), R(H1) = R(H2) = R(H3), F(H1) = F(H2) = F(H3)
• Ideally, a measure M should give M(H1) > M(H3) and M(H2) > M(H3)
[Figure: the predictions of H1, H2, H3 on the hierarchy]
Requirements for a hierarchical measure
1. give credit to partially correct classification
[Figure: two predictions H1 and H2 on an 11-node hierarchy c1 … c11; M(H1) > M(H2)]
Requirements for a hierarchical measure
2. punish distant errors more heavily:
• give a higher evaluation for correctly classifying one level down compared with staying at the parent node
[Figure: two predictions H1 and H2 on the 11-node hierarchy; M(H1) > M(H2)]
Requirements for a hierarchical measure
2. punish distant errors more heavily:
• give a lower evaluation for incorrectly classifying one level down compared with staying at the parent node
[Figure: two predictions H1 and H2 on the 11-node hierarchy; M(H1) > M(H2)]
Requirements for a hierarchical measure
3. punish errors at higher levels of the hierarchy more heavily
[Figure: two predictions H1 and H2 on the 11-node hierarchy; M(H1) > M(H2)]
Advantages of the new measure
• Simple and straightforward to calculate
• Based solely on the given hierarchy (no parameters to tune)
• Satisfies all three requirements
• Highly discriminating
• Allows trading off classification precision against classification depth
Our new hierarchical measure
• Extend both the correct and the predicted category sets with all of their ancestors (excluding the root), then compute the usual precision and recall on the extended sets:
hP = |C′ ∩ P′| / |P′|, hR = |C′ ∩ P′| / |C′|, where C′ and P′ are the ancestor-extended correct and predicted sets
[Figure: the hierarchy c1 … c7 with the correct category and its ancestors highlighted]
Our new hierarchical measure – examples
H1: correct {c4} → {c2, c4}; predicted {c2} → {c2} ⇒ hP = 1, hR = 1/2
H2: correct {c4} → {c2, c4}; predicted {c5} → {c2, c5} ⇒ hP = 1/2, hR = 1/2
H3: correct {c4} → {c2, c4}; predicted {c7} → {c3, c7} ⇒ hP = 0, hR = 0
(a small implementation follows below)
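A minimal sketch of the measure, assuming the hierarchy is encoded as a child → parent map with top-level categories simply absent from it (so the root is excluded automatically):

```python
def h_f_measure(correct, predicted, parent, beta=1.0):
    """Hierarchical precision/recall/F: extend both label sets with all
    ancestors, then compute ordinary set precision and recall."""
    def closure(labels):
        out = set()
        for c in labels:
            while c is not None and c not in out:
                out.add(c)
                c = parent.get(c)
        return out
    C, P = closure(correct), closure(predicted)
    hp = len(C & P) / len(P) if P else 0.0
    hr = len(C & P) / len(C) if C else 0.0
    if hp + hr == 0:
        return hp, hr, 0.0
    hf = (1 + beta**2) * hp * hr / (beta**2 * hp + hr)
    return hp, hr, hf

# H2 from the slide, with parent = {"c4": "c2", "c5": "c2"}:
# h_f_measure({"c4"}, {"c5"}, {"c4": "c2", "c5": "c2"}) -> (0.5, 0.5, 0.5)
```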
Measure consistency
• Definition [Huang & Ling, 2005]: for measures f, g on a domain Ψ, let
R = {(a, b) | a, b ∈ Ψ, f(a) > f(b), g(a) > g(b)}
S = {(a, b) | a, b ∈ Ψ, f(a) > f(b), g(a) < g(b)}
f is statistically consistent with g if |R| > |S|
• Experiment:
  • 100 randomly chosen hierarchies
  • the new hierarchical F-measure and standard accuracy were consistent on 85% of random classifiers (|R| > 5|S|)
Measure discriminancy
• Definition [Huang & Ling, 2005]: for measures f, g on a domain Ψ, let
P = {(a, b) | a, b ∈ Ψ, f(a) > f(b), g(a) = g(b)}
Q = {(a, b) | a, b ∈ Ψ, f(a) = f(b), g(a) > g(b)}
f is statistically more discriminating than g if |P| > |Q|
• Example: the three classifiers H1, H2, H3 above share a single flat accuracy value, yet receive three different hierarchical values
[Figure: the predictions H1, H2, H3 on the hierarchy c1 … c7]
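These pair counts are easy to estimate empirically; here is a small sketch that tallies R, S, P, and Q given each measure's values on the same sample of classifiers (the function name and interface are illustrative):

```python
from itertools import combinations

def compare_measures(f_vals, g_vals):
    """Count the pairs behind Huang & Ling's definitions: R/S for
    consistency and P/Q for discriminancy, given the values of two
    measures f and g on the same classifiers."""
    R = S = P = Q = 0
    for a, b in combinations(range(len(f_vals)), 2):
        df = f_vals[a] - f_vals[b]
        dg = g_vals[a] - g_vals[b]
        if df < 0:                       # orient each pair so f(a) >= f(b)
            df, dg = -df, -dg
        if df > 0 and dg > 0:
            R += 1                       # both measures agree on the order
        elif df > 0 and dg < 0:
            S += 1                       # the measures disagree
        elif df > 0 and dg == 0:
            P += 1                       # f discriminates where g ties
        elif df == 0 and dg != 0:
            Q += 1                       # g discriminates where f ties
    return R, S, P, Q
```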
Results: Hierarchical vs. Flat — synthetic data (hierarchical attributes) [results chart]
Results: Hierarchical vs. Flat — synthetic data (no hierarchical attributes) [results chart]
Results: Hierarchical vs. Flat — real data [results chart]
Results: Hierarchical vs. Local — synthetic data (hierarchical attributes) [results chart]
Results: Hierarchical vs. Local — synthetic data (no hierarchical attributes) [results chart]
Results: Hierarchical vs. Local — real data [results chart]