Redundant Feature Elimination for Multi-Class Problems Annalisa Appice, Michelangelo Ceci Dipartimento di Informatica, Università degli Studi di Bari, Italy Simon Rawles, Peter Flach Department of Computer Science, University of Bristol, UK
Redundant feature reduction • REFER: an efficient, scalable, logic-based method for eliminating Boolean features that are redundant for multi-class classifier learning. • Why? Redundant features inflate the hypothesis space, can hurt predictive performance, and harm model comprehensibility. • Distinct from feature selection.
Overview of this talk • Redundant feature reduction • What is feature redundancy? • Doing multi-class reduction • Related approaches • Theoretical and experimental results • Summary • Current and future work
Example: redundancy of features • Each example has a fixed number of Boolean features • Each example carries one of several class labels (‘multi-class’)
Discriminating a against b True values in examples of class a make the feature better for distinguishing a from b in a classification rule.
Discriminating a against b False values in examples of class b make the feature better for distinguishing a from b in a rule.
Discriminating a against b • f2 covers f1, and f3 is useless, so f1 and f3 are redundant. • Negated features are not automatically considered.
More formally... • For discriminating class a examples from class b, f covers g if Ta(g) ⊆ Ta(f) and Fb(g) ⊆ Fb(f). A feature is redundant if another feature covers it. • In the example (a is the ‘positive’ class here): Ta(f2) = {e1, e2} and Ta(f1) = {e1}; Fb(f2) = {e4, e5} and Fb(f1) = {e5}. So f2 covers f1.
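As a minimal sketch of this coverage test, assuming examples are represented as dicts of Boolean feature values plus a 'class' label (the helper names are illustrative, not REFER's own):

```python
# Minimal sketch of the coverage test. Representation and names are
# assumptions for illustration, not taken from the REFER implementation.

def T(f, examples, cls):
    """Indices of class-cls examples on which feature f is true."""
    return {i for i, e in enumerate(examples) if e["class"] == cls and e[f]}

def F(f, examples, cls):
    """Indices of class-cls examples on which feature f is false."""
    return {i for i, e in enumerate(examples) if e["class"] == cls and not e[f]}

def covers(f, g, examples, a, b):
    """f covers g for discriminating class a (positive) from class b."""
    return (T(g, examples, a) <= T(f, examples, a)
            and F(g, examples, b) <= F(f, examples, b))
```

On the sets from the slide, covers('f2', 'f1', examples, 'a', 'b') holds, so f1 is redundant.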
Neighbourhoods of examples • A way to upgrade to multi-class data. • Each class is partitioned into subsets of similar examples. • REFER-N finds the non-redundant features between each neighbourhood pair in turn, building up the final list of non-redundant features. • Efficient, more reduction, logic-based.
Neighbourhood construction [figure: examples grouped step by step into five numbered neighbourhoods; groups of similar examples with the same class label]
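The slide does not spell out the construction procedure; below is one plausible greedy realisation, assuming Hamming distance over the Boolean features and a fixed similarity threshold (both are assumptions, not the paper's exact choices):

```python
def hamming(e1, e2, features):
    """Number of Boolean features on which two examples disagree."""
    return sum(e1[f] != e2[f] for f in features)

def build_neighbourhoods(examples, features, threshold=2):
    """Greedily partition each class into groups of similar examples."""
    neighbourhoods = []
    for cls in sorted({e["class"] for e in examples}):
        remaining = [e for e in examples if e["class"] == cls]
        while remaining:
            seed, *rest = remaining      # choice of starting point matters
            group, remaining = [seed], []
            for e in rest:
                if hamming(seed, e, features) <= threshold:
                    group.append(e)      # similar enough: join the seed
                else:
                    remaining.append(e)  # wait for a later seed
            neighbourhoods.append((cls, group))
    return neighbourhoods
```

The appendix includes a chart on how the choice of starting point affects the reduction, which is why the seed choice is flagged in the comment.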
Neighbourhood comparison [figure: pairwise comparison of the five neighbourhoods] Comparing all neighbourhoods of differing class.
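Putting the pieces together, a sketch of the pairwise reduction loop under the same representation (taking both directions of each pair and breaking ties by list order are simplifying assumptions; as the next slide notes, the actual method is a variant of REDUCE that prefers features already found non-redundant):

```python
from itertools import combinations

def non_redundant(features, pos, neg):
    """REDUCE-style filter on one neighbourhood pair: drop g if another
    feature covers it, breaking ties by list order so that exactly one
    representative of each group of equivalent features survives."""
    def T(f): return {i for i, e in enumerate(pos) if e[f]}
    def F(f): return {i for i, e in enumerate(neg) if not e[f]}
    kept = []
    for j, g in enumerate(features):
        covered = any(
            T(g) <= T(f) and F(g) <= F(f)                # f covers g ...
            and (i < j or (T(f), F(f)) != (T(g), F(g)))  # ... strictly, or f comes earlier
            for i, f in enumerate(features) if i != j)
        if not covered:
            kept.append(g)
    return kept

def refer_n(neighbourhoods, features):
    """Union of pairwise survivors over all neighbourhood pairs of
    differing class; neighbourhoods is a list of (class, examples)."""
    kept = set()
    for (c1, n1), (c2, n2) in combinations(neighbourhoods, 2):
        if c1 != c2:
            kept.update(non_redundant(features, n1, n2))
            kept.update(non_redundant(features, n2, n1))
    return kept
```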
Ancestry of REFER • REDUCE (Lavrač et al. 1999) • Feature reduction for propositionalised ILP datasets • Preserves learnability of a complete and consistent hypothesis • REFER uses a variant of REDUCE • Redundant features found between the examples in each neighbourhood pair • Prefers features already found non-redundant
Related multiclass filters • FOCUS for noise-free Boolean data (Almuallim & Dietterich 1991) • Exhaustive evaluation of all subsets • A time complexity of O(np) • SCRAP relevance filter (Raman 2003) • Also uses neighbourhood approach • No guarantee that selected features (still) discriminate among all classes.
Theoretical results • REFER preserves the learnability of a complete and consistent theory: if a complete and consistent rule could be built from the original feature set, one can still be built from the reduced set. • REFER is efficient: its time complexity is linear in the number of examples and quadratic in the number of features (i.e. on the order of n·p² for n examples and p features).
Experimental results • Mutagenesis data propositionalised with SINUS • Feature set greatly reduced (13118 → 44) • Accuracy still competitive (approx. 85%)
Experimental results • Thirteen UCI benchmark datasets • Compared with LVF, CFS and Relief on discrete/discretised data • Generally conservative in how many features it removes • Faster on 8 of the 13 datasets, very close on 3 • Competitive predictive accuracy with several classifiers
Experimental results • Reuters-21578: large-scale, high-dimensionality, sparse data • 16,582 preprocessed features were reduced to 1,450 • REFER parallelises well: it runs in parallel on subsets of the feature set and then once more on the combined survivors.
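A hypothetical sketch of that parallel scheme: reduce disjoint feature subsets independently, then run the reducer once more on the combined survivors (refer_reduce stands in for the full single-process reducer and is an assumed name):

```python
from concurrent.futures import ProcessPoolExecutor

def parallel_refer(features, examples, refer_reduce, n_chunks=4):
    """Run the reducer on feature subsets in parallel, then once more
    on the union of the survivors (mirrors the scheme described above)."""
    chunks = [features[i::n_chunks] for i in range(n_chunks)]
    with ProcessPoolExecutor() as pool:
        partial = list(pool.map(refer_reduce, chunks, [examples] * n_chunks))
    survivors = [f for chunk in partial for f in chunk]
    return refer_reduce(survivors, examples)  # final combining pass
```

This split-then-combine scheme is sound for coverage-based elimination because coverage is a pairwise, transitive relation: a feature dropped within a subset is covered by a feature that itself survives, and the final pass removes cross-subset redundancies.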
Summary • A method for eliminating redundant Boolean features for multi-class classification tasks. • Uses logical coverage of examples • Efficient and scalable • requiring less time than the three feature selection algorithms we used • Amenable to parallel execution
Current and future investigations • Interaction between feature selection and feature reduction • Benefits of combination • Noise handling using non-pure neighbourhoods (‘relaxed REFER’) • Overcoming sensitivity to noise • REFER for example reduction
Effect of choice of starting point [bar charts over the datasets Aud, Brid, Car, F1C, F1M, F3C, F3M, Mus, Nur, Post, Tic, Pim, Yea: number of reduced features (scale 0–120) and number of neighbourhoods constructed (scale 0–1000)]
Comparison of running times [chart of running times on the benchmark datasets] Machine spec: Pentium IV 1.4 GHz PC running Windows XP
REFER for propositionalisation

Setting                        M1        M2        M3        M4
Instances produced             1692      1692      1692      1692
Features produced              1016      2114      3986      13118
SINUS parameters (L, V, T)     3, 3, 20  3, 3, 20  3, 3, 20  4, 4, 20
SINUS background:
  inda and ind1                yes       yes       yes       yes
  bonds                        yes       yes       yes       yes
  atom element and type        yes       yes       yes       yes
  atom charge                  no        yes       yes       yes
  lumo and logp                no        yes       yes       yes
  2D molecular structures      yes       no        yes       yes
Neighbourhoods of examples [figure: a) R² analogy of neighbourhood construction; b) comparison between neighbourhood pairs]
Another simple example f2 is a useless feature: any feature can cover it.
Introducing negated features … but its negation is a perfectly non-redundant feature. REFER assumes that the user will provide negated features if the language for rules requires it.
Introducing negated features If all features are considered together, f2 is chosen ...
Introducing negated features … but REFER considers positive against positive and negative against negative only.