International Workshop on Similarity Search
A Similarity Evaluation Technique for Data Mining with Ensemble of Classifiers
Seppo Puuronen, Vagan Terziyan
1-2 September 1999, Florence (Italy)

Authors
Seppo Puuronen (sepi@jytko.jyu.fi), Department of Computer Science and Information Systems, University of Jyvaskyla, FINLAND
Vagan Terziyan (vagan@jytko.jyu.fi), Department of Artificial Intelligence, Kharkov State Technical University of Radioelectronics, UKRAINE
Contents • The Research Problem and Goal • Basic Concepts • External Similarity Evaluation • Evaluation of Classifiers' Competence • An Example • Internal Similarity Evaluation • Conclusions
The Research Problem During the past several years, in a variety of application domains, researchers in machine learning, computational learning theory, pattern recognition, and statistics have combined efforts to learn how to create and combine an ensemble of classifiers. The primary goal of combining several classifiers is to obtain a more accurate prediction than can be obtained from any single classifier alone.
Goal • The goal of this research is to develop a simple similarity evaluation technique to be used for classification problems based on an ensemble of classifiers • Classification here means finding an appropriate class among the available ones for a certain instance, based on the classifications produced by an ensemble of classifiers
Basic Concepts: Training Set (TS) • TS of an ensemble of classifiers is a quadruple <D,C,S,P> • D is the set of instances D1, D2, ..., Dn to be classified • C is the set of classes C1, C2, ..., Cm that are used to classify the instances • S is the set of classifiers S1, S2, ..., Sr, which select classes to classify the instances • P is the set of semantic predicates that define relationships between D, C, and S
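The quadruple can be sketched in code. Encoding the predicates P as a classifier x instance x class array of votes in {-1, 0, +1} is an assumption on our part, though it is consistent with the worked "Paper 1" example later in the talk (+1 = select, -1 = refuse, 0 = abstain):

```python
# Minimal sketch of the training set TS = <D, C, S, P>.
# The {-1, 0, +1} vote encoding of P is an assumption consistent
# with the worked example later in the deck.
D = ["Paper 1", "Paper 2", "Paper 3"]   # instances
C = ["C1", "C2", "C3", "C4", "C5"]      # classes (conference topics)
S = ["S1", "S2", "S3", "S4"]            # classifiers ("referees")

# P[k][i][j] = vote of classifier S[k] on "instance D[i] belongs to class C[j]"
P = [[[0] * len(C) for _ in D] for _ in S]
P[1][0][3] = 1    # S2 selects class C4 (Virtual Reality) for Paper 1
P[1][0][1] = -1   # S2 refuses class C2 (Analytical Technique) for Paper 1
```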
Problem 1: Deriving External Similarity Values
[Diagram: relations among instances, classes, and classifiers]
External Similarity Values External Similarity Values (ESV) are binary relations DC, SC, and SD between the elements of (sub)sets of D and C, S and C, and S and D. ESV are based on the total support among all the classifiers voting for (or refusing to vote for) the appropriate classification.
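One plausible reading of "total support among all the classifiers" (our assumption, not stated explicitly on this slide) is the sum of all classifiers' votes on an (instance, class) pair, which yields the DC relation directly:

```python
def dc(P, i, j):
    """External similarity DC(Di, Cj): summed votes (+1/-1/0) of all
    classifiers on assigning class Cj to instance Di.
    The additive reading of "total support" is an assumption."""
    return sum(P[k][i][j] for k in range(len(P)))

# Tiny illustration: two classifiers vote for, one refuses, one abstains.
P = [[[1]], [[1]], [[-1]], [[0]]]   # 4 classifiers, 1 instance, 1 class
print(dc(P, 0, 0))  # -> 1
```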
Problem 2: Deriving Internal Similarity Values
[Diagram: relations among instances, classes, and classifiers]
Internal Similarity Values Internal Similarity Values (ISV) are binary relations between two subsets of D, two subsets of C, and two subsets of S. ISV are based on the total support among all the classifiers voting for (or refusing to vote for) the appropriate connection.
Why Do We Need Similarity Values (or a Distance Measure)? • Distance between instances is used by agents to recognize the nearest neighbors of any classified instance • Distance between classes is necessary to define the misclassification error during the learning phase • Distance between classifiers is useful to evaluate the weights of all classifiers so that they can be integrated by weighted voting
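The last bullet can be sketched concretely. This is an illustrative weighted-voting combiner under our own assumptions, not the paper's exact integration scheme; the weights would come from the competence evaluation introduced below:

```python
def weighted_vote(votes, weights):
    """Combine classifier votes (+1/-1/0) on one (instance, class) pair
    by weighted voting. Illustrative sketch: the weights are assumed to
    come from a competence evaluation such as the SC/SD relations."""
    return sum(w * v for w, v in zip(weights, votes))

# One competent classifier's vote outweighs two weaker dissenters.
print(weighted_vote([1, -1, -1], [3, 1, 1]))  # -> 1
```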
Deriving External Relation DC: How Well a Class Fits the Instance
[Diagram: relations among instances, classes, and classifiers]
Deriving External Relation SC: Measures Classifiers' Competence in the Area of Classes • The value of the relation (Sk, Cj) in a way represents the total support that the classifier Sk obtains by selecting (or refusing to select) the class Cj to classify all the instances.
Example of SC Relation
[Diagram: relations among instances, classes, and classifiers]
Deriving External Relation SD: Measures "Competence" of Classifiers in the Area of Instances • The value of the relation (Sk, Di) represents the total support that the classifier Sk receives by selecting (or refusing to select) all the classes to classify the instance Di.
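A sketch of both competence relations follows. The exact support formulas are not preserved in this extraction, so we assume one plausible formalization: the support a classifier obtains for a vote is its agreement with the rest of the ensemble on the same (instance, class) pair, summed over instances for SC and over classes for SD:

```python
def sc(P, k, j):
    """SC(Sk, Cj): support Sk obtains for its votes on class Cj across
    all instances, measured as agreement with the other classifiers.
    This pairwise-agreement formalization is an assumption."""
    r, n = len(P), len(P[0])
    return sum(P[k][i][j] * sum(P[l][i][j] for l in range(r) if l != k)
               for i in range(n))

def sd(P, k, i):
    """SD(Sk, Di): support Sk obtains for its votes on instance Di
    across all classes (same agreement-based reading)."""
    r, m = len(P), len(P[0][0])
    return sum(P[k][i][j] * sum(P[l][i][j] for l in range(r) if l != k)
               for j in range(m))

# 2 classifiers, 1 instance, 2 classes: they agree on C1, disagree on C2.
P = [[[1, -1]], [[1, 1]]]
print(sc(P, 0, 0), sc(P, 0, 1), sd(P, 0, 0))  # -> 1 -1 0
```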
Example of SD Relation
[Diagram: relations among instances, classes, and classifiers]
Standardizing External Relations to the Interval [0,1] n is the number of instances, m is the number of classes, r is the number of classifiers
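The standardization formulas themselves did not survive this extraction. A linear min-max rescaling over each relation's theoretical range is one assumed reconstruction; for example, DC(Di, Cj) lies in [-r, r] because each of the r classifiers contributes a vote in {-1, 0, +1}:

```python
def standardize(value, lo, hi):
    """Linearly rescale a support value from its theoretical range
    [lo, hi] to [0, 1]. The slide's exact formulas are not preserved;
    this min-max rescaling is an assumed reconstruction."""
    return (value - lo) / (hi - lo)

r = 4                          # number of classifiers
print(standardize(2, -r, r))   # DC value 2 in [-4, 4] -> 0.75
```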
Competence of a Classifier
[Diagram: a classifier's competence spans the instance area (conceptual pattern of features Di) and the area of classes (conceptual pattern of class definition Cj)]
Classifier's Evaluation: Competence Quality in an Instance Area - a measure of the "classification abilities" of a classifier relative to instances, from the support point of view
Classifier's Evaluation: Competence Quality in the Area of Classes - a measure of the "classification abilities" of a classifier in the correct use of classes, from the support point of view
Quality Balance Theorem The evaluation of a classifier's competence (ranking, weighting, quality evaluation) does not depend on the competence area, whether the "real world of instances" or the "conceptual world of classes", because both competence values are always equal
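Under the agreement-based reading of support sketched earlier (our assumption, not the paper's proof), the theorem is easy to see numerically: totalling SC over all classes and totalling SD over all instances both reduce to the same double sum over (instance, class) pairs, so they always coincide:

```python
import random

random.seed(0)
r, n, m = 4, 3, 5   # classifiers, instances, classes
P = [[[random.choice([-1, 0, 1]) for _ in range(m)] for _ in range(n)]
     for _ in range(r)]

def support(P, k, i, j):
    # Agreement of Sk's vote with the rest of the ensemble (assumed reading).
    return P[k][i][j] * sum(P[l][i][j] for l in range(r) if l != k)

for k in range(r):
    over_classes   = sum(support(P, k, i, j) for j in range(m) for i in range(n))
    over_instances = sum(support(P, k, i, j) for i in range(n) for j in range(m))
    assert over_classes == over_instances   # the two competence totals agree
print("competence totals agree for every classifier")
```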
Proof ... ...
An Example • Let us suppose that four classifiers have to classify three papers submitted to a conference with five conference topics • The classifiers should define their selection of appropriate conference topic for every paper • The final goal is to obtain a cooperative result of all the classifiers concerning the “paper - topic” relation
C (classes) Set in the Example

Class (conference topic)      Notation
AI and Intelligent Systems    C1
Analytical Technique          C2
Real-Time Systems             C3
Virtual Reality               C4
Formal Methods                C5
S (classifiers) Set in the Example

Classifier ("referee")    Notation
A.B.                      S1
H.R.                      S2
M.L.                      S3
R.S.                      S4
Selections Made for the Instance "Paper 1" (D1)

P(D,C,S)    C1     C2      C3     C4     C5
S1           1     -1      -1      0     -1
S2           0+    -1**     0++    1*    -1***
S3           0      0      -1      1      0
S4           1     -1       0      0      1

Classifier H.R. considers "Paper 1" to fit the topic Virtual Reality (*) and refuses to include it in Analytical Technique (**) or Formal Methods (***). H.R. neither chooses nor refuses the AI and Intelligent Systems (+) and Real-Time Systems (++) topics to classify "Paper 1".
Selections Made for the Instance "Paper 2" (D2)

P     C1    C2    C3    C4    C5
S1    -1     0    -1     0     1
S2     1    -1    -1     0     0
S3     1    -1     0     1     1
S4    -1     0     0     1     0
Selections Made for the Instance "Paper 3" (D3)

P     C1    C2    C3    C4    C5
S1     1     0     1    -1     0
S2     0     1     0    -1     1
S3    -1    -1     1    -1     1
S4    -1    -1     1    -1     1
Result of Cooperative Paper Classification Based on DC Relation
Results of Classifiers' Competence Evaluation (based on the SC and SD sets) … Proposals obtained from the classifier A.B. should be accepted if they concern the topics Real-Time Systems and Virtual Reality or the instances "Paper 1" and "Paper 3", and these proposals should be rejected if they concern AI and Intelligent Systems or "Paper 2". In some cases it seems possible to accept classification proposals from the classifier A.B. if they concern Analytical Technique and Formal Methods. All four classifiers are expected to give acceptable proposals concerning "Paper 3", and only a suggestion of the classifier M.L. can be accepted if it concerns "Paper 2" ...
Deriving Internal Similarity Values • Via one intermediate set • Via two intermediate sets
Internal Similarity for Classifiers: Instance-Based Similarity
[Diagram: classifiers related via the intermediate set of instances]
Internal Similarity for Classifiers: Class-Based Similarity
[Diagram: classifiers related via the intermediate set of classes]
Internal Similarity for Classifiers: Class-Instance-Based Similarity
[Diagram: classifiers related via both intermediate sets, classes and instances]
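The class-instance-based variant can be sketched as follows. Measuring two classifiers' similarity as the agreement of their votes over all (instance, class) pairs is our illustrative assumption, not necessarily the slides' exact formula:

```python
def classifier_similarity(P, k, l):
    """Class-instance-based internal similarity of classifiers Sk and Sl:
    agreement of their votes over all (instance, class) pairs.
    The vote dot product used here is an illustrative assumption."""
    return sum(P[k][i][j] * P[l][i][j]
               for i in range(len(P[k]))
               for j in range(len(P[k][i])))

# Identical voting records give maximal similarity; opposite records,
# maximally negative similarity.
P = [[[1, -1], [0, 1]],
     [[1, -1], [0, 1]],
     [[-1, 1], [0, -1]]]
print(classifier_similarity(P, 0, 1))  # -> 3
print(classifier_similarity(P, 0, 2))  # -> -3
```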
Conclusion • We discussed methods of deriving the total support of each binary similarity relation. This can be used, for example, to derive the most supported classification result and to evaluate the classifiers according to their competence • We also discussed relations between elements taken from the same set: instances, classes, or classifiers. This can be used, for example, to divide classifiers into groups of similar competence relative to the instance-class environment