10th International Conference and Workshop on Database and Expert Systems Applications (DEXA-99)
Mining Several Databases with an Ensemble of Classifiers
Seppo Puuronen, Vagan Terziyan, Alexander Logvinovsky
August 30 - September 3, 1999, Florence, Italy
Authors
Seppo Puuronen (sepi@jytko.jyu.fi), Department of Computer Science and Information Systems, University of Jyvaskyla, FINLAND
Vagan Terziyan (vagan@jytko.jyu.fi), Department of Artificial Intelligence, Kharkov State Technical University of Radioelectronics, UKRAINE
Alexander Logvinovsky (alexander.logvinovsky@usa.net), Department of Artificial Intelligence, Kharkov State Technical University of Radioelectronics, UKRAINE
Contents • The problem of "multiclassifier" - "multidatabase" mining; • Case "One Database - Many Classifiers"; • Dynamic integration of classifiers; • Case "One Classifier - Many Databases"; • Weighting databases; • Case "Many Databases - Many Classifiers"; • Context-based trend within the classifiers' predictions and decontextualization; • Conclusion
Dynamic Integration of Classifiers • The final classification is made by weighted voting of the classifiers in the ensemble; • The weights of the classifiers are recalculated for every new instance; • Weighting is based on the predicted errors of the classifiers in the neighborhood of the instance (a minimal sketch follows below)
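A minimal Python sketch of this scheme, under stated assumptions: sklearn-style classifiers with a predict method, an estimate_local_error helper (such as the competence-map sketch later in this section) returning a value in [0, 1], and the illustrative choice weight = 1 - error:

```python
def dynamic_weighted_vote(x, classifiers, estimate_local_error, classes):
    """Weighted voting with per-instance weights.

    For every new instance x, each classifier's weight is recomputed
    from its predicted error in the neighborhood of x.
    """
    votes = dict.fromkeys(classes, 0.0)
    for clf in classifiers:
        err = estimate_local_error(clf, x)   # predicted local error of clf at x
        weight = max(1.0 - err, 0.0)         # assumed mapping: smaller error, larger weight
        votes[clf.predict([x])[0]] += weight
    # the class with the largest accumulated weight wins the vote
    return max(votes, key=votes.get)
```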
Sliding Exam of a Classifier (Predictor, Interpolator) • Remove an instance y(x_i) from the training set; • Use the classifier to derive the prediction y′(x_i); • Evaluate the difference as the distance between the real and predicted values; • Repeat for every instance (a sketch follows below)
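A sketch of the sliding exam (a leave-one-out procedure), assuming numpy arrays and a train_fn callable that fits a fresh model on the reduced training set; both names are illustrative:

```python
import numpy as np

def sliding_exam_errors(train_fn, X, y):
    """For each instance: remove it, retrain, predict it, and record the
    distance between the real and the predicted value."""
    errors = np.empty(len(X))
    for i in range(len(X)):
        mask = np.arange(len(X)) != i              # training set without instance i
        model = train_fn(X[mask], y[mask])         # classifier derived without y(x_i)
        errors[i] = abs(y[i] - model.predict(X[i:i+1])[0])
    return errors                                  # error function sampled at every x_i
```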
Brief Review of Distance Functions According to D. Wilson and T. Martinez (1997)
PEBLS Distance Evaluation for Nominal Values (According to Cost S. and Salzberg S., 1993) The distance $d$ between two values $v_1$ and $v_2$ of a nominal attribute is: $d(v_1, v_2) = \sum_{i=1}^{k} \left| \frac{C_{1i}}{C_1} - \frac{C_{2i}}{C_2} \right|$, where $C_1$ and $C_2$ are the numbers of instances in the training set with the selected values $v_1$ and $v_2$, $C_{1i}$ and $C_{2i}$ are the numbers of instances from the i-th class where the values $v_1$ and $v_2$ were selected, and $k$ is the number of classes of instances
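A direct transcription of this metric into Python (assuming values and labels are plain lists describing one attribute of the training set, and that both v1 and v2 occur in it):

```python
from collections import Counter

def pebls_distance(v1, v2, values, labels):
    """Value difference metric of Cost & Salzberg (1993) for nominal values:
    d(v1, v2) = sum over the k classes of |C1i/C1 - C2i/C2|."""
    C1 = values.count(v1)                                        # instances with value v1
    C2 = values.count(v2)                                        # instances with value v2
    C1i = Counter(c for v, c in zip(values, labels) if v == v1)  # per-class counts for v1
    C2i = Counter(c for v, c in zip(values, labels) if v == v2)  # per-class counts for v2
    return sum(abs(C1i[i] / C1 - C2i[i] / C2) for i in set(labels))
```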
Interpolation of Error Function Based on Hypothesis of Compactness: $|x - x_i| < \delta\ (\delta \to 0) \Rightarrow |\varepsilon(x) - \varepsilon(x_i)| \to 0$, i.e. instances that are close in the input space have close values of the error function
Competence Map (figure: the interpolated absolute difference between actual and predicted values, and the weight function derived from it; an illustrative realization follows below)
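An illustrative way to realize the competence map, assuming Euclidean distance, inverse-distance weighting over a few nearest neighbors, and the sliding-exam errors from above; these are assumptions, not the paper's prescription:

```python
import numpy as np

def local_error(x, X_train, exam_errors, n_neighbors=5):
    """Interpolate the error function at x from the sliding-exam errors of
    nearby training instances (hypothesis of compactness: instances that
    are close in the input space have close error values)."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(dists)[:n_neighbors]
    w = 1.0 / (dists[nearest] + 1e-9)          # closer neighbors count more
    return float(np.average(exam_errors[nearest], weights=w))

def competence_weight(x, X_train, exam_errors):
    """Weight function of the competence map: high where the error is low."""
    return 1.0 - local_error(x, X_train, exam_errors)
```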
Integration of Databases • The final classification of an instance is obtained by weighted voting of the predictions made by the classifier for every database separately; • Weighting is based on the integral of the classifier's error function across each database, as sketched below
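A sketch of this integration, with the integral of the error function approximated by its mean over the instances of each database and errors assumed to lie in [0, 1]; per_db_predict stands for the classifier's prediction function on each database:

```python
import numpy as np

def database_weights(mean_errors):
    """Weight of each database from the integral (approximated here by the
    mean over instances) of the classifier's error function on it."""
    w = 1.0 - np.asarray(mean_errors)          # assumed: low integral error, high weight
    return w / w.sum()                         # normalize so the weights sum to 1

def integrate_databases(x, per_db_predict, weights, classes):
    """Weighted voting of the predictions made for every database separately."""
    votes = dict.fromkeys(classes, 0.0)
    for predict, w in zip(per_db_predict, weights):
        votes[predict(x)] += w
    return max(votes, key=votes.get)
```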
Weighting Classifiers and Databases (formulas: prediction and weight of a database; prediction and weight of a classifier)
Solutions for MANY:MANY (figure: the numbered alternative schemes for combining classifier integration with database integration; a two-stage sketch follows below)
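One possible composition, a two-stage sketch that applies the ensemble within each database first and then weights the per-database decisions; predict(clf, db, x), clf_weight, and db_weight are hypothetical callables standing in for the two mechanisms above:

```python
def many_many_classify(x, databases, classifiers, predict, clf_weight, db_weight, classes):
    """A two-stage MANY:MANY scheme (one assumed variant, not the only one):
    dynamic integration of classifiers inside each database, then weighted
    voting of the per-database decisions."""
    votes = dict.fromkeys(classes, 0.0)
    for db in databases:
        inner = dict.fromkeys(classes, 0.0)
        for clf in classifiers:
            inner[predict(clf, db, x)] += clf_weight(clf, db, x)   # stage 1
        votes[max(inner, key=inner.get)] += db_weight(db)          # stage 2
    return max(votes, key=votes.get)
```

Swapping the stages (weighting databases per classifier first, then integrating the classifiers) gives the other successive variant mentioned in the conclusion.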
Decontextualization of Predictions • Sometimes the actual value cannot be predicted as a weighted mean of the individual predictions of the classifiers in the ensemble; • This means the actual value lies outside the area of the predictions; • It happens when the classifiers are affected by the same type of context with different strength; • The result is a trend among the predictions, from the less powerful context toward the more powerful one; • In this case the actual value can be obtained by "decontextualizing" the individual predictions
Neighbor Context Trend (figure): at the point x_i, y is the actual value ("ideal context"), y⁺(x_i) is the prediction in the (1,2,3) neighbor context ("better context"), and y⁻(x_i) is the prediction in the (1,2) neighbor context ("worse context"); the predictions form a trend toward the actual value
Main Decontextualization Formula
Notation: y⁻ is the prediction in the worse context, y⁺ the prediction in the better context, y′ the decontextualized prediction, and y the actual value, with prediction errors $\varepsilon^- = |y - y^-|$ and $\varepsilon^+ = |y - y^+|$, where $\varepsilon^+ < \varepsilon^-$ because the better context errs less. The error of the decontextualized prediction is $\varepsilon' = \frac{\varepsilon^- \cdot \varepsilon^+}{\varepsilon^- + \varepsilon^+}$, so that $\varepsilon' < \varepsilon^+$ and $\varepsilon' < \varepsilon^-$
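A tiny numerical check of the reconstructed formula (using the parallel-resistance reading suggested by the next slide):

```python
def decontextualized_error(err_worse, err_better):
    """eps' = eps- * eps+ / (eps- + eps+), which is smaller than either input,
    like the resultant of two parallel resistances."""
    return err_worse * err_better / (err_worse + err_better)

# e.g. eps- = 0.6, eps+ = 0.3  ->  eps' = 0.18 / 0.9 = 0.2 < 0.3
print(decontextualized_error(0.6, 0.3))
```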
Decontextualization • One level decontextualization • All subcontexts decontextualization • Decontextualized difference • New sample classification
Physical Interpretation of Decontextualization (figure: two resistances connected in parallel). The error of the worse-context prediction y⁻ plays the role of R1, the error of the better-context prediction y⁺ that of R2, and the error of the decontextualized prediction y′ that of the resultant R_res = R1·R2 / (R1 + R2), which is smaller than either. Uncertainty is like a "resistance" to precise prediction.
Conclusion • Dynamic integration of classifiers, based on locally adaptive weights of the classifiers, handles the case «One Database - Many Classifiers»; • Integration of databases, based on their integral weights with respect to classification accuracy, handles the case «One Classifier - Many Databases»; • Successive or parallel application of the two above algorithms yields a variety of solutions for the case «Many Classifiers - Many Databases»; • Decontextualization, as an alternative to weighted-voting integration of classifiers, handles the context of classification when the predictions exhibit a trend