Oblique Decision Trees Using Householder Reflection Chitraka Wickramarachchi, Dr. Blair Robertson, Dr. Marco Reale, Dr. Chris Price, Prof. Jennifer Brown
Outline of the Presentation • Introduction • Literature Review • Methodology • Results and Discussion
Introduction • Example: a bank wants to predict the potential status (default or not) of a new credit card customer. • For its existing customers, the bank has the following data. • Possible approach: generalized linear models with binomial errors. • The model becomes complex if the structure of the data is complex.
Decision Tree (DT) • A decision tree is a tree-structured classifier. [Figure: example tree. The root node tests Salary <= s; non-terminal nodes test TCA < tc and TLA < tl (tests based on features); terminal nodes carry the class labels D and ND.]
Partitions • Recursively partition the feature space into disjoint sub-regions until each sub-region is homogeneous with respect to a particular class. [Figure: example partition of the (X1, X2) feature space.]
Choosing the best split • Candidate splits (e.g., X2 <= 0.6819, X1 <= 0.4026, X1 <= 0.5713) are scored by the impurity reduction they achieve, and the split with the largest reduction is chosen. [Figure: candidate splits annotated with their impurity-reduction scores.]
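The slides do not name the impurity measure behind these scores; below is a minimal sketch, assuming Gini impurity, of how candidate axis-parallel splits would be scored (all function names are illustrative, not from the presentation):

```python
import numpy as np

def gini(y):
    """Gini impurity of a label vector y."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def impurity_decrease(X, y, feature, threshold):
    """Decrease in Gini impurity for the axis-parallel split X[:, feature] <= threshold."""
    left = X[:, feature] <= threshold
    right = ~left
    if left.sum() == 0 or right.sum() == 0:
        return 0.0
    n = len(y)
    weighted = (left.sum() / n) * gini(y[left]) + (right.sum() / n) * gini(y[right])
    return gini(y) - weighted

def best_axis_parallel_split(X, y):
    """Score candidate splits at midpoints between sorted feature values."""
    best = (None, None, -np.inf)  # (feature index, threshold, impurity decrease)
    for j in range(X.shape[1]):
        values = np.unique(X[:, j])
        for t in (values[:-1] + values[1:]) / 2.0:
            d = impurity_decrease(X, y, j, t)
            if d > best[2]:
                best = (j, t, d)
    return best
```

Recursive partitioning, as on the previous slide, would then apply `best_axis_parallel_split` to each resulting sub-region until it is homogeneous.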
Types of DTs • Univariate DT (axis-parallel splits) • Multivariate DT: Linear DT (oblique splits) and Non-linear DT
Axis-parallel splits • Advantages: easy to implement; computational complexity is low; easy to interpret. • Disadvantage: when the true boundaries are not axis-parallel, it produces a complicated boundary structure.
Axis-parallel boundaries [Figure: axis-parallel decision boundaries in the (X1, X2) plane.]
Oblique splits • Advantage: simple boundary structure. [Figure: an oblique decision boundary in the (X1, X2) plane.]
Oblique splits • Disadvantages: implementation is challenging; computational complexity is high. • Therefore, a computationally less expensive oblique tree induction method would be desirable.
Literature Review • Oblique splits search for splits of the form $\sum_{i=1}^{p} a_i x_i \le c$, where $a_1, \dots, a_p$ are coefficients and $c$ is a threshold. • Breiman et al. (1984): CART-LC – starts with the best axis-parallel split, then perturbs each coefficient until the best split is found. • Limitations: can get trapped in local minima; no upper bound on the time spent at any node.
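For reference, the split form above is just a linear threshold test; a minimal sketch (illustrative names, not from CART-LC itself):

```python
import numpy as np

def oblique_split(X, a, c):
    """Boolean mask for the oblique split sum_i a_i * x_i <= c over the rows of X.

    `a` is the coefficient vector that CART-LC perturbs one entry at a time;
    an axis-parallel split is the special case where `a` has one nonzero entry.
    """
    return X @ a <= c
```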
Literature Review • Heath et al. (1993): Simulated Annealing Decision Trees (SADT) – first places a hyperplane in a canonical location, then perturbs each coefficient randomly. • The randomization tries to escape from local minima. • Limitation: the algorithm runs much slower than CART-LC.
Literature Review • Murthy et al. (1994): Oblique Classifier 1 (OC1) – starts with the best axis-parallel split and perturbs each coefficient; at a local minimum, it perturbs the hyperplane randomly. • Since 1994, many ODT induction methods have been developed based on evolutionary algorithms (EAs) and neural network concepts.
Proposed Methodology • Our approach is to: transform the data set so that it lies parallel to one of the feature axes; implement axis-parallel splits; back-transform them into the original space. • The transformation is done using a Householder reflection.
Householder Reflection • Let $x$ and $y$ be vectors with the same norm. Then there exists an orthogonal, symmetric matrix $P$ such that $Px = y$, where $P = I - 2uu^{T}$ and $u = (x - y)/\lVert x - y \rVert$.
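A minimal numpy sketch of the reflection just defined (`householder_matrix` is an illustrative name):

```python
import numpy as np

def householder_matrix(x, y):
    """Householder matrix P (orthogonal and symmetric) with P @ x = y.

    Assumes x and y have the same Euclidean norm, as in the definition above.
    """
    u = x - y
    norm = np.linalg.norm(u)
    if norm == 0:                      # x already equals y; the reflection is the identity
        return np.eye(len(x))
    u = u / norm
    return np.eye(len(x)) - 2.0 * np.outer(u, u)
```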
Householder Reflection • The orientation of a cluster can be represented by the dominant eigenvector of its variance-covariance matrix. [Figure: a cluster in the (X1, X2) plane with its dominant eigenvector.]
Householder Reflection [Figure: the same cluster after reflection, with its dominant eigenvector aligned with a feature axis.]
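Putting the two slides together, a hedged sketch of the transform: estimate the cluster orientation from the covariance matrix, reflect it onto the first feature axis, and split axis-parallel in the reflected space. How the cluster is selected at each node is a simplifying assumption here, and the function name is illustrative:

```python
import numpy as np

def reflect_dominant_axis(X):
    """Reflect the rows of X so that the dominant eigenvector of the
    variance-covariance matrix of X lands on the first feature axis.

    Returns the reflected data and the (orthogonal, symmetric) Householder
    matrix P; because P is its own inverse, multiplying by P again
    back-transforms points into the original space.
    """
    cov = np.cov(X, rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    v = eigvecs[:, -1]                 # dominant (unit-norm) eigenvector
    if v[0] > 0:
        v = -v                         # sign is arbitrary; avoid cancellation in v - e1
    e1 = np.zeros_like(v)
    e1[0] = 1.0
    u = (v - e1) / np.linalg.norm(v - e1)
    P = np.eye(len(v)) - 2.0 * np.outer(u, u)   # P @ v == e1
    return X @ P, P                    # P is symmetric, so each row maps as x -> P @ x

# An axis-parallel split found in the reflected space, e.g. X_ref[:, 0] <= t,
# corresponds to the oblique split P[0] @ x <= t in the original space.
```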
Cost-complexity pruning • Used to avoid over-fitting. [Figure: accuracy versus number of terminal nodes.]
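The presentation does not give the pruning details. As an illustration, scikit-learn's (axis-parallel) CART exposes the same cost-complexity idea, and the pruning strength alpha can be picked by cross-validation; the proposed method would prune its oblique trees analogously:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Candidate pruning strengths from the cost-complexity pruning path,
# then keep the alpha with the best cross-validated accuracy.
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X, y)
best_alpha, best_score = 0.0, -1.0
for alpha in path.ccp_alphas:
    alpha = max(alpha, 0.0)  # guard against tiny negative values from round-off
    tree = DecisionTreeClassifier(ccp_alpha=alpha, random_state=0)
    score = cross_val_score(tree, X, y, cv=5).mean()
    if score > best_score:
        best_alpha, best_score = alpha, score
print(f"best ccp_alpha = {best_alpha:.5f}, CV accuracy = {best_score:.3f}")
```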
Results and Discussion • Data sets: UCI Machine Learning Repository. • Accuracy estimates were obtained from ten repetitions of 5-fold cross-validation.
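For illustration of the evaluation protocol only, with scikit-learn's axis-parallel CART standing in for the proposed oblique trees and one UCI-style data set as a stand-in:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Ten repetitions of 5-fold cross-validation, as in the evaluation above.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=cv)
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```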
Results and Discussion • Results: the proposed method achieved high accuracy while remaining computationally inexpensive.