Decision Tree

Outlook
  Sunny → Humidity
    High → No
    Normal → Yes
  Overcast → Yes
  Rain → Wind
    Strong → No
    Weak → Yes
An Example

The same kind of tree with class labels P and N:
Outlook
  sunny → humidity
    high → N
    normal → P
  overcast → P
  rain → windy
    true → N
    false → P
Example of a Decision Tree

Training data: Refund (categorical), Marital Status (categorical), Taxable Income (continuous), and a class label.

Model (decision tree with splitting attributes):
Refund
  Yes → NO
  No → MarSt
    Married → NO
    Single, Divorced → TaxInc
      < 80K → NO
      > 80K → YES
Another Example of a Decision Tree

The same training data, split first on marital status:
MarSt
  Married → NO
  Single, Divorced → Refund
    Yes → NO
    No → TaxInc
      < 80K → NO
      > 80K → YES

There could be more than one tree that fits the same data!
Classification by Decision Tree Induction
• Decision tree
  • A flow-chart-like tree structure
  • Each internal node denotes a test on an attribute
  • Each branch represents a value of the tested attribute
  • Leaf nodes represent class labels or class distributions
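As a concrete illustration of this structure, the PlayTennis tree above can be encoded as a nested dictionary, with inner nodes mapping attribute values to subtrees and leaves holding class labels. This is a minimal sketch; the representation and names are an assumption for illustration, not part of the original slides.

```python
# Assumed representation: inner nodes are (attribute, {value: subtree}) pairs,
# leaves are class labels.
play_tennis_tree = (
    "Outlook", {
        "Sunny":    ("Humidity", {"High": "No", "Normal": "Yes"}),
        "Overcast": "Yes",
        "Rain":     ("Wind", {"Strong": "No", "Weak": "Yes"}),
    },
)

def classify(tree, example):
    """Walk the tree, following the branch that matches each attribute value."""
    while isinstance(tree, tuple):            # still at an internal node
        attribute, branches = tree
        tree = branches[example[attribute]]   # descend into the matching branch
    return tree                               # a leaf: the predicted class label

print(classify(play_tennis_tree, {"Outlook": "Sunny", "Humidity": "Normal"}))  # -> "Yes"
```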
Decision trees

High income?
  yes → Criminal record?
    yes → NO
    no → YES
  no → NO
Constructing a decision tree, one step at a time

Training data (attributes a = address, c = criminal record, i = income, plus e, o, u; class Y or N):
+a, -c, +i, +e, +o, +u: Y
-a, +c, -i, +e, -o, -u: N
+a, -c, +i, -e, -o, -u: Y
-a, -c, +i, +e, -o, -u: Y
-a, +c, +i, -e, -o, -u: N
-a, -c, +i, -e, -o, +u: Y
+a, -c, -i, -e, +o, -u: N
+a, +c, +i, -e, +o, -u: N

[Figure: the tree is grown one split at a time: first on address? (yes/no), then on criminal? within each branch, and finally on income? in the branch that is still mixed.]
Address was maybe not the best attribute to start with…
Decision Tree Learning
• Building a decision tree (a Python sketch follows below):
  • First test all attributes and select the one that would function best as the root;
  • Break up the training set into subsets based on the branches of the root node;
  • Test the remaining attributes to see which ones fit best underneath the branches of the root node;
  • Continue this process for all other branches until
    • all examples of a subset are of one type, or
    • there are no examples left (return the majority classification of the parent), or
    • there are no more attributes left (the default value should be the majority classification).
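A minimal recursive sketch of this procedure, using the nested-dictionary tree representation assumed earlier and choosing the "best" attribute by information gain (defined on the next slides). The function and data names are illustrative assumptions, not the slides' own implementation.

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Entropy of a list of class labels."""
    counts = Counter(labels)
    total = len(labels)
    return -sum((n / total) * log2(n / total) for n in counts.values())

def majority(examples, target):
    return Counter(ex[target] for ex in examples).most_common(1)[0][0]

def best_attribute(examples, attributes, target):
    """Attribute with the highest information gain on these examples."""
    def gain(attr):
        values = {ex[attr] for ex in examples}
        remainder = sum(
            (len(subset) / len(examples)) * entropy([ex[target] for ex in subset])
            for subset in ([ex for ex in examples if ex[attr] == v] for v in values)
        )
        return entropy([ex[target] for ex in examples]) - remainder
    return max(attributes, key=gain)

def build_tree(examples, attributes, target, parent_majority=None):
    if not examples:                           # no examples left
        return parent_majority                 # -> majority class of the parent
    labels = [ex[target] for ex in examples]
    if len(set(labels)) == 1:                  # all examples of one type
        return labels[0]
    if not attributes:                         # no attributes left
        return majority(examples, target)
    attr = best_attribute(examples, attributes, target)
    branches = {}
    for value in {ex[attr] for ex in examples}:
        subset = [ex for ex in examples if ex[attr] == value]
        rest = [a for a in attributes if a != attr]
        branches[value] = build_tree(subset, rest, target, majority(examples, target))
    return (attr, branches)
```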
Decision Tree Learning
• Determining which attribute is best (Entropy & Gain)
• Entropy E(S) is the minimum number of bits needed, on average, to encode the classification (e.g. yes or no) of an arbitrary example in S:
  E(S) = Σ_{i=1..c} −p_i log₂ p_i
  where S is a set of training examples, c is the number of classes, and p_i is the proportion of the training set that is of class i.
• For our entropy equation, 0 log₂ 0 = 0.
• The information gain G(S, A), where A is an attribute:
  G(S, A) = E(S) − Σ_{v ∈ Values(A)} (|S_v| / |S|) · E(S_v)
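These two formulas can be applied directly to the class counts that appear on the ID3 slide below ([9+, 5-] overall; Outlook splits it into Sunny [2+, 3-], Overcast [4+, 0-], Rain [3+, 2-]). A small sketch of the arithmetic:

```python
from math import log2

def entropy(pos, neg):
    """E(S) for a two-class set with `pos` positive and `neg` negative examples."""
    total = pos + neg
    result = 0.0
    for count in (pos, neg):
        p = count / total
        if p > 0:                    # convention: 0 * log2(0) = 0
            result -= p * log2(p)
    return result

e_root = entropy(9, 5)                           # ≈ 0.940
gain_outlook = (e_root
                - (5/14) * entropy(2, 3)         # Sunny
                - (4/14) * entropy(4, 0)         # Overcast
                - (5/14) * entropy(3, 2))        # Rain
print(round(e_root, 3), round(gain_outlook, 3))  # 0.94, 0.246
```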
Information Gain

• Gain(S, A): expected reduction in entropy due to sorting S on attribute A
  Gain(S, A) = Entropy(S) − Σ_{v ∈ Values(A)} (|S_v| / |S|) · Entropy(S_v)
• Example: a set with 29 positive and 35 negative examples, [29+, 35-], can be split on A1 (True: [21+, 5-], False: [8+, 30-]) or on A2 (True: [18+, 33-], False: [11+, 2-]).
  Entropy([29+, 35-]) = −29/64 log₂(29/64) − 35/64 log₂(35/64) = 0.99
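The two candidate splits can be compared numerically. A small sketch of the calculation (the pairing of the subsets with A1 and A2 follows the standard version of this example and is an assumption about the original figure):

```python
from math import log2

def entropy(pos, neg):
    total = pos + neg
    return sum(-(n / total) * log2(n / total) for n in (pos, neg) if n > 0)

e_root = entropy(29, 35)                                       # ≈ 0.99

# A1: True -> [21+, 5-], False -> [8+, 30-]
gain_a1 = e_root - (26/64) * entropy(21, 5) - (38/64) * entropy(8, 30)

# A2: True -> [18+, 33-], False -> [11+, 2-]
gain_a2 = e_root - (51/64) * entropy(18, 33) - (13/64) * entropy(11, 2)

print(round(gain_a1, 2), round(gain_a2, 2))  # ≈ 0.27 and ≈ 0.12: A1 is the better split
```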
ID3 Algorithm

Root: [D1, D2, …, D14], [9+, 5-], split on Outlook:
• Sunny: S_sunny = [D1, D2, D8, D9, D11], [2+, 3-] → ?
• Overcast: [D3, D7, D12, D13], [4+, 0-] → Yes
• Rain: [D4, D5, D6, D10, D14], [3+, 2-] → ?

Gain(S_sunny, Humidity) = 0.970 − (3/5)·0.0 − (2/5)·0.0 = 0.970
Gain(S_sunny, Temp.)    = 0.970 − (2/5)·0.0 − (2/5)·1.0 − (1/5)·0.0 = 0.570
Gain(S_sunny, Wind)     = 0.970 − (2/5)·1.0 − (3/5)·0.918 = 0.019
ID3 Algorithm: the resulting tree

Outlook
  Sunny → Humidity
    High → No  [D1, D2, D8]
    Normal → Yes  [D9, D11]
  Overcast → Yes  [D3, D7, D12, D13]
  Rain → Wind
    Strong → No  [D6, D14]
    Weak → Yes  [D4, D5, D10]
Converting a Tree to Rules

Using the PlayTennis tree above:
R1: If (Outlook = Sunny) ∧ (Humidity = High) Then PlayTennis = No
R2: If (Outlook = Sunny) ∧ (Humidity = Normal) Then PlayTennis = Yes
R3: If (Outlook = Overcast) Then PlayTennis = Yes
R4: If (Outlook = Rain) ∧ (Wind = Strong) Then PlayTennis = No
R5: If (Outlook = Rain) ∧ (Wind = Weak) Then PlayTennis = Yes
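One way to mechanize this conversion, sketched against the nested-dictionary representation assumed earlier. This is an illustration, not the slides' own implementation:

```python
def tree_to_rules(tree, conditions=()):
    """Trace every root-to-leaf path and emit one 'If ... Then ...' rule per leaf."""
    if not isinstance(tree, tuple):               # a leaf: emit the accumulated rule
        premise = " ∧ ".join(f"({a} = {v})" for a, v in conditions) or "(true)"
        return [f"If {premise} Then PlayTennis = {tree}"]
    attribute, branches = tree
    rules = []
    for value, subtree in branches.items():
        rules += tree_to_rules(subtree, conditions + ((attribute, value),))
    return rules

play_tennis_tree = (
    "Outlook", {
        "Sunny":    ("Humidity", {"High": "No", "Normal": "Yes"}),
        "Overcast": "Yes",
        "Rain":     ("Wind", {"Strong": "No", "Weak": "Yes"}),
    },
)

for rule in tree_to_rules(play_tennis_tree):
    print(rule)
```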
Decision tree classifiers
• Require no prior knowledge of the data distribution and work well on noisy data.
• Have been applied to:
  • classifying medical patients by disease,
  • classifying equipment malfunctions by cause,
  • classifying loan applicants by likelihood of repayment.
Decision trees
• The internal nodes are simple decision rules on one or more attributes, and the leaf nodes are predicted class labels.
  [Figure: a tree whose internal nodes test Salary < 1M, Prof = teaching, and Age < 30, with leaves labelled Good or Bad.]
Decision Tree Analysis
• For choosing the best course of action when future outcomes are uncertain.
Classification
• Data: the data has k attributes A1, …, Ak. Each tuple (case or example) is described by values of the attributes and a class label.
• Goal: to learn rules or to build a model that can be used to predict the classes of new (or future, or test) cases.
• The data used for building the model is called the training data.
Data and its format
• Data
  • attribute-value pairs
  • with/without class
• Data type
  • continuous/discrete
  • nominal
• Data format
  • Flat
  • If not flat, what should we do?
Induction Algorithm
• Calculate the gain for each attribute and choose the attribute with the maximum gain as the node of the tree.
• After building that node, recompute the gain for the remaining attributes within each branch and again choose the maximum.
Decision Tree Induction
• Many algorithms:
  • Hunt's Algorithm (one of the earliest)
  • CART
  • ID3, C4.5
  • SLIQ, SPRINT
Decision Tree Induction Algorithm
• Create a root node for the tree.
• If all cases are positive, return a single-node tree with label +.
• If all cases are negative, return a single-node tree with label −.
• Otherwise:
  • Choose the best attribute for the root node.
  • For each possible value of that attribute:
    • Add a new tree branch for the value.
    • Let the cases for that branch be the subset of the data that have this value.
    • Recursively add a new subtree below the branch, until a leaf node is reached.
Tests for Choosing the Best Function
• Purity (diversity) measures:
  • Gini (population diversity)
  • Entropy (information gain)
  • Information gain ratio
  • Chi-square test
• We will only explore Gini in class.
Measures of Node Impurity • Gini Index • Entropy • Misclassification error
Alternative Splitting Criteria Based on INFO
• Entropy at a given node t:
  Entropy(t) = −Σ_j p(j | t) log₂ p(j | t)
  (NOTE: p(j | t) is the relative frequency of class j at node t.)
• Measures the homogeneity of a node:
  • Maximum (log₂ n_c, where n_c is the number of classes) when records are equally distributed among all classes, implying the least information.
  • Minimum (0.0) when all records belong to one class, implying the most information.
• Entropy-based computations are similar to the GINI index computations.
Impurity Measures
• Information entropy:
  Entropy(S) = −Σ_i p_i log₂ p_i
  • Zero when S consists of only one class; one when two classes are present in equal numbers.
• Other measures of impurity, e.g. the Gini index:
  Gini(S) = 1 − Σ_i p_i²
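A small sketch comparing the three node-impurity measures listed earlier (entropy, Gini index, misclassification error) on a vector of class proportions; purely illustrative:

```python
from math import log2

def entropy(proportions):
    return -sum(p * log2(p) for p in proportions if p > 0)

def gini(proportions):
    return 1 - sum(p * p for p in proportions)

def misclassification_error(proportions):
    return 1 - max(proportions)

for node in ([1.0, 0.0], [0.5, 0.5], [0.9, 0.1]):
    print(node, round(entropy(node), 3), round(gini(node), 3),
          round(misclassification_error(node), 3))
# A pure node scores 0 on all three measures; a 50/50 node scores the maximum
# (entropy 1.0, Gini 0.5, error 0.5 for two classes).
```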
Splitting Based on INFO…
• Information gain: parent node p is split into k partitions; n_i is the number of records in partition i:
  GAIN_split = Entropy(p) − Σ_{i=1..k} (n_i / n) · Entropy(i)
• Measures the reduction in entropy achieved because of the split. Choose the split that achieves the most reduction (maximizes GAIN).
• Used in ID3 and C4.5.
• Disadvantage: tends to prefer splits that result in a large number of partitions, each being small but pure.
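The information gain ratio mentioned earlier (used by C4.5) addresses this bias by dividing the gain by the entropy of the split itself (often called SplitInfo). A minimal sketch, with names chosen for illustration:

```python
from collections import Counter
from math import log2

def entropy(labels):
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def gain_and_ratio(examples, attr, target):
    """Return (information gain, gain ratio) for splitting `examples` on `attr`."""
    n = len(examples)
    parent = entropy([ex[target] for ex in examples])
    partitions = {}
    for ex in examples:
        partitions.setdefault(ex[attr], []).append(ex[target])
    gain = parent - sum((len(p) / n) * entropy(p) for p in partitions.values())
    # SplitInfo penalizes attributes that shatter the data into many tiny partitions.
    split_info = -sum((len(p) / n) * log2(len(p) / n) for p in partitions.values())
    return gain, (gain / split_info if split_info > 0 else 0.0)
```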
Review of log₂
log₂(0) is undefined (but by convention 0 · log₂ 0 = 0 in the entropy formula)
log₂(1) = 0
log₂(2) = 1
log₂(4) = 2
log₂(1/2) = −1
log₂(1/4) = −2
(1/2) · log₂(1/2) = (1/2)(−1) = −1/2
Classification: A Two-Step Process
• Model construction: describing a set of predetermined classes based on a training set; also called learning.
  • Each tuple/sample is assumed to belong to a predefined class.
  • The model is represented as classification rules, decision trees, or mathematical formulae.
• Model usage: classifying future test data/objects.
  • Estimate the accuracy of the model: the known label of each test example is compared with the classified result from the model; the accuracy rate is the percentage of test cases that are correctly classified by the model.
  • If the accuracy is acceptable, use the model to classify data tuples whose class labels are not known.
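As a concrete illustration of the two steps, a minimal sketch using scikit-learn (assuming it is installed; the toy data and attribute values are invented for the example, not taken from the slides):

```python
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Invented toy data: two numeric attributes and a binary class label.
X = [[25, 30_000], [45, 90_000], [35, 60_000], [50, 120_000],
     [23, 20_000], [40, 80_000], [30, 40_000], [60, 150_000]]
y = ["no", "yes", "no", "yes", "no", "yes", "no", "yes"]

# Step 1: model construction (learning) on the training data.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = DecisionTreeClassifier()
model.fit(X_train, y_train)

# Step 2: model usage -- estimate accuracy on held-out test data,
# then classify unseen examples if the accuracy is acceptable.
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
print("prediction for an unseen case:", model.predict([[33, 70_000]]))
```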
Decision Tree Classification Task
Classification Process (1): Model Construction
[Figure: the training data are fed to a classification algorithm, which produces a classifier (model), e.g. the rule: IF rank = 'professor' OR years > 6 THEN tenured = 'yes'.]
Classification Process (2): Use the Model in Prediction
[Figure: the classifier is evaluated on testing data and then applied to unseen data, e.g. (Jeff, Professor, 4) → Tenured?]
Example
• Consider the problem of learning whether or not to go jogging on a particular day.
ID3: Some Issues
• Sometimes we arrive at a node with no examples. This means that this combination of attribute values has not been observed; we simply assign the majority vote of its parent as the value.
• Sometimes we arrive at a node with both positive and negative examples and no attributes left. This means that there is noise in the data; we simply assign the majority vote of the examples as the value.
Problems with Decision Trees
• ID3 is not optimal:
  • it uses expected entropy reduction, not actual reduction.
• It must use discrete (or discretized) attributes:
  • What if we left for work at 9:30 AM?
  • We could break the attributes down into smaller ranges of values… (see the threshold sketch below)
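One common way to discretize a continuous attribute (as done, for example, in C4.5) is to try candidate thresholds between consecutive sorted values and keep the one with the highest information gain. A minimal sketch, with invented departure-time data:

```python
from collections import Counter
from math import log2

def entropy(labels):
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in counts.values())

def best_threshold(values, labels):
    """Try midpoints between consecutive sorted values; return (threshold, gain)."""
    pairs = sorted(zip(values, labels))
    base = entropy(labels)
    best = (None, 0.0)
    for (v1, _), (v2, _) in zip(pairs, pairs[1:]):
        if v1 == v2:
            continue
        t = (v1 + v2) / 2
        left = [lab for v, lab in pairs if v <= t]
        right = [lab for v, lab in pairs if v > t]
        gain = (base
                - (len(left) / len(pairs)) * entropy(left)
                - (len(right) / len(pairs)) * entropy(right))
        if gain > best[1]:
            best = (t, gain)
    return best

# Invented example: departure time (hours) vs. whether we arrived on time.
times  = [8.0, 8.5, 9.0, 9.5, 10.0, 10.5]
labels = ["on-time", "on-time", "on-time", "late", "late", "late"]
print(best_threshold(times, labels))   # splits around 9.25 with gain 1.0
```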
Problems with Decision Trees
• While decision trees classify quickly, the time for building a tree may be higher than for other types of classifiers.
• Decision trees suffer from errors propagating down the tree:
  • a very serious problem as the number of classes increases.
Decision Tree characteristics • The training data may contain missing attribute values. • Decision tree methods can be used even when some training examples have unknown values.
Unknown Attribute Values
What if some examples are missing values of attribute A? Use the training example anyway, and sort it through the tree:
• If node n tests A, assign the most common value of A among the other examples sorted to node n.
• Or assign the most common value of A among the other examples with the same target value.
• Or assign probability p_i to each possible value v_i of A, and assign fraction p_i of the example to each descendant in the tree.
A small sketch of the first two strategies follows.
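A minimal sketch of the first two imputation strategies (most common value overall versus most common value among examples with the same class). The data layout and names are assumptions for illustration:

```python
from collections import Counter

def most_common_value(examples, attr, target=None, target_value=None):
    """Most common value of `attr`, optionally restricted to one target class."""
    pool = [ex for ex in examples
            if ex.get(attr) is not None
            and (target is None or ex[target] == target_value)]
    return Counter(ex[attr] for ex in pool).most_common(1)[0][0]

examples = [
    {"Outlook": "Sunny", "Wind": "Weak",   "Play": "No"},
    {"Outlook": "Rain",  "Wind": "Strong", "Play": "No"},
    {"Outlook": "Rain",  "Wind": "Weak",   "Play": "Yes"},
    {"Outlook": "Sunny", "Wind": None,     "Play": "Yes"},   # missing Wind value
]

incomplete = examples[3]
# Strategy 1: most common Wind value among all examples at this node.
fill_1 = most_common_value(examples, "Wind")
# Strategy 2: most common Wind value among examples with the same class ("Yes").
fill_2 = most_common_value(examples, "Wind",
                           target="Play", target_value=incomplete["Play"])
print(fill_1, fill_2)   # "Weak", "Weak"
```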
Rule Generation
Once a decision tree has been constructed, it is a simple matter to convert it into a set of rules.
• Converting to rules allows distinguishing among the different contexts in which a decision node is used.
Rule Generation
• Converting to rules improves readability: rules are often easier for people to understand.
• To generate rules, trace each path in the decision tree from the root node to a leaf node.
Rule Simplification
Once a rule set has been devised:
• First, simplify individual rules by eliminating redundant and unnecessary rules.
• Then attempt to replace the rules that share the most common consequent with a default rule that is triggered when no other rule is triggered.