Data Mining – Algorithms: Prism – Learning Rules via Separating and Covering Chapter 4, Section 4.4
Rules
• Rules can be read directly off a decision tree – but those might not be the most compact or effective rules
• A common alternative is the covering approach – take each class in turn and find rules that "cover" all instances in it, while excluding instances not in the class
• A minimal sketch of this separate-and-conquer loop appears below
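To make the covering idea concrete, here is a minimal Python sketch of the separate-and-conquer loop (the dict encoding of instances, the function names, and the accuracy threshold parameter are our own illustration, not the book's pseudocode or WEKA's code):

```python
def best_condition(instances, cls, target, used_attrs):
    """Pick the attribute=value test whose covered instances are most
    often of class cls (accuracy p/t), breaking ties toward coverage."""
    best, best_key = None, (-1.0, 0)
    for attr in instances[0]:
        if attr == target or attr in used_attrs:
            continue
        for value in {inst[attr] for inst in instances}:
            covered = [i for i in instances if i[attr] == value]
            p = sum(1 for i in covered if i[target] == cls)
            key = (p / len(covered), p)      # accuracy first, then coverage
            if key > best_key:
                best, best_key = (attr, value), key
    return best, best_key[0]

def learn_rules_for_class(instances, cls, target, threshold=1.0):
    """Grow rules for cls until every cls instance is covered."""
    rules, remaining = [], list(instances)
    while any(i[target] == cls for i in remaining):
        conds, covered = [], remaining
        while True:                          # refine one rule clause by clause
            cond, acc = best_condition(covered, cls, target,
                                       {a for a, _ in conds})
            if cond is None:                 # no attributes left to test
                break
            conds.append(cond)
            covered = [i for i in covered if i[cond[0]] == cond[1]]
            if acc >= threshold:             # rule is accurate enough
                break
        rules.append(conds)
        # "separate": temporarily toss the instances this rule covers
        remaining = [i for i in remaining
                     if not all(i[a] == v for a, v in conds)]
    return rules
```

With the instances below encoded as dicts (e.g. {"Outlook": "sunny", "Temp": "hot", ...}) and attributes iterated in table order, the first rule this sketch grows for play? = yes is outlook = sunny & temp = mild, matching the walkthrough that follows.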
Let’s use My Weather Data Again
• Again, let’s make this a little more realistic than the book does
• Divide the data into training and test sets
• Let’s save the last record as the test instance
• (using my weather, nominal data, and assuming we’re working on the play? = yes class first)
• We’re looking for a rule of the form: if ___ then play? = yes
• Possible ways of filling the blank include:
• Outlook = sunny
• Outlook = overcast
• …
• Temperature = hot
• …
Find the best filler using training data
• We look at the proportion of instances that match the left-hand side (the condition) that also match the right-hand side (the class)
• On the 13 training instances, Outlook = sunny scores 4/5 and Windy = TRUE also scores 4/5 – the two best candidates; Outlook = sunny is taken first
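Here is one way to compute that proportion (a sketch; the function name and the dict encoding of instances are ours):

```python
def accuracy(instances, attr, value, cls, target="Play?"):
    """Return (p, t): of the t instances matching attr == value,
    p also have class cls. The rule's accuracy is p/t."""
    covered = [i for i in instances if i[attr] == value]
    p = sum(1 for i in covered if i[target] == cls)
    return p, len(covered)

# On the 13 training instances, for example:
#   accuracy(train, "Outlook", "sunny", "yes")  -> (4, 5)
#   accuracy(train, "Windy",   "TRUE",  "yes")  -> (4, 5)
#   accuracy(train, "Outlook", "rainy", "yes")  -> (0, 4)
```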
Refining Rule
• If this rule is not accurate enough for us (based on a threshold), we try to refine it by adding one or more clauses
• Now we’re looking to fill in a clause in: if outlook = sunny and _____ then play? = yes
• We consider the accuracy of all possible ways of filling this blank …
Find the best filler using training data
• We look at the proportion of instances that match the left-hand side that also match the right-hand side
• Among the five outlook = sunny training instances, Temperature = mild scores a perfect 2/2 (and with the greatest coverage), giving: if outlook = sunny and temp = mild then play? = yes
Still more to cover though
• This rule covers only 2 of the 6 play = yes days
• This approach looks for pockets of success, whereas ID3 looks more at the big picture
• So we temporarily toss those 2 covered instances and work on another rule
Example: My Weather (Nominal)

Outlook   Temp  Humid   Windy  Play?
sunny     hot   high    FALSE  no
sunny     hot   high    TRUE   yes
overcast  hot   high    FALSE  no
rainy     mild  high    FALSE  no
rainy     cool  normal  FALSE  no
rainy     cool  normal  TRUE   no
overcast  cool  normal  TRUE   yes
sunny     cool  normal  FALSE  yes
rainy     mild  normal  FALSE  no
overcast  mild  high    TRUE   yes
overcast  hot   normal  FALSE  no
rainy     mild  high    TRUE   no    (TEST)
We’re Looking for another rule …
• … of the form: if ___ then play? = yes
• Again, possible ways of filling the blank include:
• Outlook = sunny
• Outlook = overcast
• …
• Temperature = hot
• …
• However, our data is a little different now
Find the best filler using training data
• We look at the proportion of instances that match the left-hand side that also match the right-hand side
• On the reduced data, Windy = TRUE scores best at 3/4
Refining Rule
• If this rule is not accurate enough for us (based on a threshold), we try to refine it by adding one or more clauses
• Now we’re looking to fill in a clause in: if windy = TRUE and _____ then play? = yes
• We consider the accuracy of all possible ways of filling this blank …
Find the best filler using training data
• We look at the proportion of instances that match the left-hand side that also match the right-hand side
• Humid = high scores a perfect 2/2, giving: if windy = TRUE and humid = high then play? = yes
Still more to cover though
• The rules so far cover 4 of the 6 play = yes days
• So we temporarily toss the 2 instances covered by the second rule and work on another rule
Example: My Weather (Nominal)

Outlook   Temp  Humid   Windy  Play?
sunny     hot   high    FALSE  no
overcast  hot   high    FALSE  no
rainy     mild  high    FALSE  no
rainy     cool  normal  FALSE  no
rainy     cool  normal  TRUE   no
overcast  cool  normal  TRUE   yes
sunny     cool  normal  FALSE  yes
rainy     mild  normal  FALSE  no
overcast  hot   normal  FALSE  no
rainy     mild  high    TRUE   no    (TEST)
We’re Looking for another rule …
• … of the form: if ___ then play? = yes
• Again, we’ll try all possible ways of filling the blank
• … on our reduced data
Find the best filler using training data
• We look at the proportion of instances that match the left-hand side that also match the right-hand side
• This time the best candidate, Temperature = cool, scores only 2/4 (50%)
Refining Rule
• If this rule is not accurate enough for us (based on a threshold – and at 50% it almost assuredly isn’t), we try to refine it by adding one or more clauses
• Now we’re looking to fill in a clause in: if temp = cool and _____ then play? = yes
• We consider the accuracy of all possible ways of filling this blank …
Find the best filler using training data
• We look at the proportion of instances that match the left-hand side that also match the right-hand side
• Outlook = sunny scores a perfect 1/1, giving: if temp = cool and outlook = sunny then play? = yes
So Far, We Have 3 Rules …
• if Outlook = Sunny & Temp = Mild Then Play? = yes
• If Windy = TRUE & Humid = High Then Play? = yes
• If Temp = Cool & Outlook = Sunny Then Play? = yes
• Still more to cover though – these rules cover 5 of the 6 play = yes days
• So we temporarily toss the 1 instance covered by the third rule and work on another rule
Example: My Weather (Nominal)

Outlook   Temp  Humid   Windy  Play?
sunny     hot   high    FALSE  no
overcast  hot   high    FALSE  no
rainy     mild  high    FALSE  no
rainy     cool  normal  FALSE  no
rainy     cool  normal  TRUE   no
overcast  cool  normal  TRUE   yes
rainy     mild  normal  FALSE  no
overcast  hot   normal  FALSE  no
rainy     mild  high    TRUE   no    (TEST)
Again we’re looking for another rule …
• … of the form: if ___ then play? = yes
• Again, we’ll try all possible ways of filling the blank
• … on our reduced data
Find the best filler using training data
• We look at the proportion of instances that match the left-hand side that also match the right-hand side
• The best candidate, Windy = TRUE, again scores only 1/2 (50%)
Refining Rule
• If this rule is not accurate enough for us (based on a threshold – and at 50% it almost assuredly isn’t), we try to refine it by adding one or more clauses
• Now we’re looking to fill in a clause in: if windy = TRUE and _____ then play? = yes
• We consider the accuracy of all possible ways of filling this blank …
Find the best filler using training data
• We look at the proportion of instances that match the left-hand side that also match the right-hand side
• Outlook = overcast scores a perfect 1/1, giving: if windy = TRUE and outlook = overcast then play? = yes
We’ve Covered all Yes Instances
• We Have 4 Rules …
• if Outlook = Sunny & Temp = Mild Then Play? = yes
• If Windy = TRUE & Humid = High Then Play? = yes
• If Temp = Cool & Outlook = Sunny Then Play? = yes
• If Windy = TRUE & Outlook = Overcast Then Play? = yes
• It’s time to work on the next class
• (remember to bring back all of the instances)
• (since it is the last class, we might instead create a default rule – anything else is play? = no)
Find the best filler using training data
• We look at the proportion of instances that match the left-hand side that also match the right-hand side (play? = no)
• On the full training data, Outlook = rainy scores a perfect 4/4, giving: if outlook = rainy then play? = no
Still more to cover though
• This rule only covers 4 of the 7 play = no days
• So we temporarily toss those 4 instances and work on another rule
Example: My Weather (Nominal)

Outlook   Temp  Humid   Windy  Play?
sunny     hot   high    FALSE  no
sunny     hot   high    TRUE   yes
overcast  hot   high    FALSE  no
overcast  cool  normal  TRUE   yes
sunny     mild  high    FALSE  yes
sunny     cool  normal  FALSE  yes
sunny     mild  normal  TRUE   yes
overcast  mild  high    TRUE   yes
overcast  hot   normal  FALSE  no
rainy     mild  high    TRUE   no    (TEST)
We’re Looking for another rule …
• … of the form: if ___ then play? = no
Find the best filler using training data
• We look at the proportion of instances that match the left-hand side that also match the right-hand side
• Temperature = hot scores best at 3/4
Refining Rule
• If this rule is not accurate enough for us (based on a threshold), we try to refine it by adding one or more clauses
• Now we’re looking to fill in a clause in: if temp = hot and _____ then play? = no
• We consider the accuracy of all possible ways of filling this blank …
Find the best filler using training data
• We look at the proportion of instances that match the left-hand side that also match the right-hand side
• Windy = FALSE scores a perfect 3/3, giving: if temp = hot and windy = FALSE then play? = no
We’ve Done It!
• The 2 play = no rules cover all 7 of the play = no days
• So we have a six-rule set based on this training data:
• if Outlook = Sunny & Temp = Mild Then Play? = yes
• If Windy = TRUE & Humid = High Then Play? = yes
• If Temp = Cool & Outlook = Sunny Then Play? = yes
• If Windy = TRUE & Outlook = Overcast Then Play? = yes
• If Outlook = Rainy Then Play? = no
• If Temp = Hot & Windy = False Then Play? = no
• Note that the rules for a given category are considered an ordered list, but between categories no order is implied – there may be a conflict!
Now, suppose we must predict the test instance
• Rainy, mild, high, TRUE
• Rule 2 concludes play? = yes (incorrectly)
• Rule 5 concludes play? = no (correctly)
• One possible way of dealing with this conflict is to favor the rule with the greatest coverage (the most training instances in support of it)
• In this case, Rule 2 has 2 instances in support and Rule 5 has 4, so the conflict resolves to play? = no; a sketch of this resolution follows
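A minimal sketch of applying the six rules and resolving a conflict by training-set coverage (the encoding and function names are ours; the per-rule coverage counts come from the walkthrough above):

```python
RULES = [
    # (class, conditions, number of training instances the rule covered)
    ("yes", {"Outlook": "sunny", "Temp": "mild"},        2),
    ("yes", {"Windy": "TRUE",    "Humid": "high"},       2),
    ("yes", {"Temp": "cool",     "Outlook": "sunny"},    1),
    ("yes", {"Windy": "TRUE",    "Outlook": "overcast"}, 1),
    ("no",  {"Outlook": "rainy"},                        4),
    ("no",  {"Temp": "hot",      "Windy": "FALSE"},      3),
]

def predict(instance, default=None):
    """Fire every matching rule; on a conflict, keep the class of the
    firing rule with the greatest training coverage."""
    firing = [(cls, support) for cls, conds, support in RULES
              if all(instance.get(a) == v for a, v in conds.items())]
    if not firing:
        return default           # e.g. a default rule for the last class
    return max(firing, key=lambda f: f[1])[0]

test = {"Outlook": "rainy", "Temp": "mild", "Humid": "high", "Windy": "TRUE"}
print(predict(test))   # Rule 2 (support 2) and Rule 5 (support 4) fire -> 'no'
```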
In a 14-fold cross-validation, this would continue 13 more times
• Let’s run WEKA on this … Prism …
WEKA results – first look near the bottom

=== Stratified cross-validation ===
=== Summary ===
Correctly Classified Instances     12    85.7143 %
Incorrectly Classified Instances    2    14.2857 %

• On the cross-validation it got 12 of the 14 tests correct
• Wins BIG over the other approaches tried so far!
More Detailed Results

=== Confusion Matrix ===
  a  b   <-- classified as
  5  1 |  a = yes
  1  7 |  b = no

• The program predicted play = yes 6 times; on 5 of those it was correct
• The program predicted play = no 8 times; on 7 of those it was correct
• There were 6 instances whose actual value was play = yes; the program correctly predicted 5 of them
• There were 8 instances whose actual value was play = no; the program correctly predicted 7 of them
• All in all, uniformly good prediction
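The same per-class numbers can be read off the matrix programmatically (a small aside in our own code, not WEKA output; rows are actual classes, columns are predictions):

```python
matrix = {("yes", "yes"): 5, ("yes", "no"): 1,
          ("no",  "yes"): 1, ("no",  "no"): 7}

for cls in ("yes", "no"):
    predicted = sum(n for (a, p), n in matrix.items() if p == cls)  # column sum
    actual    = sum(n for (a, p), n in matrix.items() if a == cls)  # row sum
    correct   = matrix[(cls, cls)]                                  # diagonal
    print(f"{cls}: {correct}/{predicted} predictions correct, "
          f"{correct}/{actual} actual instances found")
# yes: 5/6 predictions correct, 5/6 actual instances found
# no: 7/8 predictions correct, 7/8 actual instances found
```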
Again, part of our purpose is to have a take-home message for humans
• Not 14 take-home messages!
• So instead of reporting each of the things learned on each of the 14 training sets …
• … the program runs again on all of the data and builds a pattern for that – a take-home message
WEKA – Take-Home

=== Classifier model (full training set) ===
Prism rules
----------
If outlook = sunny and temperature = mild then yes
If outlook = sunny and temperature = cool then yes
If windy = TRUE and outlook = overcast then yes
If outlook = sunny and windy = TRUE then yes
If outlook = rainy then no
If temperature = hot and windy = FALSE then no
Let’s Try WEKA Prism on njcrimenominal
• Try 10-fold cross-validation

=== Confusion Matrix ===
  a  b   <-- classified as
  5  2 |  a = bad
  6 19 |  b = ok

• This represents the same accuracy as with Naïve Bayes (5 + 19 = 24 of 32 correct, 75%)
• We note that OneR chose unemployment as the attribute to use; with Prism, unemployment is the first thing tested for each class, but if it is not high or low, other attributes are taken into account …
Prism’s rules for njcrimenominal:

=== Classifier model (full training set) ===
Prism rules
----------
If unemploy = hi then bad
If popdens = med and education = low then bad
If pop = med and popdens = med then bad
If unemploy = med and education = low and pop = low then bad
If education = med and unemploy = med and twoparent = med then bad
If unemploy = low then ok
If education = hi then ok
If pop = med and popdens = low then ok
If twoparent = low and unemploy = med and popdens = low then ok
Prism – Missing Values
• Prism cannot handle missing values
Prism – Numeric Values
• Prism cannot handle numeric attributes
• It is easy to imagine a simple rule learner that could handle them (as regular attributes)
• See the example introducing this section, where thresholds are chosen for numeric attributes as part of adding clauses to rules – a sketch of generating such threshold tests follows
• There is no chance of it ever handling numeric prediction
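A hedged sketch of how a Prism-style learner could form candidate tests on a numeric attribute (our own illustration, not Prism itself): take midpoints between adjacent distinct sorted values as thresholds, then score each test with the same proportion measure used for nominal attribute = value tests.

```python
def numeric_candidates(instances, attr):
    """Yield (attr, op, threshold) tests at midpoints between adjacent
    distinct values of the numeric attribute."""
    values = sorted({inst[attr] for inst in instances})
    for lo, hi in zip(values, values[1:]):
        threshold = (lo + hi) / 2
        yield (attr, "<",  threshold)
        yield (attr, ">=", threshold)

# e.g. observed temperatures 64, 68, 70 (hypothetical values) would yield
# Temp < 66.0, Temp >= 66.0, Temp < 69.0, Temp >= 69.0
```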
Prism – Discussion
• Prism tries to fit the training data 100%
• This presents a serious risk of overfitting!!
• A simple variation is to lower the accuracy threshold
• May need experimentation to find a suitable threshold
• Needs conflict resolution between classes if more than one class is predicted
• Needs a means of dealing with the case where no class is predicted
Class Exercise
• Let’s run WEKA Prism on japanbank
• Prism needs nominal attributes – so discretize first