1. Association Rule Mining
2. The Task
Two ways of defining the task
General
Input: A collection of instances
Output: rules to predict the values of any attribute(s) (not just the class attribute) from values of other attributes
E.g. if temperature = cool then humidity = normal
If the right-hand side of a rule has only the class attribute, then the rule is a classification rule
Distinction: classification rules are applied together as a set of rules, whereas association rules are applied individually
Specific - Market-basket analysis
Input: a collection of transactions
Output: rules to predict the occurrence of any item(s) from the occurrence of other items in a transaction
E.g. {Milk, Diaper} → {Beer}
General rule structure:
Antecedents → Consequents
3. Association Rule Mining
Given a set of transactions, find rules that will predict the occurrence of an item based on the occurrences of other items in the transaction
4. Definition: Frequent Itemset
Itemset
A collection of one or more items
Example: {Milk, Bread, Diaper}
k-itemset
An itemset that contains k items
Support count (σ)
Frequency of occurrence of an itemset
E.g. σ({Milk, Bread, Diaper}) = 2
Support
Fraction of transactions that contain an itemset
E.g. s({Milk, Bread, Diaper}) = 2/5
Frequent Itemset
An itemset whose support is greater than or equal to a minsup threshold
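As an illustration, here is a minimal Python sketch of these two definitions. The transaction table itself is not reproduced in the text; the five transactions below are a hypothetical market-basket example consistent with the counts quoted above (σ = 2, s = 2/5).

transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def support_count(itemset, transactions):
    """sigma(X): number of transactions containing every item of X."""
    return sum(1 for t in transactions if itemset <= t)

def support(itemset, transactions):
    """s(X): fraction of transactions containing X."""
    return support_count(itemset, transactions) / len(transactions)

print(support_count({"Milk", "Bread", "Diaper"}, transactions))  # 2
print(support({"Milk", "Bread", "Diaper"}, transactions))        # 0.4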
5. Definition: Association Rule
An association rule is an implication of the form X → Y, where X and Y are disjoint itemsets
E.g. {Milk, Diaper} → {Beer}
Support (s): fraction of transactions that contain both X and Y, s(X → Y) = σ(X ∪ Y) / |T|
Confidence (c): how often items in Y appear in transactions that contain X, c(X → Y) = σ(X ∪ Y) / σ(X)
6. Association Rule Mining Task
Given a set of transactions T, the goal of association rule mining is to find all rules having
support ≥ minsup threshold
confidence ≥ minconf threshold
Brute-force approach:
List all possible association rules
Compute the support and confidence for each rule
Prune rules that fail the minsup and minconf thresholds
⇒ Computationally prohibitive!
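The blow-up can be quantified: each of the d items can appear on the left side of a rule, on the right side, or in neither, so the total number of possible rules is R = 3^d − 2^(d+1) + 1 (subtracting the assignments with an empty left or right side, and adding back the doubly-subtracted empty rule). Even for d = 6 this gives R = 729 − 128 + 1 = 602 candidate rules, and real transaction data has thousands of items.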
7. Mining Association Rules
8. Mining Association Rules
Two-step approach:
Frequent Itemset Generation
Generate all itemsets whose support ≥ minsup
Rule Generation
Generate high confidence rules from each frequent itemset, where each rule is a binary partitioning of a frequent itemset
Frequent itemset generation is still computationally expensive
9. Power Set
Given a set S, the power set P is the set of all subsets of S
Known property of power sets:
If S has n elements, P has N = 2^n elements.
Examples:
For S = {}, P = {{}}, N = 2^0 = 1
For S = {Milk}, P = {{}, {Milk}}, N = 2^1 = 2
For S = {Milk, Diaper},
P = {{}, {Milk}, {Diaper}, {Milk, Diaper}}, N = 2^2 = 4
For S = {Milk, Diaper, Beer},
P = {{}, {Milk}, {Diaper}, {Beer}, {Milk, Diaper}, {Diaper, Beer}, {Milk, Beer}, {Milk, Diaper, Beer}}, N = 2^3 = 8
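A minimal Python sketch of power-set enumeration, using only the standard library:

from itertools import chain, combinations

def power_set(s):
    """All subsets of s, from the empty set up to s itself."""
    items = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(items, k) for k in range(len(items) + 1))]

print(len(power_set({"Milk", "Diaper", "Beer"})))  # 8 == 2^3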
10. Brute-Force Approach to Frequent Itemset Generation
For an itemset with 3 elements, we have 8 subsets
Each subset is a candidate frequent itemset which needs to be matched against each transaction
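To make this concrete: with d distinct items there are M = 2^d − 1 non-empty candidate itemsets, and matching every candidate against every transaction costs O(N · M · w) comparisons for N transactions of maximum width w, i.e. a cost exponential in the number of items.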
11. Reducing the Number of Candidates
Apriori principle:
If an itemset is frequent, then all of its subsets must also be frequent
The Apriori principle holds due to the following property of the support measure:
the support of an itemset never exceeds the support of its subsets, i.e. for itemsets X ⊆ Y, s(Y) ≤ s(X), because every transaction that contains Y also contains X
This is known as the anti-monotone property of support
Equivalently, by contraposition: if an itemset is infrequent, then all of its supersets must be infrequent; this is what lets Apriori prune candidates
12. Illustrating Apriori Principle
For example, if {Milk, Beer} is found to be infrequent, then every superset of it, such as {Milk, Diaper, Beer}, can be pruned without ever counting its support
13. Apriori Algorithm
Method:
Let k=1
Generate frequent itemsets of length 1
Repeat until no new frequent itemsets are identified
Generate length (k+1) candidate itemsets from length k frequent itemsets
Prune candidate itemsets containing subsets of length k that are infrequent
Count the support of each candidate by scanning the DB
Eliminate candidates that are infrequent, leaving only those that are frequent
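The method above as a compact Python sketch. This is an illustrative implementation under assumptions: the transactions are the same hypothetical five-transaction table used earlier, minsup is given as a fraction, and length-(k+1) candidates are generated by merging pairs of frequent k-itemsets.

from itertools import combinations

def apriori(transactions, minsup):
    """Return all itemsets with support >= minsup (a fraction)."""
    n = len(transactions)
    def sup(c):
        # Count support by scanning the transaction list
        return sum(1 for t in transactions if c <= t) / n
    items = {i for t in transactions for i in t}
    # k = 1: frequent 1-itemsets
    current = [frozenset([i]) for i in items if sup(frozenset([i])) >= minsup]
    frequent = list(current)
    k = 1
    while current:
        k += 1
        # Generate length-k candidates by merging frequent (k-1)-itemsets
        candidates = {a | b for a in current for b in current if len(a | b) == k}
        # Prune candidates that contain an infrequent (k-1)-subset
        prev = set(current)
        candidates = [c for c in candidates
                      if all(frozenset(s) in prev for s in combinations(c, k - 1))]
        # Count support by scanning the DB; keep only the frequent candidates
        current = [c for c in candidates if sup(c) >= minsup]
        frequent.extend(current)
    return frequent

transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]
for f in sorted(apriori(transactions, minsup=0.6), key=len):
    print(sorted(f))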
14. Rule Generation
Given a frequent itemset L, find all non-empty subsets f ⊂ L such that f → L − f satisfies the minimum confidence requirement
If {A,B,C,D} is a frequent itemset, candidate rules:
ABC → D, ABD → C, ACD → B, BCD → A,
A → BCD, B → ACD, C → ABD, D → ABC,
AB → CD, AC → BD, AD → BC, BC → AD, BD → AC, CD → AB
If |L| = k, then there are 2^k − 2 candidate association rules (ignoring L → ∅ and ∅ → L)
Because rules are generated from frequent itemsets, they automatically satisfy the minimum support threshold
Rule generation therefore only needs to check the minimum confidence threshold
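A small sketch of the binary-partition enumeration described above: each non-empty proper subset f of L becomes the left-hand side of one candidate rule f → L − f.

from itertools import combinations

def candidate_rules(L):
    """Yield (antecedent, consequent) for every binary partition of L."""
    items = list(L)
    for k in range(1, len(items)):           # subset sizes 1 .. |L|-1
        for f in combinations(items, k):
            lhs = frozenset(f)
            yield lhs, frozenset(L) - lhs

rules = list(candidate_rules({"A", "B", "C", "D"}))
print(len(rules))  # 2^4 - 2 == 14, as stated above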
15. Rule Generation
How to efficiently generate rules from frequent itemsets?
In general, confidence does not have an anti-monotone property
c(ABC → D) can be larger or smaller than c(AB → D)
But confidence of rules generated from the same itemset has an anti-monotone property
e.g., L = {A,B,C,D}: c(ABC → D) ≥ c(AB → CD) ≥ c(A → BCD)
Confidence is anti-monotone w.r.t. the number of items on the RHS of the rule
Example: consider the following two rules
{Milk} → {Diaper, Beer} has confidence 0.5
{Milk, Diaper} → {Beer} has confidence 0.67 > 0.5
16. Computing Confidence for Rules
Unlike computing support, computing confidence does not require several passes over the transaction list
Supports computed from frequent itemset generation can be reused
The side tables (not reproduced here) show support values for all non-empty subsets of the itemset {Bread, Milk, Diaper}
Confidence values are shown for the (2^3 − 2) = 6 rules generated from this frequent itemset
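A sketch of that reuse in Python. The support values below stand in for the missing side tables; they are the supports that the hypothetical five-transaction example used earlier gives for the subsets of {Bread, Milk, Diaper}.

# Confidence from cached supports: c(f -> L-f) = s(L) / s(f)
support = {
    frozenset({"Bread"}): 0.8,
    frozenset({"Milk"}): 0.8,
    frozenset({"Diaper"}): 0.8,
    frozenset({"Bread", "Milk"}): 0.6,
    frozenset({"Milk", "Diaper"}): 0.6,
    frozenset({"Bread", "Diaper"}): 0.6,
    frozenset({"Bread", "Milk", "Diaper"}): 0.4,
}

L = frozenset({"Bread", "Milk", "Diaper"})
for lhs in support:
    if lhs < L:  # proper subsets of L give the 6 rules lhs -> L - lhs
        conf = support[L] / support[lhs]
        print(sorted(lhs), "->", sorted(L - lhs), round(conf, 2))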
17. Rule Generation in the Apriori Algorithm
For each frequent k-itemset, where k ≥ 2:
Generate high-confidence rules with one item in the consequent
Using these rules, iteratively generate high-confidence rules with more than one item in the consequent
If any rule has low confidence, then every rule whose consequent contains that rule's consequent can be pruned (never generated)
Example itemset: {Bread, Milk, Diaper}
Rules with 1-item consequents:
{Bread, Milk} → {Diaper}
{Milk, Diaper} → {Bread}
{Diaper, Bread} → {Milk}
Rules with 2-item consequents:
{Bread} → {Milk, Diaper}
{Diaper} → {Milk, Bread}
{Milk} → {Diaper, Bread}
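A sketch of this level-wise generation, reusing the support dictionary from the previous sketch. Consequents of rejected rules are never merged into larger ones, which is exactly the pruning described above; the merge-based growth here is a simplified stand-in for Apriori's ap-genrules procedure.

def gen_rules(L, support, minconf):
    """Rules from one frequent itemset L, growing consequents level-wise."""
    L = frozenset(L)
    rules = []
    # Level 1: single-item consequents that pass minconf
    consequents = [frozenset([c]) for c in L
                   if support[L] / support[L - frozenset([c])] >= minconf]
    rules += [(L - h, h) for h in consequents]
    m = 1
    while consequents and m + 1 < len(L):
        m += 1
        # Merge surviving consequents into m-item consequents
        candidates = {a | b for a in consequents for b in consequents
                      if len(a | b) == m}
        consequents = [h for h in candidates
                       if support[L] / support[L - h] >= minconf]
        rules += [(L - h, h) for h in consequents]
    return rules

for lhs, rhs in gen_rules({"Bread", "Milk", "Diaper"}, support, minconf=0.5):
    print(sorted(lhs), "->", sorted(rhs))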
18. Evaluation
The support and confidence thresholds used by Apriori admit many rules that are not necessarily interesting
Two options to extract interesting rules
Using subjective knowledge
Using objective measures (measures better than confidence)
Subjective approaches
Visualization – users interactively verify the discovered rules
Template-based approach – filter out rules that do not fit user-specified templates
Subjective interestingness measure – filter out rules that are obvious (bread → butter) and rules that are non-actionable (do not lead to profits)
19. Drawback of Confidence
20. Objective Measures
Confidence estimates rule quality in terms of the antecedent's support but ignores the consequent's support
As seen on the previous slide, the support of the consequent (P(Coffee)) can be higher than the rule's confidence (P(Coffee | Tea))
Weka uses other objective measures
Lift(A → B) = confidence(A → B) / support(B) = support(A → B) / (support(A) × support(B))
Leverage(A → B) = support(A → B) − support(A) × support(B)
Conviction(A → B) = support(A) × support(not B) / support(A and not B)
Conviction is the inverted lift ratio computed with the negated consequent, i.e. Conviction(A → B) = 1 / Lift(A → not B); values above 1 indicate positive association
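The three measures as a Python sketch, evaluated on hypothetical tea/coffee numbers of the kind usually shown with the previous slide (s(Tea) = 0.2, s(Coffee) = 0.9, s(Tea, Coffee) = 0.15, so c(Tea → Coffee) = 0.75 looks strong even though the rule is negatively associated):

def lift(s_ab, s_a, s_b):
    # lift(A->B) = s(A,B) / (s(A) * s(B)); 1 means A and B are independent
    return s_ab / (s_a * s_b)

def leverage(s_ab, s_a, s_b):
    # difference between observed and expected joint support
    return s_ab - s_a * s_b

def conviction(s_ab, s_a, s_b):
    # s(A) * s(not B) / s(A and not B); undefined when confidence is 1
    return s_a * (1 - s_b) / (s_a - s_ab)

print(lift(0.15, 0.2, 0.9))        # ~0.83 < 1: negative association
print(leverage(0.15, 0.2, 0.9))    # -0.03
print(conviction(0.15, 0.2, 0.9))  # 0.4 < 1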