MACHINE LEARNING
What is learning? • A computer program learns if it improves its performance at some task through experience (T. Mitchell, 1997) • Any change in a system that allows it to perform better (Simon 1983)
What do we learn: • Descriptions • Rules for recognizing/classifying objects, states, and events • Rules for transforming an initial situation in order to achieve a goal (final state)
How do we learn: • Rote learning - storage of computed information. • Taking advice from others. (Advice may need to be operationalized.) • Learning from problem solving experiences - remembering experiences and generalizing from them. (May add efficiency but not new knowledge.) • Learning from examples. (May or may not involve a teacher.) • Learning by experimentation and discovery. (Decreasing burden on teacher, increasing burden on learner.)
Approaches to Machine Learning • Symbol-based learning • Connectionist learning • Evolutionary learning
Inductive Symbol-Based Machine Learning Concept Learning • Version space search • Decision trees: ID3 algorithm • Explanation-based learning • Supervised learning • Reinforcement learning
Version space search for concept learning • Concepts describe classes of objects • Concepts consist of feature sets • Operations on concept descriptions: • Generalization: replace a feature with a variable (e.g., (Japan, Honda, Blue) generalizes to (Japan, ?, Blue)) • Specialization: instantiate a variable with a feature (the reverse operation)
Positive and Negative examples of a concept • The concept description has to match all positive examples • The concept description has to reject (not match) all negative examples
Plausible descriptions • The version space represents all the alternative plausible descriptions of the concept • A plausible description is one that is applicable to all known positive examples and no known negative example.
Basic Idea • Given: • A representation language • A set of positive and negative examples expressed in that language • Compute:A concept description that is consistent with all the positive examples and none of the negative examples
Hypotheses The version space contains two sets of hypotheses: G – the most general hypotheses that match the training data S – the most specific hypotheses that match the training data Each hypothesis is represented as a vector of values of the known attributes
Example of a version space
Consider the task of obtaining a description of the concept 'Japanese economy car'. The attributes under consideration are: Origin, Manufacturer, Color, Decade, Type.
Training data:
Positive example: (Japan, Honda, Blue, 1980, Economy)
Positive example: (Japan, Honda, White, 1980, Economy)
Negative example: (Japan, Toyota, Green, 1970, Sports)
Example continued A general hypothesis that matches the positive data and rejects the negative example is: (?, Honda, ?, ?, Economy). The symbol '?' means that the attribute may take any value. (It is not the only such hypothesis, and not maximally general: the maximally general hypotheses consistent with these three examples are (?, Honda, ?, ?, ?), (?, ?, ?, 1980, ?) and (?, ?, ?, ?, Economy).) The most specific hypothesis that matches both positive examples is: (Japan, Honda, ?, 1980, Economy)
Algorithm: Candidate elimination • Initialize G to contain one element: the most general description (all features are variables). • Initialize S to empty. • Accept a new training example.
Process positive examples • Remove from G any descriptions that do not cover the example. • Generalize S as little as possible so that the new training example is covered. • Remove from S all elements that cover previously seen negative examples.
Process negative examples • Remove from S any descriptions that cover the negative example. • Specialize G as little as possible so that the negative example is not covered. • Remove from G all elements that do not cover the previously seen positive examples.
Algorithm continued Continue processing new training examples until one of the following occurs:
• Either S or G becomes empty: there is no hypothesis consistent with the training data. Stop.
• S and G are both singleton sets:
• if they are identical, output their value and stop;
• if they are different, the training examples were inconsistent. Output this result and stop.
• No more training examples, and G still contains several hypotheses: the version space is the disjunction of these hypotheses. If the hypotheses agree on a new example, we can classify it; if they disagree, we can take a majority vote.
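The candidate elimination procedure above can be sketched in Python. This is a minimal, simplified sketch (it omits some bookkeeping, e.g. re-checking S against earlier negative examples): hypotheses are tuples of attribute values with '?' as the "any value" wildcard, the set of possible values of each attribute is assumed to be given, and all function names are illustrative.

def covers(hypothesis, example):
    """True if every constrained attribute of the hypothesis matches the example."""
    return all(h == '?' or h == e for h, e in zip(hypothesis, example))

def minimal_generalization(s, example):
    """Relax s just enough to cover the example: conflicting attributes become '?'."""
    return tuple(si if si == ei else '?' for si, ei in zip(s, example))

def minimal_specializations(g, negative, domains):
    """One-step specializations of g (a '?' replaced by a value) that exclude the negative example."""
    specs = []
    for i, gi in enumerate(g):
        if gi == '?':
            for value in domains[i]:
                if value != negative[i]:
                    specs.append(g[:i] + (value,) + g[i + 1:])
    return specs

def candidate_elimination(examples, domains):
    """examples: list of (attribute_tuple, is_positive) pairs. Returns the sets S and G."""
    G = [tuple('?' for _ in domains)]
    S = []                                   # filled from the first positive example
    for x, positive in examples:
        if positive:
            # drop general hypotheses that do not cover the positive example,
            # then generalize S as little as possible so that it is covered
            G = [g for g in G if covers(g, x)]
            S = [minimal_generalization(s, x) for s in S] or [x]
        else:
            # drop specific hypotheses that cover the negative example,
            # then specialize G as little as possible to exclude it,
            # keeping only specializations that still cover S
            S = [s for s in S if not covers(s, x)]
            new_G = []
            for g in G:
                if not covers(g, x):
                    new_G.append(g)
                else:
                    new_G.extend(h for h in minimal_specializations(g, x, domains)
                                 if all(covers(h, s) for s in S))
            # keep only maximally general hypotheses
            G = [g for g in new_G if not any(h != g and covers(h, g) for h in new_G)]
    return S, G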
Learning the concept of "Japanese economy car" • Features: • Origin, Manufacturer, Color, Decade, Type • POSITIVE EXAMPLE: • (Japan, Honda, Blue, 1980, Economy) • Initialize G to singleton set that includes everything • Initialize S to singleton set that includes first positive example G = {(?, ?, ?, ?, ?)} S = {(Japan, Honda, Blue, 1980, Economy)}
Example continued • NEGATIVE EXAMPLE: • (Japan, Toyota, Green, 1970, Sports) • Specialize G to exclude negative example • G = {(?, Honda, ?, ?, ?), (?, ?, Blue, ?, ?), (?, ?, ?, 1980, ?), (?, ?, ?, ?, Economy)} • S = {(Japan, Honda, Blue, 1980, Economy)}
Example continued • POSITIVE EXAMPLE: • (Japan, Toyota, Blue, 1990, Economy) • Remove from G descriptions inconsistent with positive example • Generalize S to include positive example G = {(?, ?, Blue, ?, ?), (?, ?, ?, ?, Economy)} S = {(Japan, ?, Blue, ?, Economy)}
Example continued • NEGATIVE EXAMPLE: • (USA, Chrysler, Red, 1980, Economy) • Specialize G to exclude negative example (but staying within version space, i.e., staying consistent with S) G = {(?, ?, Blue, ?, ?), (Japan, ?, ?, ?, Economy)} S = {(Japan, ?, Blue, ?, Economy)}
Example continued • POSITIVE EXAMPLE: • (Japan, Honda, White, 1980, Economy) • Remove from G descriptions inconsistent with positive example • Generalize S to include the positive example G = {(Japan, ?, ?, ?, Economy)} S = {(Japan, ?, ?, ?, Economy)} • S = G, both singleton => done!
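For illustration, feeding this training sequence to the candidate_elimination sketch given after the algorithm slides reproduces the trace above. The attribute domains below are an assumption (only the values that occur in the examples are listed).

domains = [
    ['Japan', 'USA'],                      # Origin
    ['Honda', 'Toyota', 'Chrysler'],       # Manufacturer
    ['Blue', 'Green', 'Red', 'White'],     # Color
    ['1970', '1980', '1990'],              # Decade
    ['Economy', 'Sports'],                 # Type
]
examples = [
    (('Japan', 'Honda',    'Blue',  '1980', 'Economy'), True),
    (('Japan', 'Toyota',   'Green', '1970', 'Sports'),  False),
    (('Japan', 'Toyota',   'Blue',  '1990', 'Economy'), True),
    (('USA',   'Chrysler', 'Red',   '1980', 'Economy'), False),
    (('Japan', 'Honda',    'White', '1980', 'Economy'), True),
]
S, G = candidate_elimination(examples, domains)
print(S)   # [('Japan', '?', '?', '?', 'Economy')]
print(G)   # [('Japan', '?', '?', '?', 'Economy')]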
Decision trees • A decision tree is a structure that represents a procedure for classifying objects based on their attributes. • Each object is represented as a set of attribute/value pairs and a classification.
Example A set of medical symptoms might be represented as follows:
        Cough   Fever   Weight   Pain      Classification
Mary    no      yes     normal   throat    flu
Fred    no      yes     normal   abdomen   appendicitis
Julie   yes     yes     skinny   none      flu
Elvis   yes     no      obese    chest     heart disease
The system is given a set of training instances along with their correct classifications and develops a decision tree based on these examples.
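One possible way to represent these training instances in Python, as a sketch: each instance is a dict of attribute/value pairs plus its classification (the field names are illustrative).

patients = [
    {'name': 'Mary',  'cough': 'no',  'fever': 'yes', 'weight': 'normal', 'pain': 'throat',  'class': 'flu'},
    {'name': 'Fred',  'cough': 'no',  'fever': 'yes', 'weight': 'normal', 'pain': 'abdomen', 'class': 'appendicitis'},
    {'name': 'Julie', 'cough': 'yes', 'fever': 'yes', 'weight': 'skinny', 'pain': 'none',    'class': 'flu'},
    {'name': 'Elvis', 'cough': 'yes', 'fever': 'no',  'weight': 'obese',  'pain': 'chest',   'class': 'heart disease'},
]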
Attributes • If a crucial attribute is not represented, then no decision tree will be able to learn the concept. • If two training instances have the same representation but belong to different classes, then the attribute set is said to be inadequate. It is impossible for the decision tree to distinguish the instances.
ID3 Algorithm (Quinlan, 1986)
ID3(R, C, S) // R: list of attributes, C: the categorical (class) attribute, S: the training examples
• If all examples in S belong to the same class Cj, return a leaf labeled Cj.
• If R is empty, return a leaf labeled with the most frequent value of C in S.
• Else:
• select the "best" decision attribute A in R, with values v1, v2, …, vn, for the next node;
• divide the training set S into S1, …, Sn according to the values v1, …, vn;
• call ID3(R – {A}, C, S1), ID3(R – {A}, C, S2), …, ID3(R – {A}, C, Sn), i.e. recursively build subtrees T1, …, Tn for S1, …, Sn;
• return a node labeled A with the subtrees T1, T2, …, Tn as children.
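A minimal Python sketch of this recursion, assuming the "best" attribute is chosen by the entropy-based information gain defined on the following slides. Examples are represented as attribute/value records (as in the patient list sketched earlier), and all names are illustrative.

from collections import Counter
from math import log2

def entropy(examples, target):
    """Entropy of the class distribution of `examples` (a list of dicts)."""
    counts = Counter(e[target] for e in examples)
    total = len(examples)
    return -sum(c / total * log2(c / total) for c in counts.values())

def gain(examples, attribute, target):
    """Expected reduction in entropy from splitting on `attribute`."""
    total = len(examples)
    remainder = 0.0
    for value in {e[attribute] for e in examples}:
        subset = [e for e in examples if e[attribute] == value]
        remainder += len(subset) / total * entropy(subset, target)
    return entropy(examples, target) - remainder

def id3(attributes, target, examples):
    """Return a decision tree: either a class label or (attribute, {value: subtree})."""
    classes = {e[target] for e in examples}
    if len(classes) == 1:                      # all examples in the same class
        return classes.pop()
    if not attributes:                         # no attributes left: majority class
        return Counter(e[target] for e in examples).most_common(1)[0][0]
    best = max(attributes, key=lambda a: gain(examples, a, target))
    rest = [a for a in attributes if a != best]
    branches = {}
    for value in {e[best] for e in examples}:
        subset = [e for e in examples if e[best] == value]
        branches[value] = id3(rest, target, subset)
    return (best, branches)

# e.g. id3(['cough', 'fever', 'weight', 'pain'], 'class', patients) on the
# records sketched earlier picks 'pain' as the root, since its values
# determine the class in that tiny data set.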
Entropy • S: a sample of training examples • Entropy(S) = the expected number of bits needed to encode the classification of an arbitrary member of S • Information theory: an optimal-length code assigns -log2(p) bits to a message having probability p • In general, for c different classes: Entropy(S) = Σi=1..c (-pi * log2(pi))
Entropy of the Training Set • T: a set of records partitioned into classes C1, C2, …, Ck on the basis of the categorical attribute C • Probability of each class: pi = |Ci| / |T| • Info(T) = -p1*log2(p1) - … - pk*log2(pk) Info(T) is the information needed to classify an element of T.
How helpful is an attribute? • X: a non-categorical attribute • T1, …, Tn: the partition of T according to the n values of X The entropy of each Tk is: Info(Tk) = -(|Tk1|/|Tk|)*log2(|Tk1|/|Tk|) - … - (|Tkc|/|Tk|)*log2(|Tkc|/|Tk|) where Tk1, …, Tkc are the subsets of Tk that belong to each of the c classes defined by the categorical attribute C. For each k, Info(Tk) reflects how the categorical attribute C splits the set Tk.
Information Gain Info(X,T) = |T1|/|T| * Info(T1) + |T2|/|T| * Info(T2) + … + |Tn|/|T| * Info(Tn) Gain(X,T) = Info(T) – Info(X,T) = Entropy(T) - Σi (|Ti|/|T|)*Entropy(Ti)
Information Gain • Gain(X,T) - the expected reduction in entropy caused by partitioning the examples of T according to the attribute X. • Gain(X,T) - a measure of the effectiveness of an attribute in classifying the training data • The best attribute has maximal Gain(X,T)
Example (2) • Attribute "hair" Blonde: T1 = {Sarah, Dana, Annie, Katie} Brown: T2 = {Alex, Pete, John} Red: T3 = {Emily} • T1 is split by the class attribute C into 2 sets: • T11 = {Sarah, Annie}, T12 = {Dana, Katie} • Info(T1) = -2/4 * log2(2/4) - 2/4 * log2(2/4) = -log2(1/2) = 1 • In a similar way we compute • Info(T2) = 0, Info(T3) = 0 • Info('hair', T) = |T1|/|T| * Info(T1) + |T2|/|T| * Info(T2) + |T3|/|T| * Info(T3) = 4/8 * Info(T1) + 3/8 * Info(T2) + 1/8 * Info(T3) = 4/8 * 1 = 0.50 Since this is the smallest Info(X, T) among the attributes, 'hair' has the largest gain and is the best attribute.
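For illustration, the entropy and gain helpers from the ID3 sketch reproduce these numbers. The class labels below are an assumption taken from the classic sunburn version of this example; only the 2/2 class split among the blondes and the single-class groups T2 and T3 matter for the computation.

people = [
    {'name': 'Sarah', 'hair': 'blonde', 'class': 'sunburn'},   # assumed labels
    {'name': 'Annie', 'hair': 'blonde', 'class': 'sunburn'},
    {'name': 'Dana',  'hair': 'blonde', 'class': 'none'},
    {'name': 'Katie', 'hair': 'blonde', 'class': 'none'},
    {'name': 'Alex',  'hair': 'brown',  'class': 'none'},
    {'name': 'Pete',  'hair': 'brown',  'class': 'none'},
    {'name': 'John',  'hair': 'brown',  'class': 'none'},
    {'name': 'Emily', 'hair': 'red',    'class': 'sunburn'},
]
blonde = [p for p in people if p['hair'] == 'blonde']
print(entropy(blonde, 'class'))                                  # Info(T1) = 1.0
print(entropy(people, 'class') - gain(people, 'hair', 'class'))  # Info('hair', T) = 0.5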
Example (3): the resulting decision tree. The root tests Hair color (blonde, red, brown); the blonde branch leads to a further test on Lotion (yes, no), while the red and brown branches lead directly to leaves. The leaves are labeled sunburn or none.
Split Ratio • GainRatio(D,T) = Gain(D,T) / SplitInfo(D,T) where SplitInfo(D,T) is the information generated by splitting T into the partitions induced by the values of D, i.e. the entropy of the partition itself: SplitInfo(D,T) = -Σi (|Ti|/|T|)*log2(|Ti|/|T|)
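As a sketch, SplitInfo and the gain ratio can be computed from the sizes of the partitions alone; the function names are illustrative.

from math import log2

def split_info(partition_sizes):
    """Entropy of the partition itself: -sum |Ti|/|T| * log2(|Ti|/|T|)."""
    total = sum(partition_sizes)
    return -sum(n / total * log2(n / total) for n in partition_sizes if n)

def gain_ratio(gain_value, partition_sizes):
    """Information gain normalized by the split information."""
    return gain_value / split_info(partition_sizes)

# e.g. for the 4/3/1 split on 'hair' above:
# gain_ratio(gain(people, 'hair', 'class'), [4, 3, 1])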
Split Ratio Tree: the tree obtained with the gain ratio criterion is rooted at Lotion (no, yes); one branch leads directly to a leaf, while the other is further split on Color (blonde, red, brown). The leaves are labeled sunburn or none.