IFT6255: Information Retrieval
A synthesis, analysis and comparison of text classification algorithms
Ligen Wang, Jing Bai
Overview • Definition of text classification • Important processes in classification • Classification algorithms • Advantages and disadvantages of algorithms • Performance comparison of algorithms • Conclusion
Text Classification • Text classification (text categorization): assign documents to one or more predefined categories (classes) [Diagram: documents assigned to class1, class2, …, classn]
Illustration of Text Classification [Figure: example documents grouped into the classes Science, Sport and Art]
Applications of Text Classification • Organize web pages into hierarchies • Domain-specific information extraction • Sort email into different folders • Find interests of users • Etc.
Text Classification Framework • Documents → Preprocessing → Indexing → Feature selection → Applying classification algorithms → Performance measure
Preprocessing • Preprocessing: transform documents into a suitable representation for the classification task • Remove HTML or other tags • Remove stopwords • Perform word stemming (remove suffixes)
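To make these steps concrete, here is a minimal preprocessing sketch in Python. The helper name, the tiny stopword list, and the choice of NLTK's PorterStemmer are illustrative assumptions, not something the slides prescribe:

```python
import re
from nltk.stem import PorterStemmer  # NLTK's Porter stemmer; needs `pip install nltk`

# Tiny illustrative stopword list; a real system would use a full list.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "are", "it"}

def preprocess(raw_html: str) -> list[str]:
    text = re.sub(r"<[^>]+>", " ", raw_html)      # remove HTML or other tags
    tokens = re.findall(r"[a-z]+", text.lower())  # lowercase and tokenize on letters
    stemmer = PorterStemmer()
    return [stemmer.stem(t) for t in tokens if t not in STOPWORDS]

print(preprocess("<p>The classifiers are classifying documents</p>"))
```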
Indexing • Indexing by different weighting schemes: • Boolean weighting • Word frequency weighting • tf*idf weighting • ltc weighting • Entropy weighting
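As an illustration of the tf*idf scheme (one common variant; the slides do not fix the exact formula), a small sketch that weights each term by term frequency times inverse document frequency:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """docs: list of token lists; returns one {term: weight} dict per document."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document frequency
    vectors = []
    for doc in docs:
        tf = Counter(doc)                                    # raw term frequency
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

docs = [["sport", "game", "team"], ["art", "paint"], ["sport", "art"]]
for vec in tfidf_vectors(docs):
    print(vec)
```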
Feature Selection • Feature selection: remove non-informative terms from documents => improves classification effectiveness => reduces computational complexity
Different Feature Selection Methods • Document Frequency Thresholding (DF) • Information Gain (IG) • χ² statistic (CHI) • Mutual Information (MI) • Term Strength (TS)
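The χ² score, for example, can be computed from a 2x2 term/class contingency table. A sketch using the standard formulation (variable names are mine, not the slides'):

```python
def chi_square(A, B, C, D):
    """A: docs in the class containing the term, B: docs outside the class containing it,
    C: docs in the class without it, D: docs outside the class without it."""
    N = A + B + C + D
    denom = (A + C) * (B + D) * (A + B) * (C + D)
    return N * (A * D - C * B) ** 2 / denom if denom else 0.0

# e.g. a term appearing in 40 of 100 in-class docs but only 10 of 900 others
print(chi_square(A=40, B=10, C=60, D=890))
```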
Classification Algorithms • Rocchio’s algorithm • K-Nearest-Neighbor algorithm (KNN) • Decision Tree algorithm (DT) • Naive Bayes algorithm (NB) • Artificial Neural Network (ANN) • Support Vector Machine (SVM) • Voting algorithms
Rocchio’s Algorithm • Build a prototype vector for each class (prototype vector: average vector over all training document vectors that belong to class ci) • Calculate similarity between the test document and each prototype vector • Assign the test document to the class with maximum similarity
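A minimal Rocchio sketch, assuming documents are already represented as tf*idf dictionaries; the α/β weighted form with negative examples is omitted:

```python
import math
from collections import defaultdict

def prototype(class_vectors):
    """Average the training vectors of one class into a prototype vector."""
    proto = defaultdict(float)
    for vec in class_vectors:
        for term, w in vec.items():
            proto[term] += w / len(class_vectors)
    return proto

def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    norm_u = math.sqrt(sum(w * w for w in u.values()))
    norm_v = math.sqrt(sum(w * w for w in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def rocchio_classify(doc_vec, prototypes):
    """prototypes: {class_name: prototype vector}; pick the most similar class."""
    return max(prototypes, key=lambda c: cosine(doc_vec, prototypes[c]))
```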
Analysis of Rocchio’s Algorithm • Advantages: • Easy to implement • Very fast learner • Relevance feedback mechanism • Disadvantages: • Low classification accuracy • Linear combination too simple for classification • The weighting constants α and β are set empirically
K-Nearest-Neighbor Algorithm • Principle: points (documents) that are close in the space belong to the same class
K-Nearest-Neighbor Algorithm • Calculate similarity between the test document and each training document • Select the k nearest neighbors of the test document among the training examples • Assign the test document to the class that contains most of the neighbors
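A matching kNN sketch, reusing the cosine() helper from the Rocchio sketch above (the default k = 5 is an arbitrary choice):

```python
from collections import Counter

def knn_classify(doc_vec, training, k=5):
    """training: list of (vector, label) pairs; vote among the k most similar."""
    # Sort training documents by similarity to the test document, keep the top k.
    neighbors = sorted(training, key=lambda pair: cosine(doc_vec, pair[0]),
                       reverse=True)[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]  # majority class among the neighbors
```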
Analysis of KNN Algorithm • Advantages: • Effective • Non-parametric • More local characteristics of document are considered comparing with Rocchio • Disadvantages: • Classification time is long • Difficult to find optimal value of k
Decision Tree Algorithm • Decision tree associated with documents: • Root node contains all documents • Each internal node is a subset of documents separated according to one attribute • Each arc is labeled with a predicate which can be applied to the attribute at the parent • Each leaf node is labeled with a class
Decision Tree Algorithm • Recursive partition procedure from the root node • The set of documents is separated into subsets according to an attribute • Use the most discriminative attribute first • Pruning to deal with overfitting
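For illustration, a quick sketch with scikit-learn's DecisionTreeClassifier on bag-of-words features; the library choice and toy data are mine, not the slides':

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.tree import DecisionTreeClassifier

train_texts = ["the team won the game", "a fast match and a great team",
               "the gallery shows new paintings", "an exhibition of modern art"]
train_labels = ["sport", "sport", "art", "art"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(train_texts)
# max_depth bounds the recursive partitioning, a crude stand-in for pruning.
clf = DecisionTreeClassifier(max_depth=5).fit(X, train_labels)
print(clf.predict(vectorizer.transform(["a painting of the match"])))
```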
Analysis of Decision Tree Algorithm • Advantages: • Easy to understand • Easy to generate rules • Reduces problem complexity • Disadvantages: • Training time is relatively expensive • A document is only connected with one branch • Once a mistake is made at a higher level, the entire subtree below it is wrong • Does not handle continuous variables well • May suffer from overfitting
Naïve Bayes Algorithm • Estimate the probability of each class for a document: • Compute the posterior probability (Bayes rule) • Assumption of word independence
Naïve Bayes Algorithm • Class prior estimated from training data: $P(c_i) = N_i / N$, where $N_i$ is the number of training documents in class $c_i$ and $N$ the total number of training documents • Document likelihood under the word-independence assumption: $P(d_j \mid c_i) = \prod_k P(w_k \mid c_i)$, where the $w_k$ are the words of document $d_j$
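A minimal multinomial Naive Bayes sketch implementing the two estimates above in log space; the Laplace smoothing is my addition to avoid zero probabilities:

```python
import math
from collections import Counter, defaultdict

def train_nb(docs, labels):
    """docs: list of token lists; labels: parallel list of class names."""
    class_counts = Counter(labels)
    word_counts = defaultdict(Counter)
    for doc, label in zip(docs, labels):
        word_counts[label].update(doc)
    vocab = {w for counts in word_counts.values() for w in counts}
    return class_counts, word_counts, vocab

def classify_nb(doc, class_counts, word_counts, vocab):
    n = sum(class_counts.values())
    best, best_score = None, float("-inf")
    for c in class_counts:
        total = sum(word_counts[c].values())
        score = math.log(class_counts[c] / n)      # log P(c_i)
        for w in doc:                              # sum of log P(w_k | c_i), smoothed
            score += math.log((word_counts[c][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = c, score
    return best
```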
Analysis of Naïve Bayes Algorithm • Advantages: • Works well on numeric and textual data • Easy to implement and computationally cheap compared with other algorithms • Disadvantages: • The conditional independence assumption is violated by real-world data; performs very poorly when features are highly correlated • Does not consider frequency of word occurrences
Basic Neuron Model In A Feedforward Network • Inputs xi arrive through pre-synaptic connections • Synaptic efficacy is modeled using real weights wi • The response of the neuron is a nonlinear function f of its weighted inputs
Inputs To Neurons • Arise from other neurons or from outside the network • Nodes whose inputs arise outside the network are called input nodes and simply copy values • An input may excite or inhibit the response of the neuron to which it is applied, depending upon the weight of the connection
Weights • Represent synaptic efficacy and may be excitatory or inhibitory • Normally, positive weights are considered as excitatory while negative weights are thought of as inhibitory • Learning is the process of modifying the weights in order to produce a network that performs some function
Output • The response function is normally nonlinear • Examples include: • Sigmoid • Piecewise linear
Backpropagation Preparation • Training Set: a collection of input-output patterns that are used to train the network • Testing Set: a collection of input-output patterns that are used to assess network performance • Learning Rate η: a scalar parameter, analogous to step size in numerical integration, used to set the rate of adjustments
Network Error • Total-Sum-Squared-Error (TSSE): $\mathrm{TSSE} = \sum_p \sum_j (T_{pj} - O_{pj})^2$ • Root-Mean-Squared-Error (RMSE): $\mathrm{RMSE} = \sqrt{\mathrm{TSSE} / (P \cdot J)}$, where $P$ is the number of patterns and $J$ the number of output neurons
A Pseudo-Code Algorithm • Randomly choose the initial weights • While error is too large • For each training pattern • Apply the inputs to the network • Calculate the output for every neuron from the input layer, through the hidden layer(s), to the output layer • Calculate the error at the outputs • Use the output error to compute error signals for pre-output layers • Use the error signals to compute weight adjustments • Apply the weight adjustments • Periodically evaluate the network performance
Apply Inputs From A Pattern • Apply the value of each input parameter to each input node • Input nodes compute only the identity function
Calculate Outputs For Each Neuron Based On The Pattern • The output from neuron j for pattern p is $O_{pj} = f\left(\sum_k W_{jk}\, O_{pk}\right)$, where f is the nonlinear response function, k ranges over the input indices and $W_{jk}$ is the weight on the connection from input k to neuron j
Calculate The Error Signal For Each Output Neuron • The output neuron error signal $\delta_{pj}$ is given by $\delta_{pj} = (T_{pj} - O_{pj})\, O_{pj}\, (1 - O_{pj})$ • $T_{pj}$ is the target value of output neuron j for pattern p • $O_{pj}$ is the actual output value of output neuron j for pattern p
Calculate The Error Signal For Each Hidden Neuron • The hidden neuron error signal $\delta_{pj}$ is given by $\delta_{pj} = O_{pj}\,(1 - O_{pj}) \sum_k \delta_{pk}\, W_{kj}$, where $\delta_{pk}$ is the error signal of a post-synaptic neuron k and $W_{kj}$ is the weight of the connection from hidden neuron j to the post-synaptic neuron k
Calculate And Apply Weight Adjustments • Compute weight adjustments $\Delta W_{ji}$ by $\Delta W_{ji} = \eta\, \delta_{pj}\, O_{pi}$ • Apply weight adjustments according to $W_{ji} \leftarrow W_{ji} + \Delta W_{ji}$
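Putting the four steps together, a compact NumPy sketch of one backpropagation update for a single-hidden-layer network; the layer sizes, learning rate and sigmoid choice are illustrative, not from the slides:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(4, 3))   # input(3) -> hidden(4) weights
W2 = rng.normal(scale=0.1, size=(2, 4))   # hidden(4) -> output(2) weights
eta = 0.5                                 # learning rate

x = np.array([0.2, 0.7, 0.1])             # one input pattern
t = np.array([1.0, 0.0])                  # its target outputs

h = sigmoid(W1 @ x)                       # hidden outputs O_pj = f(sum_k W_jk O_pk)
o = sigmoid(W2 @ h)                       # output values
delta_o = (t - o) * o * (1 - o)           # output error signals (T - O) O (1 - O)
delta_h = h * (1 - h) * (W2.T @ delta_o)  # hidden error signals
W2 += eta * np.outer(delta_o, h)          # Delta W_ji = eta * delta_pj * O_pi
W1 += eta * np.outer(delta_h, x)
```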
Analysis of ANN Algorithm • Advantages: • Produces good results in complex domains • Suitable for both discrete and continuous data (especially better for the continuous domain) • Testing is very fast • Disadvantages: • Training is relatively slow • Learned results are more difficult for users to interpret than learned rules (compared with DT) • Empirical Risk Minimization (ERM) makes ANN minimize training error, which may lead to overfitting
Support Vector Machines [Figure: a nonlinear map takes points x in input space X to f(x) in feature space F] • Main idea of SVMs: find the linear separating hyperplane that maximizes the margin, i.e., the optimal separating hyperplane (OSH) • Nonlinearly separable case: kernel function and Hilbert space
SVM classification • Maximizing the margin is equivalent to minimizing $\frac{1}{2}\|w\|^2$ subject to $y_i(w \cdot x_i + b) \ge 1$ for all training points $(x_i, y_i)$ • Introducing Lagrange multipliers $\alpha_i \ge 0$, the Lagrangian is $L = \frac{1}{2}\|w\|^2 - \sum_i \alpha_i \left[ y_i (w \cdot x_i + b) - 1 \right]$ • Dual problem: maximize $\sum_i \alpha_i - \frac{1}{2}\sum_{i,j} \alpha_i \alpha_j y_i y_j (x_i \cdot x_j)$ subject to $\alpha_i \ge 0$ and $\sum_i \alpha_i y_i = 0$ • The solution is given by $w = \sum_i \alpha_i y_i x_i$ • The problem of classifying a new data point x is now simply solved by looking at the sign of $f(x) = \sum_i \alpha_i y_i (x_i \cdot x) + b$
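In practice one rarely solves the dual by hand. A short sketch with scikit-learn's SVC on tf*idf features; the library and toy data are my choices, as the slides derive only the math:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import SVC

texts = ["fast game and a great team", "the team won the match",
         "new exhibition of modern paintings", "a gallery of abstract art"]
labels = ["sport", "sport", "art", "art"]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(texts)
# A linear kernel is a common choice for high-dimensional text data.
clf = SVC(kernel="linear", C=1.0).fit(X, labels)
print(clf.predict(vectorizer.transform(["the team played a game"])))
```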
Analysis of SVM Algorithm • Advantages: • Compared with ANN, SVMs capture the inherent characteristics of the data better • Embeds the Structural Risk Minimization (SRM) principle, which minimizes the upper bound on the generalization error (better than the Empirical Risk Minimization principle) • Ability to learn can be independent of the dimensionality of the feature space • Finds a global minimum rather than local minima • Disadvantages: • Parameter tuning • Kernel selection
Voting Algorithms • Principle: use multiple sources of evidence (multiple weak classifiers => a single good classifier) • Generate some base classifiers • Combine them to make the final decision
Bagging Algorithm • Use multiple versions of a training set D of size N, each created by resampling N examples from D with replacement (bootstrap) • Each of these data sets is used to train a base classifier; the final classification decision is made by majority voting of these classifiers
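A minimal bagging sketch; train_fn and predict_fn are hypothetical placeholders standing in for any base classifier's training and prediction routines:

```python
import random
from collections import Counter

def bagging_train(data, train_fn, n_classifiers=10):
    """data: list of (features, label) examples; returns one model per bootstrap sample."""
    models = []
    for _ in range(n_classifiers):
        # Resample N examples from the data with replacement (bootstrap).
        sample = [random.choice(data) for _ in range(len(data))]
        models.append(train_fn(sample))
    return models

def bagging_predict(models, predict_fn, x):
    votes = Counter(predict_fn(m, x) for m in models)
    return votes.most_common(1)[0][0]   # majority vote of the base classifiers
```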
Adaboost • Main idea: • Maintain a distribution or set of weights over the training set. Initially, all weights are set equally, but in each iteration the weights of incorrectly classified examples are increased so that the base classifier is forced to focus on the ‘hard’ examples in the training set. The weights of correctly classified examples are decreased so that they are less important in the next iteration. • Why ensembles can improve performance: • Uncorrelated errors made by the individual classifiers can be removed by voting. • Our hypothesis space H may not contain the true function f. Instead, H may include several equally good approximations to f. By taking weighted combinations of these approximations, we may be able to represent classifiers that lie outside of H.
Adaboost algorithm • Given: m examples $(x_1, y_1), \ldots, (x_m, y_m)$ where $y_i \in \{-1, +1\}$ for all i = 1…m • Initialize $D_1(i) = 1/m$ • For t = 1,…,T: • Train base classifier using distribution $D_t$ • Get a hypothesis $h_t$ with error $\epsilon_t = \Pr_{i \sim D_t}[h_t(x_i) \ne y_i]$ • Choose $\alpha_t = \frac{1}{2} \ln\left(\frac{1 - \epsilon_t}{\epsilon_t}\right)$ • Update: $D_{t+1}(i) = \frac{D_t(i)\, \exp(-\alpha_t\, y_i\, h_t(x_i))}{Z_t}$ where $Z_t$ is a normalization factor (chosen so that $D_{t+1}$ will be a distribution) • Output the final hypothesis: $H(x) = \operatorname{sign}\left(\sum_{t=1}^{T} \alpha_t h_t(x)\right)$
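A direct sketch of the algorithm above for labels in {-1, +1}; base_learner is a hypothetical placeholder that trains on the weighted data and returns a hypothesis h(x):

```python
import math

def adaboost(X, y, base_learner, T=10):
    m = len(X)
    D = [1.0 / m] * m                      # initialize D_1(i) = 1/m
    hypotheses, alphas = [], []
    for _ in range(T):
        h = base_learner(X, y, D)          # train base classifier under D_t
        eps = sum(D[i] for i in range(m) if h(X[i]) != y[i])
        if eps == 0 or eps >= 0.5:         # perfect or no better than chance: stop
            break
        alpha = 0.5 * math.log((1 - eps) / eps)
        D = [D[i] * math.exp(-alpha * y[i] * h(X[i])) for i in range(m)]
        Z = sum(D)
        D = [d / Z for d in D]             # renormalize so D_{t+1} is a distribution
        hypotheses.append(h)
        alphas.append(alpha)
    # Final hypothesis: sign of the alpha-weighted vote.
    return lambda x: 1 if sum(a * h(x) for a, h in zip(alphas, hypotheses)) >= 0 else -1
```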
Analysis of Voting Algorithms • Advantages: • Surprisingly effective • Robust to noise • Decrease the overfitting effect • Disadvantages: • Require more computation and memory
Performance Measure • Performance of an algorithm: • Training time • Testing time • Classification accuracy • Precision, Recall • Micro-average / Macro-average • Breakeven point: precision = recall • Goal: high classification quality and computational efficiency
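A small sketch showing how micro- and macro-averaged precision differ; the per-class tp/fp/fn counts are made-up illustrative numbers taken from a hypothetical confusion matrix:

```python
def precision(tp, fp):
    return tp / (tp + fp) if tp + fp else 0.0

def recall(tp, fn):
    return tp / (tp + fn) if tp + fn else 0.0

per_class = {"sport": (80, 20, 10), "art": (30, 5, 25)}  # class: (tp, fp, fn)

# Macro-average: compute precision per class, then average over classes.
macro_p = sum(precision(tp, fp) for tp, fp, _ in per_class.values()) / len(per_class)
# Micro-average: pool decisions across classes, then compute precision once.
tp_all = sum(tp for tp, _, _ in per_class.values())
fp_all = sum(fp for _, fp, _ in per_class.values())
micro_p = precision(tp_all, fp_all)
print(macro_p, micro_p)
```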
Comparison Based on Six Classifiers • Classification accuracy: six classifiers (Reuters-21578 collection)
Analysis of Results • SVM, Voting and KNN showed good performance • DT, NB and Rocchio showed relatively poor performance
Comparison Based on Feature Selection • Classification accuracy: NB vs. KNN vs. SVM (Reuters collection)
Analysis of Results • Accuracy improves as the number of features increases, up to a point • At approximately 500-1000 features, accuracy reaches its peak and begins to decline • SVM obtains the best performance
Comparison Based on Training Time (1) • Training time: SVM vs. NB (# features = 100):
Comparison Based on Training Time (2) • Training time: SVM vs. NB (# of features increasing):