Document Classification using the Natural Language Toolkit
Ben Healey | http://benhealey.info | @BenHealey
http://upload.wikimedia.org/wikipedia/commons/b/b6/FileStack_retouched.jpg
The Need for Automation
Take ur pick! http://upload.wikimedia.org/wikipedia/commons/d/d6/Cat_loves_sweets.jpg
The Development Set • Each document has features (# words, % ALLCAPS, unigrams, sender, and so on) and a known class • Development set → classification algo → trained classifier (model) • New document (class unknown) → document features → trained classifier → classified document
Relevant NLTK Modules • Feature Extraction • from nltk.corpus import words, stopwords • from nltk.stem import PorterStemmer • from nltk.tokenize import WordPunctTokenizer • from nltk.collocations import BigramCollocationFinder • from nltk.metrics import BigramAssocMeasures • See http://text-processing.com/demo/ for examples • Machine Learning Algos and Tools • from nltk.classify import NaiveBayesClassifier • from nltk.classify import DecisionTreeClassifier • from nltk.classify import MaxentClassifier • from nltk.classify import WekaClassifier • from nltk.classify.util import accuracy
NaiveBayesClassifier http://61.153.44.88/nltk/0.9.5/api/nltk.classify.naivebayes-module.html
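The idea behind NaiveBayesClassifier can be shown without NLTK itself: pick the label maximizing log P(label) plus the sum of log P(feature | label) over the document's feature dict. A minimal sketch, assuming boolean feature dicts like those used later in the talk, with add-one smoothing (NLTK's actual estimator differs in detail):

```python
from collections import defaultdict
import math

def train_nb(labeled_featuresets):
    """Count label priors and per-label feature occurrences
    from (feature_dict, label) pairs."""
    label_counts = defaultdict(int)
    feat_counts = defaultdict(lambda: defaultdict(int))
    for features, label in labeled_featuresets:
        label_counts[label] += 1
        for name, value in features.items():
            feat_counts[label][(name, value)] += 1
    return label_counts, feat_counts

def classify_nb(model, features):
    """Return the label with the highest posterior log-probability."""
    label_counts, feat_counts = model
    total = sum(label_counts.values())
    best_label, best_score = None, float('-inf')
    for label, count in label_counts.items():
        # log P(label) + sum of log P(feature | label), add-one smoothed
        score = math.log(count / total)
        for name, value in features.items():
            seen = feat_counts[label].get((name, value), 0)
            score += math.log((seen + 1) / (count + 2))
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

With NLTK installed, `NaiveBayesClassifier.train(train_set)` plays the role of `train_nb` and `classifier.classify(features)` the role of `classify_nb`.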
http://www.educationnews.org/commentaries/opinions_on_education/91117.html
517,431 Emails Source: IStockPhoto
Prep: Extract and Load • Sample* of 20,581 plaintext files • import MySQLdb, os, random, string • MySQL via Python ODBC interface • File and string manipulation • Key fields separated out • To, From, CC, Subject, Body * Folders for 7 users with large volumes of email, so not representative!
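Separating out the key fields can be done with the standard-library email module rather than hand-rolled string manipulation. A minimal sketch (the dict keys are hypothetical, not the talk's schema):

```python
from email.parser import Parser

def split_email_fields(raw_text):
    """Pull the key header fields and body out of one plaintext email file."""
    msg = Parser().parsestr(raw_text)
    return {
        'msg_to': msg.get('To', ''),
        'msg_from': msg.get('From', ''),
        'msg_cc': msg.get('Cc', ''),
        'msg_subject': msg.get('Subject', ''),
        'msg_body': msg.get_payload(),  # everything after the blank line
    }
```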
Prep: Extract and Load • Allocation of random number • Some feature extraction • #To, #CCd, #Words, %digits, %CAPS • Note: more cleaning could be done • Code at benhealey.info
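The load-time features (#To, #CCd, #Words, %digits, %CAPS) are simple counts over the raw text. A sketch, assuming hypothetical record field names rather than the talk's actual schema:

```python
def summary_features(record):
    """Compute load-time summary counts for one email record."""
    body = record['msg_body']
    words = body.split()
    letters = [c for c in body if c.isalpha()]
    return {
        # comma-separated address lists in the headers
        'num_to': len([a for a in record['msg_to'].split(',') if a.strip()]),
        'num_cc': len([a for a in record['msg_cc'].split(',') if a.strip()]),
        'num_words_in_body': len(words),
        # share of digit characters in the whole body
        'pct_digits': 100.0 * sum(c.isdigit() for c in body) / max(len(body), 1),
        # share of upper-case characters among letters only
        'pct_caps': 100.0 * sum(c.isupper() for c in letters) / max(len(letters), 1),
    }
```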
From: james.steffes@enron.com To: louise.kitchen@enron.com Subject: Re: Agenda for FERC Meeting RE: EOL Louise -- We had decided that not having Mark in the room gave us the ability to wiggle if questions on CFTC vs. FERC regulation arose. As you can imagine, FERC is starting to grapple with the issue that financial trades in energy commodities is regulated under the CEA, not the Federal Power Act or the Natural Gas Act. Thanks, Jim
From: pete.davis@enron.com To: pete.davis@enron.com Subject: Start Date: 1/11/02; HourAhead hour: 5; Start Date: 1/11/02; HourAhead hour: 5; No ancillary schedules awarded. No variances detected. LOG MESSAGES: PARSING FILE -->> O:\Portland\WestDesk\California Scheduling\ISO Final Schedules\2002011105.txt
Prep: Show us ur Features • NLTK toolset • from nltk.corpus import words, stopwords • from nltk.stem import PorterStemmer • from nltk.tokenize import WordPunctTokenizer • from nltk.collocations import BigramCollocationFinder • from nltk.metrics import BigramAssocMeasures • Custom code • def extract_features(record,stemmer,stopset,tokenizer): … • Code at benhealey.info
Prep: Show us ur Features • Features in boolean or nominal form

if record['num_words_in_body']<=20:
    features['message_length']='Very Short'
elif record['num_words_in_body']<=80:
    features['message_length']='Short'
elif record['num_words_in_body']<=300:
    features['message_length']='Medium'
else:
    features['message_length']='Long'
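The thresholds above can be wrapped as a small function so the binning is testable on its own (the function name is illustrative, not from the talk's code):

```python
def message_length_feature(num_words):
    """Bin a body word count into the nominal message_length feature."""
    if num_words <= 20:
        return 'Very Short'
    elif num_words <= 80:
        return 'Short'
    elif num_words <= 300:
        return 'Medium'
    return 'Long'
```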
Prep: Show us ur Features • Features in boolean or nominal form

text=record['msg_subject']+" "+record['msg_body']
tokens = tokenizer.tokenize(text)
words = [stemmer.stem(x.lower()) for x in tokens
         if x not in stopset and len(x) > 1]
for word in words:
    features[word]=True
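The same bag-of-words loop can run without NLTK: a regex stands in for WordPunctTokenizer and plain lower-casing stands in for the Porter stemmer (both are assumptions for illustration, not the NLTK calls):

```python
import re

def word_features(subject, body, stopset):
    """Boolean bag-of-words features over subject + body."""
    features = {}
    text = subject + " " + body
    # split into word and punctuation runs, like WordPunctTokenizer
    tokens = re.findall(r"\w+|[^\w\s]+", text)
    # drop stopwords (compared lower-cased) and single characters
    words = [t.lower() for t in tokens
             if t.lower() not in stopset and len(t) > 1]
    for word in words:
        features[word] = True
    return features
```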
Sit. Say. Heel.

random.shuffle(dev_set)
cutoff = len(dev_set)*2/3
train_set = dev_set[:cutoff]
test_set = dev_set[cutoff:]

classifier = NaiveBayesClassifier.train(train_set)
print 'accuracy for >', subject, ':', accuracy(classifier, test_set)
classifier.show_most_informative_features(10)
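The shuffle-and-split step above can be packaged as a reusable function; a sketch with an optional seed for reproducibility (the integer cast is needed on Python 3, where `/` is true division):

```python
import random

def split_dev_set(dev_set, train_fraction=2/3, seed=None):
    """Shuffle the development set and split it into train and test."""
    rng = random.Random(seed)
    shuffled = list(dev_set)  # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cutoff = int(len(shuffled) * train_fraction)
    return shuffled[:cutoff], shuffled[cutoff:]
```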
Performance: ‘IT’ Model IMPORTANT: These are ‘cheat’ scores!
Performance: ‘Deal’ Model IMPORTANT: These are ‘cheat’ scores!
Performance: ‘Social’ Model IMPORTANT: These are ‘cheat’ scores!
Don’t get burned. • Biased samples • Accuracy and rare events • Features and prior knowledge • Good modelling is iterative! • Resampling and robustness • Learning cycles http://www.ugo.com/movies/mustafa-in-austin-powers
Resources • NLTK: • www.nltk.org/ • http://www.nltk.org/book • Enron email datasets: • http://www.cs.umass.edu/~ronb/enron_dataset.html • Free online Machine Learning course from Stanford • http://ml-class.com/ (starts in October) • StreamHacker blog by Jacob Perkins • http://streamhacker.com