
eCommerce Technology 20-751 Data Mining



Presentation Transcript


  1. eCommerce Technology 20-751 Data Mining

  2. Coping with Information • Computerization of daily life produces data • Point-of-sale, Internet shopping (& browsing), credit cards, banks . . . • Info on credit cards, purchase patterns, product preferences, payment history, sites visited . . . • Travel. One trip by one person generates info on destination, airline preferences, seat selection, hotel, rental car, name, address, restaurant choices . . . • Data cannot be processed or even inspected manually

  3. Data Overload • Only a small portion of data collected is analyzed (estimate: 5%) • Vast quantities of data are collected and stored out of fear that important info will be missed • Data volume grows so fast that old data is never analyzed • Database systems do not support queries like • “Who is likely to buy product X” • “List all reports of problems similar to this one” • “Flag all fraudulent transactions” • But these may be the most important questions!

  4. Data Mining “The key in business is to know something that nobody else knows.” — Aristotle Onassis “To understand is to perceive patterns.” — Sir Isaiah Berlin

  5. Data Mining • Extracting previously unknown relationships from large datasets • summarize large data sets • discover trends, relationships, dependencies • make predictions • Differs from traditional statistics • Huge, multidimensional datasets • High proportion of missing/erroneous data • Sampling unimportant; work with whole population • Sometimes called • KDD (Knowledge Discovery in Databases) • OLAP (Online Analytical Processing)

  6. Taxonomy of Data Mining Methods • Predictive Modeling: decision trees (branching criteria), neural networks, naive Bayesian • Database Segmentation: clustering, k-means • Link Analysis: rule association • Text Mining: semantic maps • Deviation Detection • Visualization SOURCE: WELGE & REINCKE, NCSA

  7. Predictive Modeling • Objective: use data about the past to predict future behavior • Sample problems: • Will this (new) customer pay his bill on time? (classification) • What will the Dow-Jones Industrial Average be on October 15? (prediction) • Technique: supervised learning • decision trees • neural networks • naive Bayesian

  8. Predictive Modeling [Figure: two groups of cartoon faces. Honest: Tridas, Vickie, Mike. Crooked: Barney, Wally, Waldo] Which characteristics distinguish the two groups? SOURCE: WELGE & REINCKE, NCSA

  9. Learned Rules in Predictive Modeling [Figure: Vickie, Mike, and Tridas, the honest group] Honest = has round eyes and a smile SOURCE: WELGE & REINCKE, NCSA

  10. Rule Induction Example Data:
      height  hair   eyes   class
      short   blond  blue   A
      tall    blond  brown  B
      tall    red    blue   A
      short   dark   blue   B
      tall    dark   blue   B
      tall    blond  blue   A
      tall    dark   brown  B
      short   blond  brown  B
      Devise a predictive rule to classify a new person as A or B SOURCE: WELGE & REINCKE, NCSA

  11. Build a Decision Tree Split on hair:
      dark  → short/blue = B; tall/blue = B; tall/brown = B (completely classifies dark-haired people)
      red   → tall/blue = A (completely classifies red-haired people)
      blond → short/blue = A; tall/brown = B; tall/blue = A; short/brown = B (does not completely classify blond-haired people; more work is required)
      SOURCE: WELGE & REINCKE, NCSA

  12. Build a Decision Tree Split the blond branch again, on eyes:
      hair = dark → B (short/blue, tall/blue, tall/brown)
      hair = red → A (tall/blue)
      hair = blond, eyes = blue → A (short and tall)
      hair = blond, eyes = brown → B (short and tall)
      The decision tree is complete because 1. All 8 cases appear at the leaves 2. At each leaf, all cases are in the same class (A or B) SOURCE: WELGE & REINCKE, NCSA

  13. Learned Predictive Rules
      IF hair = dark THEN B
      IF hair = red THEN A
      IF hair = blond AND eyes = blue THEN A
      IF hair = blond AND eyes = brown THEN B
      SOURCE: WELGE & REINCKE, NCSA
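
The rule induction in slides 10-13 is easy to sketch in code. Below is a minimal ID3-style learner in Python, written for this transcript; the slides do not name the exact algorithm, so the entropy-based splitting criterion is an assumption, though it does reproduce the tree above from the eight training cases.

```python
import math
from collections import Counter

# The eight training cases from slide 10: (height, hair, eyes, class)
DATA = [
    ("short", "blond", "blue",  "A"),
    ("tall",  "blond", "brown", "B"),
    ("tall",  "red",   "blue",  "A"),
    ("short", "dark",  "blue",  "B"),
    ("tall",  "dark",  "blue",  "B"),
    ("tall",  "blond", "blue",  "A"),
    ("tall",  "dark",  "brown", "B"),
    ("short", "blond", "brown", "B"),
]
ATTRS = ["height", "hair", "eyes"]

def entropy(rows):
    """Shannon entropy of the class labels in a set of rows."""
    counts = Counter(r[-1] for r in rows)
    return -sum(c / len(rows) * math.log2(c / len(rows))
                for c in counts.values())

def partition(rows, attr):
    """Group rows by their value of the given attribute."""
    groups = {}
    for r in rows:
        groups.setdefault(r[ATTRS.index(attr)], []).append(r)
    return groups

def build_tree(rows, attrs):
    """Greedy ID3: split on the attribute that lowers entropy the most."""
    classes = {r[-1] for r in rows}
    if len(classes) == 1:        # pure node: all cases in one class -> leaf
        return classes.pop()
    best = min(attrs, key=lambda a: sum(
        len(g) / len(rows) * entropy(g)
        for g in partition(rows, a).values()))
    return {best: {value: build_tree(group, [a for a in attrs if a != best])
                   for value, group in partition(rows, best).items()}}

print(build_tree(DATA, ATTRS))
# {'hair': {'blond': {'eyes': {'blue': 'A', 'brown': 'B'}},
#           'red': 'A', 'dark': 'B'}}
```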

  14. Decision Trees • Good news: a decision tree can always be built from training data • Any variable can be used at any level of the tree • Bad news: every data point may wind up at its own leaf (the tree has not compressed the data) [Figure: a tree that splits on height first, then eyes, then hair: 8 cases, 7 interior nodes. This tree has not summarized the data effectively]

  15. Database Segmentation (Clustering) • “The art of finding groups in data” Kaufman & Rousseeuw • Objective: gather items from a database into sets according to (unknown) common characteristics • Much more difficult than classification since the classes are not known in advance (no training) • Examples: • Demographic patterns • Topic detection (words about the topic often occur together) • Technique: unsupervised learning

  16. Clustering Example • Are there natural clusters in the data (36,10), (12,8), (38,42), (13,6), (36,38), (16,9), (40,36), (35,19), (37,7), (39,8)?

  17. Clustering • K-means algorithm • To divide a set into K clusters: • Pick K points at random. Use them to divide the set into K clusters based on nearest distance • Loop: • Move each cluster center to the mean of the cluster’s points • Redefine the clusters by nearest distance • If no point changes cluster, done (see the sketch below) • K-means demo • Agglomerative clustering: start with N clusters & repeatedly merge the closest pair • Agglomerative clustering demo
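
A minimal sketch of that K-means loop in plain Python (no libraries assumed), run on the ten points from slide 16 with K = 2:

```python
import random

# The ten points from slide 16
POINTS = [(36, 10), (12, 8), (38, 42), (13, 6), (36, 38),
          (16, 9), (40, 36), (35, 19), (37, 7), (39, 8)]

def dist2(p, q):
    """Squared Euclidean distance (sqrt is not needed for comparisons)."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def kmeans(points, k, seed=0):
    random.seed(seed)
    centers = random.sample(points, k)        # pick K points at random
    while True:
        # Assign each point to the cluster with the nearest center
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: dist2(p, centers[i]))
            clusters[nearest].append(p)
        # Move each center to the mean of its cluster
        new_centers = [(sum(x for x, _ in c) / len(c),
                        sum(y for _, y in c) / len(c)) if c else centers[i]
                       for i, c in enumerate(clusters)]
        if new_centers == centers:            # nothing moved: done
            return clusters
        centers = new_centers

for cluster in kmeans(POINTS, k=2):
    print(cluster)
# The points separate into a low-y group and a high-y (36-42) group
```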

  18. Neural Networks Networks of processing units called neurons. The j-th neuron has n inputs x1, …, xn and n weights w1j, …, wnj; its 1 output yj depends only on the linear function w1j·x1 + … + wnj·xn. Neurons are easy to simulate SOURCE: CONSTRUCTING INTELLIGENT AGENTS WITH JAVA
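
Since the slide’s neuron is purely linear (no activation function is mentioned), simulating one is nearly a one-liner; the function name here is invented:

```python
def neuron_output(inputs, weights):
    """Output yj of the j-th neuron: the linear function w1j*x1 + ... + wnj*xn."""
    return sum(x * w for x, w in zip(inputs, weights))

print(neuron_output([1.0, 0.5, -2.0], [0.3, 0.8, 0.1]))  # 0.5
```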

  19. Neural Networks Neurons are arranged in an input layer, a hidden layer, and an output layer. Inputs: 1 per input-layer neuron. Outputs: 1 per output-layer neuron, one of which is the distinguished output (the “answer”)

  20. Neural Networks Learning through back-propagation 1. Network is trained by giving it many inputs whose output is known 2. The deviation between the network’s output and the known output is “fed back” to the neurons to adjust their weights 3. Network is then ready for live data SOURCE: CONSTRUCTING INTELLIGENT AGENTS WITH JAVA
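
For the single linear neuron above, step 2 (“feed the deviation back”) reduces to the delta rule, a special case of back-propagation. A minimal training loop follows; the learning rate, epoch count, and data are invented for illustration:

```python
def train(samples, n_inputs, rate=0.01, epochs=200):
    """Delta rule: nudge each weight in proportion to the deviation."""
    weights = [0.0] * n_inputs
    for _ in range(epochs):
        for inputs, target in samples:           # step 1: outputs are known
            actual = sum(x * w for x, w in zip(inputs, weights))
            deviation = target - actual          # step 2: the deviation...
            for i, x in enumerate(inputs):
                weights[i] += rate * deviation * x   # ...adjusts the weights
    return weights

# Invented training data generated by target = 2*x1 - 1*x2
samples = [([1, 2], 0), ([2, 1], 3), ([3, 3], 3), ([0, 1], -1)]
print(train(samples, n_inputs=2))  # converges toward [2.0, -1.0]
```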

  21. Neural Network Classification “Which factors determine a pet’s favorite food?” [Figure: a network fed inputs such as Species = Dog, Breed = Mixed, Owner’s age > 45, Owner’s sex = F, predicting food choices such as Chum and Mr. Dog]

  22. Neural Network Demos • Demos: Notre Dame football, automated surveillance, handwriting analyzer • Financial applications: • Churning: are trades being instituted just to generate commissions? • Fraud detection in credit card transactions • Kiting: isolate the float on uncollected funds • Money laundering: detect suspicious money transactions (US Treasury’s Financial Crimes Enforcement Network) • Insurance applications: • Auto insurance: detect groups of people who stage accidents to collect on insurance • Medical insurance: detect professional patients and rings of doctors and references

  23. Rule Association • Try to find rules of the form IF <left-hand-side> THEN <right-hand-side> • (This is the reverse of a rule-based agent, where the rules are given and the agent must act. Here the actions are given and we have to discover the rules!) • Prevalence = probability that LHS and RHS occur together (usually called the “support”) • Predictability = probability of RHS given LHS (usually called the “confidence” or “strength”) • Both measures are easy to compute from a set of market baskets, as sketched below
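
A minimal sketch of both measures, computed over a list of market baskets (the baskets and item names below are invented):

```python
def prevalence(baskets, lhs, rhs):
    """P(LHS and RHS together): fraction of baskets containing both."""
    return sum(1 for b in baskets if lhs in b and rhs in b) / len(baskets)

def predictability(baskets, lhs, rhs):
    """P(RHS given LHS): of the baskets containing LHS, the fraction
    that also contain RHS."""
    with_lhs = [b for b in baskets if lhs in b]
    return sum(1 for b in with_lhs if rhs in b) / len(with_lhs)

baskets = [{"milk", "soda"}, {"milk", "bread"}, {"milk", "soda", "chips"},
           {"bread", "soda"}, {"milk"}]
print(prevalence(baskets, "milk", "soda"))      # 2/5 = 0.4
print(predictability(baskets, "milk", "soda"))  # 2/4 = 0.5
```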

  24. Association Rules from Market Basket Analysis • <Dairy-Milk-Refrigerated> → <Soft Drinks Carbonated> • prevalence = 4.99%, predictability = 22.89% • <Dry Dinners - Pasta> → <Soup-Canned> • prevalence = 0.94%, predictability = 28.14% • <Paper Towels - Jumbo> → <Toilet Tissue> • prevalence = 2.11%, predictability = 38.22% • <Dry Dinners - Pasta> → <Cereal - Ready to Eat> • prevalence = 1.36%, predictability = 41.02% • <American Cheese Slices> → <Cereal - Ready to Eat> • prevalence = 1.16%, predictability = 38.01%

  25. Use of Rule Associations • Coupons, discounts • Don’t give discounts on 2 items that are frequently bought together. Use the discount on 1 to “pull” the other • Product placement • Offer correlated products to the customer at the same time. Increases sales • Timing of cross-marketing • Send camcorder offer to VCR purchasers 2-3 months after VCR purchase • Discovery of patterns • People who bought X, Y and Z (but not any pair) bought W over half the time

  26. Finding Rule Associations • Example: grocery shopping • For each item, count # of occurrences (say out of 100,000 baskets): apples 1891, caviar 3, ice cream 1088, pet food 2451, … • Drop the ones that are below a minimum support level: apples 1891, ice cream 1088, pet food 2451, … • Make a table of each remaining item against each other item, counting how often the pair occurs together [table not reproduced here] • Discard cells below the support threshold. Now make a cube for triples, etc. Add 1 dimension for each additional product on the LHS. (A sketch of the pairwise step follows below)
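
A sketch of the counting-and-pruning step for pairs (the threshold and baskets are invented; this bottom-up pruning is the idea behind Apriori-style algorithms, though the slide does not name one):

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(baskets, min_support=2):
    # Step 1: count single items and drop those below the support level
    item_counts = Counter(item for b in baskets for item in b)
    frequent = {i for i, c in item_counts.items() if c >= min_support}
    # Step 2: the "table of each item against each other item"
    pair_counts = Counter()
    for b in baskets:
        for pair in combinations(sorted(frequent & b), 2):
            pair_counts[pair] += 1
    # Step 3: discard cells below the support threshold
    return {p: c for p, c in pair_counts.items() if c >= min_support}

baskets = [{"apples", "ice cream"}, {"apples", "pet food"},
           {"apples", "ice cream", "caviar"}, {"pet food", "apples"}]
print(frequent_pairs(baskets))
# {('apples', 'ice cream'): 2, ('apples', 'pet food'): 2}
```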

  27. Rule Association Demos • Magnum Opus (RuleQuest, free download) • See5/C5.0 (RuleQuest, free download) • Cubist numerical rule finder (RuleQuest, free download) • IBM Interactive Miner

  28. Text Mining • Objective: discover relationships among people & things from their appearance in text • Topic detection, term detection • When has a new term been seen that is worth recording? • Generation of a “knowledge map”, a graph representing terms/topics and their relationships • SemioMap demo (Semio Corp.) • Phrase extraction • Concept clustering through co-occurrence, not by document • Graphic navigation (a link means the concepts co-occur) • Processing time: 90 minutes per gigabyte • Summary server (inxight.com)

  29. Catalog Mining SOURCE: TUPAI SYSTEMS

  30. Visualization • Objective: produce a graphic view of data so it becomes understandable to humans • Hyperbolic trees • SpotFire (free download from www.spotfire.com) • SeeItIn3D • TableLens • OpenViz

  31. Major Ideas • There’s too much data • We don’t understand what it means • It can be handled without human intervention • Relationships can be discovered automatically

  32. Q & A
