
Learning to Question: Leveraging User Preferences for Shopping Advice


Presentation Transcript


  1. Learning to Question: Leveraging User Preferences for Shopping Advice Authors: Mahashweta Das, Aristides Gionis, Gianmarco De Francisci Morales, Ingmar Weber Presented by: Wei Zhu, EECS, Case Western Reserve University

  2. Introduction • E-commerce vs. traditional shopping: online, users shop without any expert's help • Online shops are very large • Products are updated very quickly • This paper presents a novel recommender system that helps users shop online by leveraging user preferences and the technical attributes of products • The system is called the Shopping Advisor

  3. Outline • Background and inspiration • Problem definition • Method and algorithms • Experiments • Conclusion

  4. Background • Marketing strategy: “Which product should I buy?” • Ask the shopper a few questions, leading to suggested shopping options • Drawbacks • Manually designing a question flowchart is time-consuming • Some technical attributes are not easy for non-experts to understand

  5. Inspiration • Shopping assistant • Ask “Do you intend to use the laptop to play modern games?” instead of “How many GB of RAM do you need?” • User information can map technical attributes to features that non-experts understand • Technical information allows a product to be ranked differently under different usage preferences

  6. Example

  7. Problem Definition • Given a product table P, a review table R, a user table U, and integers h and k, learn a shopping advisor tree T (of height at most h) that provides relevant recommendations • Each internal node of the tree asks a question formed from a user attribute • Each node contains a top-k ranked list of products • A shopper starts from the root and traverses down the tree by answering questions, then receives a top-k list of recommended products (see the sketch below)
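The problem statement above implies a simple tree data structure and traversal routine. The following is a minimal sketch under assumed names (`AdvisorNode`, `recommend`, and `answer_fn` are illustrative, not from the paper):

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class AdvisorNode:
    """One node of a hypothetical Shopping Advisor tree: internal nodes ask a
    yes/no question about a user attribute; every node stores a top-k product list."""
    top_k: List[str] = field(default_factory=list)   # top-k product ids for users reaching this node
    question: Optional[str] = None                    # user attribute asked here; None at a leaf
    yes: Optional["AdvisorNode"] = None
    no: Optional["AdvisorNode"] = None

def recommend(root: AdvisorNode, answer_fn: Callable[[str], bool]) -> List[str]:
    """Traverse from the root, answering questions until a leaf is reached
    (or the user stops), and return the top-k list stored at that node."""
    node = root
    while node.question is not None:
        child = node.yes if answer_fn(node.question) else node.no
        if child is None:        # allow the shopper to stop early
            break
        node = child
    return node.top_k
```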

  8. Example

  9. Method • A general framework for solving the Shopping Advisor problem • Payoff function: used to choose the best question to ask at any node of the Shopping Advisor tree • Rank function: used to determine the ranking of the products recommended to the user

  10. Learning the tree structure • Each node of the tree corresponds to an attribute of the user • Given a user attribute a, the user group Uq at a node can be split into two groups: Ua (users who match attribute a) and U¬a (users who do not match a) • The user group corresponding to each node matches all the attributes on the path from the root to that node • A sketch of this recursive construction follows
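A hedged sketch of the greedy, top-down construction implied by this slide; the binary-tag user representation and the helper names (`split_users`, `build_tree`, `payoff`, `make_node`) are assumptions for illustration:

```python
from typing import Callable, Dict, List, Set

def split_users(users: List[Dict[str, int]], attr: str):
    """Split a user group on attribute `attr`: one group matches it, the other does not.
    Assumption: each user is a dict of binary tags, e.g. {"travel": 1, "gaming": 0}."""
    u_a = [u for u in users if u.get(attr, 0) == 1]
    u_not_a = [u for u in users if u.get(attr, 0) != 1]
    return u_a, u_not_a

def build_tree(users, attrs: Set[str], depth: int, max_depth: int,
               payoff: Callable, make_node: Callable):
    """Greedy construction: pick the attribute with the highest payoff, split the
    user group, and recurse until max_depth or until no useful split remains.
    `make_node` is assumed to return a node object (e.g. an AdvisorNode from the
    earlier sketch) with the top-k ranking for this user group already attached."""
    node = make_node(users)
    if depth >= max_depth or not attrs:
        return node
    best = max(attrs, key=lambda a: payoff(users, a))
    u_a, u_not_a = split_users(users, best)
    if not u_a or not u_not_a:         # degenerate split: keep this node as a leaf
        return node
    node.question = best
    node.yes = build_tree(u_a, attrs - {best}, depth + 1, max_depth, payoff, make_node)
    node.no = build_tree(u_not_a, attrs - {best}, depth + 1, max_depth, payoff, make_node)
    return node
```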

  11. Payoff function • Determines which attribute to split Uq on at node q • The idea is to split Uq into two subgroups whose users have similar preferences and rank the products in a similar way (an illustrative sketch follows)
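The paper's exact payoff definition is not reproduced in this transcript; the sketch below uses an illustrative proxy in the same spirit: a size-weighted measure of how consistently users inside each candidate subgroup rank the products. The names `ranking_agreement` and `split_payoff` are hypothetical, and the function scores a pair of subgroups directly.

```python
from itertools import combinations
from typing import Dict, List

def ranking_agreement(rankings: List[Dict[str, float]]) -> float:
    """Average pairwise agreement between users' product rankings, where each
    ranking is a dict product_id -> score (e.g. the user's review ratings)."""
    user_pairs = list(combinations(rankings, 2))
    if not user_pairs:
        return 1.0
    total = 0.0
    for r1, r2 in user_pairs:
        common = [p for p in r1 if p in r2]
        concordant, comparable = 0, 0
        for a, b in combinations(common, 2):
            if r1[a] == r1[b] or r2[a] == r2[b]:
                continue                      # ties carry no ordering information
            comparable += 1
            if (r1[a] > r1[b]) == (r2[a] > r2[b]):
                concordant += 1
        total += concordant / comparable if comparable else 1.0
    return total / len(user_pairs)

def split_payoff(rankings_a, rankings_not_a) -> float:
    """Illustrative payoff of a candidate split: high when both subgroups
    internally rank products consistently, weighted by subgroup size."""
    n_a, n_b = len(rankings_a), len(rankings_not_a)
    n = n_a + n_b or 1
    return (n_a / n) * ranking_agreement(rankings_a) + \
           (n_b / n) * ranking_agreement(rankings_not_a)
```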

  12. Learning product rankings • Rank the products in P at a given node q • Input: the user group Uq, the product table P, and the review table Rq corresponding to Uq • The goal is to learn a weight vector w = (w1, …, wmP), one weight per product attribute • RankSVM method • Optimizing the objective function (shown below)
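The objective on the original slide was an image and did not survive extraction; the standard RankSVM formulation, which the slide presumably refers to, is given below, with S_q the set of product pairs (i, j) such that users in Uq prefer product i over product j, and x_i the attribute vector of product i:

```latex
\min_{\mathbf{w},\,\xi}\; \frac{1}{2}\|\mathbf{w}\|^2 + C \sum_{(i,j)\in S_q} \xi_{ij}
\quad \text{s.t.}\quad
\mathbf{w}^{\top}(\mathbf{x}_i - \mathbf{x}_j) \ge 1 - \xi_{ij},\;\; \xi_{ij} \ge 0 \;\; \forall (i,j)\in S_q
```

The learned weight vector w then ranks the products at node q by the score w·x.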

  13. Evaluation function • Measures the number of correctly-ranked pairs in the ranking generated for the products Pq at node q • Minimizing the number of inversions is the most common way to optimize a pairwise learning-to-rank function, and it matches the pairwise formulation used by RankSVM (a small sketch follows)
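A minimal sketch of this measure, assuming the learned scores w·x and the ground-truth scores are available as parallel lists (the function name is illustrative):

```python
from itertools import combinations
from typing import Sequence

def pairwise_accuracy(predicted: Sequence[float], truth: Sequence[float]) -> float:
    """Fraction of product pairs whose relative order under the predicted scores
    agrees with the ground-truth order; 1.0 means no inversions."""
    correct, comparable = 0, 0
    for i, j in combinations(range(len(truth)), 2):
        if truth[i] == truth[j]:
            continue                 # ties carry no ordering information
        comparable += 1
        if (predicted[i] > predicted[j]) == (truth[i] > truth[j]):
            correct += 1
    return correct / comparable if comparable else 1.0
```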

  14. Experiments • Evaluation with both real and synthetic data • Comparison of the Shopping Advisor system against a baseline system that does not leverage user preferences • Performance evaluation • Example trees

  15. Datasets • Cars: extracted from Yahoo! Autos • 2180 users with 15 tags • 606 products with 60 attributes • 2180 reviews • Cameras: extracted from Flickr • 5647 users with 25 attributes (tag topics) • 654 cameras (attributes from CNET) • 11468 reviews • Synthetic • 1000 users with 20 tags • 200 products with 20 attributes • 4000 reviews

  16. Experiment setup • Ten-fold cross-validation • Each dataset is partitioned into training and test sets • Mean reciprocal rank (MRR): measures how far from the top of the list the relevant item is • ranki is the position of the i-th test user's relevant item (the standard formula is given below)
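The standard definition of MRR, matching the description above:

```latex
\mathrm{MRR} = \frac{1}{|U_{\text{test}}|} \sum_{i=1}^{|U_{\text{test}}|} \frac{1}{\mathrm{rank}_i}
```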

  17. Quality evaluation • Baseline: RankSVM (the ranked list returned at the root of the Shopping Advisor tree) • The approach is also useful to existing recommendation techniques: • k-NN: returns a ranked list of items by aggregating the item lists of the top neighboring users • SA.k-NN: uses the Shopping Advisor to select features and weights, then performs k-NN (sketched below)
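A hedged sketch of the k-NN aggregation described above; the weighted cosine similarity and the way SA-selected weights are plugged in are assumptions for illustration, not the paper's exact procedure:

```python
import math
from collections import defaultdict
from typing import Dict, List, Optional

def knn_recommend(target: Dict[str, float], users: Dict[str, Dict[str, float]],
                  user_items: Dict[str, Dict[str, float]], k: int = 10,
                  weights: Optional[Dict[str, float]] = None) -> List[str]:
    """Rank items by aggregating the item lists of the target user's top-k neighbors.
    For the SA.k-NN variant, `weights` reweights the (SA-selected) user attributes
    when computing similarity; plain k-NN uses uniform weights."""
    def sim(u, v):
        keys = set(u) | set(v)
        w = weights or {}
        num = sum(w.get(a, 1.0) * u.get(a, 0.0) * v.get(a, 0.0) for a in keys)
        du = math.sqrt(sum(w.get(a, 1.0) * u.get(a, 0.0) ** 2 for a in keys))
        dv = math.sqrt(sum(w.get(a, 1.0) * v.get(a, 0.0) ** 2 for a in keys))
        return num / (du * dv) if du and dv else 0.0

    neighbors = sorted(users, key=lambda uid: sim(target, users[uid]), reverse=True)[:k]
    scores = defaultdict(float)
    for uid in neighbors:
        for item, rating in user_items.get(uid, {}).items():
            scores[item] += rating       # simple sum aggregation of neighbor ratings
    return sorted(scores, key=scores.get, reverse=True)
```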

  18. Comparison • SA increases quality by 50% • SA.k-NN reaches the same quality while asking fewer questions

  19. Performance • RankSVM training is an expensive operation • Materializing a preference matrix for all training instances reduces the training time (a sketch of this idea follows)
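The slide only names the optimization; the sketch below shows one plausible reading (an assumption, not the paper's implementation): precompute the pairwise difference vectors (x_i - x_j) per training user once, so that every node's RankSVM fit can reuse them instead of rebuilding them.

```python
import numpy as np
from typing import Dict

def materialize_preferences(products: Dict[str, np.ndarray],
                            reviews: Dict[str, Dict[str, float]]) -> Dict[str, np.ndarray]:
    """For each user, stack the difference vectors (x_i - x_j) over all product
    pairs the user rated with i above j. Tree nodes can then assemble their
    RankSVM training data by concatenating the rows of the users in Uq."""
    n_attrs = len(next(iter(products.values())))
    prefs = {}
    for user, ratings in reviews.items():            # ratings: product_id -> score
        rows = [products[i] - products[j]
                for i in ratings for j in ratings
                if ratings[i] > ratings[j]]
        prefs[user] = np.vstack(rows) if rows else np.empty((0, n_attrs))
    return prefs
```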

  20. Example of SA trees (cars) • Observations: • When fuel economy is included (the user cares about it): hybrid and EcoBoost cars are recommended • When fuel economy is excluded: Audi models are recommended (trading mileage for performance) • Popularity: Jeep Grand Cherokee

  21. Example of SA trees (cameras) • Observations: • For events: Olympus E-3 (lightweight) • For people: Canon EOS 30D (auto image rotation function) • Popularity: Canon EOS Digital Rebel XS

  22. Conclusion • The system proposed in this paper automatically builds a question flowchart that helps users find the recommended products they need • Moreover, the idea can benefit other filtering methods such as k-NN

  23. Pros and Cons • Pros • Building the decision tree from the user feature space makes the nodes understandable to non-experts • The idea can be applied to other existing methods and improve their performance and quality • Cons • Popular products have a high probability of being recommended, but may not be fully appropriate to the user's needs • Materializing the preference matrix takes several days

  24. Thank You!
