An Introduction to Recommendation Systems Hasan Davulcu, CIPS, ASU
Recommendation Systems • Utility function U: C × S → R • C: the set of users, described by a profile (age, gender, income) • S: the set of items (e.g., for movies: title, director, actor, year, genre) • Recommendations: for each user c, choose the items s with the highest estimated utility U(c, s)
Limitations of Content-Based Methods • Too similar! • New-user problem
User-Based Collaborative Methods • U(c, s) is estimated from the utilities U(c_j, s) assigned by those users c_j who are “similar” to user c.
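To make this estimate concrete, here is a minimal sketch of user-based collaborative filtering in plain Python, assuming a toy ratings dictionary and cosine similarity as the similarity measure (a common choice; the deck does not specify which measure the surveyed systems used):

```python
import math

# Toy user-item utility matrix U(c, s): missing entries are unknown.
ratings = {
    "alice": {"Matrix": 5, "Titanic": 1, "Heat": 4},
    "bob":   {"Matrix": 4, "Titanic": 2},
    "carol": {"Titanic": 5, "Heat": 1},
}

def cosine_sim(a, b):
    """Cosine similarity over the items both users rated."""
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[s] * b[s] for s in common)
    na = math.sqrt(sum(a[s] ** 2 for s in common))
    nb = math.sqrt(sum(b[s] ** 2 for s in common))
    return dot / (na * nb) if na and nb else 0.0

def predict(user, item):
    """Estimate U(c, s) as a similarity-weighted average of the
    ratings given to s by similar users c_j."""
    sims = [(cosine_sim(ratings[user], their), their[item])
            for other, their in ratings.items()
            if other != user and item in their]
    total = sum(abs(w) for w, _ in sims)
    if total == 0:
        return None  # no similar user has rated this item
    return sum(w * r for w, r in sims) / total

print(predict("bob", "Heat"))  # weighted average of alice's and carol's ratings
```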
Limitations • New Item Problem • Sparsity
Comparing Human Recommenders to Online Systems Rashmi Sinha & Kirsten Swearingen SIMS, UC Berkeley
Which one should I read? • Recommender systems are a technological proxy for a social process • Recommendations from friends vs. recommendations from online systems
I know what you’ll read next summer (Amazon, Barnes&Noble) • what movies you should watch… (Reel, RatingZone, Amazon) • what music you should listen to… (CDNow, Mubu, Gigabeat) • what websites you should visit (Alexa) • what jokes you will like (Jester) • & who you should date (Yenta)
Method Philosophy • Testing & analysis as part of the iterative design process: Design → Evaluate → Analyze → Generate design recommendations • Use both quantitative & qualitative methods (Slide adapted from James Landay)
Taking a Closer Look at the Recommendation Process • User incurs costs in using the system: time, effort, privacy issues • User provides input → receives recommendations → judges whether he/she will sample each recommendation • Cost in reviewing recommendations • Benefit only if a recommended item appeals
Amazon’s Recommendation Process • Input: One artist/author name
Search using Recommendations • Output: List of Recommendations • Explore / Refine Recommendations
Book Recommendation Site: Sleeper • Input: ratings of 10 books (the same set for all users) • Uses a continuous rating bar (system designed by Ken Goldberg)
Sleeper: Output • Output: List of items with brief information about each item • Degree of confidence in prediction
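The deck does not say how Sleeper computes its degree of confidence; a simple stand-in heuristic (my assumption, not Sleeper's actual measure) is to let confidence grow with the number of contributing neighbors and with their agreement:

```python
from statistics import pstdev

def confidence(neighbor_ratings, max_neighbors=10):
    """Heuristic confidence in a predicted rating, in [0, 1].

    More contributing neighbors (support) and less disagreement among
    them (agreement) raise confidence. An illustrative stand-in, not
    the measure Sleeper actually used.
    """
    if not neighbor_ratings:
        return 0.0
    support = min(len(neighbor_ratings), max_neighbors) / max_neighbors
    spread = pstdev(neighbor_ratings) if len(neighbor_ratings) > 1 else 0.0
    agreement = 1.0 / (1.0 + spread)
    return support * agreement

print(confidence([4, 5, 4]))  # higher: three close ratings
print(confidence([1, 5]))     # lower: two ratings that strongly disagree
```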
What Convinces a User to Sample the Recommendation? • Judging recommendations: what is a good recommendation from the user's perspective? • Trust in a recommender system: what factors lead to trust in a system? • System transparency: do users need to know why an item was recommended?
Social Recommendations • Study of RS has focused mostly on collaborative filtering algorithms: input from user → collaborative filtering algorithm → output (recommendations)
Beyond “Algorithms Only”: An HCI Perspective on Recommender Systems • Comparing the social recommendation process to online recommender systems • Understanding the factors that go into an effective recommendation (by studying users' interaction with 6 online RS)
Book Systems Amazon Books Rating Zone Sleeper
Movie Systems Amazon Movies Movie Critic Reel
Method • 19 participants, age:18 to 34 years • For each of 3 online systems: • Registered at site • Rated items • Reviewed and evaluated recommendation set • Completed questionnaire • Also reviewed and evaluated sets of recommendations from 3 friends each
Defining Types of Recommendations • Good recs. (precision): % of items the user felt interested in • Useful recs.: the subset of good recs. that the user felt interested in and had not read/viewed yet
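Both measures reduce to simple ratios over the user's judgments of a recommendation set; a minimal sketch, with hypothetical field names:

```python
def evaluate_recommendations(judgments):
    """judgments: list of dicts with 'interested' (user likes the item)
    and 'already_experienced' (had read/viewed it before) flags.

    Returns (% good, % useful) as defined above:
    good = user is interested; useful = good AND not yet experienced.
    """
    n = len(judgments)
    if n == 0:
        return 0.0, 0.0
    good = sum(1 for j in judgments if j["interested"])
    useful = sum(1 for j in judgments
                 if j["interested"] and not j["already_experienced"])
    return 100.0 * good / n, 100.0 * useful / n

recs = [
    {"interested": True,  "already_experienced": True},   # good, not useful
    {"interested": True,  "already_experienced": False},  # good and useful
    {"interested": False, "already_experienced": False},  # neither
]
print(evaluate_recommendations(recs))  # roughly (66.7, 33.3)
```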
Comparing Human Recommenders to RS: “Good” and “Useful” Recommendations [Chart: % good and % useful recommendations per recommender. Books: Amazon (15), Sleeper (10), RatingZone (8), Friends (9). Movies: Amazon (15), Reel (5-10), MovieCritic (20), Friends (9). (x) = no. of recommendations; RS average and avg. std. error shown.]
However, Users Like Online RS • This result was supported by post-test interviews.
Why systems over friends? • “Suggested a number of things I hadn’t heard of, interesting matches.” • “It was like going to Cody’s—looking at that table up front for new and interesting books.” • “Systems can pull from a large database—no one person knows about all the movies I might like.”
Items Users Had “Heard of” Before [Chart: books vs. movies] • Friends recommended mostly “old”, previously experienced items
What Systems Did Users Prefer? [Chart: yes/no responses, books vs. movies] • Sleeper and Amazon Books averaged the highest ratings • Split opinions on Reel and MovieCritic
Why did some systems… • Provide useful recommendations but leave users unsatisfied? • RatingZone, MovieCritic & Reel
Possible Reasons • Previously enjoyed items are important: we term these trust-generating items • Adequate item description & ease of use are important • Notably missing from the list: time to receive recommendations & no. of items to rate were not important! • All correlations are significant at .05
A Question of Trust… [Diagram: good recs. (user likes) contain useful recs. (not yet read/viewed) and trust-generating recs. (previously read/viewed)] • Post-test interviews showed that users “trust” systems if they have already sampled some recommendations • Positive experiences lead to trust • Negative experiences with recommended items lead to mistrust of the system
A Question of Trust… [Chart: books vs. movies] • The difference between Amazon and Sleeper highlights that there are different kinds of good recommender systems
Adequate Item Description: The RatingZone Story • 0% of Version 1 users and 60% of Version 2 users found the item description adequate • An adequate item description, with links to other sources about the item, was a crucial factor in convincing users to act on a recommendation.
System Transparency • Why was this item recommended? • Do users understand why an item was recommended? • Users mentioned this factor in post-test interviews
Design Recommendations: Justification • Justify your recommendations • Adequate item information: provide enough detail about an item for the user to make a choice • System transparency: generate (at least some) recommendations which are clearly linked to the rated items • Explanation: provide an explanation of why the item was recommended • Community ratings: provide a link to ratings/reviews by other users; if possible, present a numerical summary of ratings
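One lightweight way to combine the transparency and explanation points is to attach the user's own highly rated items that influenced each recommendation. A sketch, assuming the scoring step records those influences (nothing here is taken from the systems studied):

```python
def explain(recommended_item, influences):
    """influences: list of (rated_item, user_rating) pairs that drove
    the recommendation, e.g. collected while scoring candidates.
    Returns a human-readable justification for the recommendation."""
    liked = [item for item, rating in influences if rating >= 4]
    if not liked:
        return f"Recommended: {recommended_item}"
    return (f"Recommended: {recommended_item} -- because you rated "
            + ", ".join(liked) + " highly")

print(explain("Heat", [("The Matrix", 5), ("Titanic", 2)]))
# Recommended: Heat -- because you rated The Matrix highly
```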
Design Recommendations: Accuracy vs. Less Input • Don't sacrifice accuracy for the sake of generating quick recommendations. Users don't mind rating more items to receive quality recommendations. • A possible way to achieve this: multilevel recommendations. Users can initially use the system by providing one rating, and are offered subsequent opportunities to refine the recommendations. • One needs a happy medium between too little input (leading to low accuracy) and too much input (leading to user impatience).
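The multilevel idea can be sketched as a loop that produces recommendations after the very first rating and keeps refining as long as the user is willing to rate more. The two callables are hypothetical hooks standing in for real UI and model code:

```python
def multilevel_recommend(get_rating, recommend, max_ratings=10):
    """Recommend after the first rating, then refine.

    get_rating() -> (item, rating), or None when the user stops rating;
    recommend(ratings) -> recommendation list for the ratings so far.
    Both are hypothetical hooks, not a real API.
    """
    ratings, recs = {}, []
    while len(ratings) < max_ratings:
        entry = get_rating()
        if entry is None:            # user chose not to refine further
            break
        item, score = entry
        ratings[item] = score
        recs = recommend(ratings)    # refresh after every new rating
    return recs

# Toy usage: a user who rates two books and then stops.
queue = iter([("Dune", 5), ("Emma", 2), None])
recs = multilevel_recommend(lambda: next(queue),
                            lambda r: sorted(r, key=r.get, reverse=True))
print(recs)  # ['Dune', 'Emma'] -- placeholder "recommendations"
```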
Design Recommendations: New Unexpected Items • Users like recommender systems because they provide information about new, unexpected items. • The list of recommended items should include new items which the user might not find in any other way. • The list could also include some unexpected items (e.g., from other topics/genres) which the user might not have thought of themselves.
Design Recommendations: Trust-Generating Items • Users (especially first-time users) need to develop trust in the system. • Trust in the system is enhanced by the presence of items that the user has already enjoyed. • Including some very popular items (which the user has probably experienced before) in the initial recommendation set might be one way to achieve this.
Design Recommendations: Mix of Items • Systems need to provide a mix of different kinds of items to cater to different users: • Trust-generating items: a few very popular ones, in which the system has high confidence • Unexpected items: some unexpected items whose purpose is to let users broaden their horizons • Transparent items: at least some items for which the user can see the clear link between the items he/she rated and the recommendation • New items: some items which are new • Question: should these be presented as a sorted list, an unsorted list, or different categories of recommendations?
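A sketch of assembling such a mixed list from pre-ranked candidate pools; the pool names and per-category quotas are illustrative assumptions, not prescriptions from the study:

```python
def mixed_recommendations(pools, quota):
    """pools: dict mapping category name -> ranked candidate list.
    quota: dict mapping the same categories to how many items to take.
    Returns one de-duplicated list covering all categories."""
    seen = set()
    result = []
    for category, k in quota.items():
        for item in pools.get(category, []):
            if k == 0:
                break
            if item not in seen:
                seen.add(item)
                result.append((item, category))
                k -= 1
    return result

pools = {
    "trust_generating": ["The Matrix"],   # very popular, high confidence
    "unexpected": ["Spirited Away"],      # from another genre
    "transparent": ["Heat"],              # clearly linked to rated items
    "new": ["Memento"],                   # new release
}
print(mixed_recommendations(pools, {c: 1 for c in pools}))
```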
Design Recommendations: Continuous Scales for Input • Allow users to provide ratings on a continuous scale. • One of the reasons users liked Sleeper was because it allowed them to rate on a continuous scale. Users did not like binary scales.
Limitations of Study • Simulated a first-time visit; did not allow the system to learn user preferences over time • Source of recommendations was known to subjects, which might have biased them towards friends • Fairly homogeneous group of subjects; no novice users
Future Plans: Second-Generation Music Recommender Systems • Have evolved beyond previous systems • Use a variety of sophisticated algorithms to map users' preferences over the music domain • Require a lot more input from the user • Users can sample recommendations during the study!