User Modeling and Recommendations – Part 2
Many slides adapted from Lora Aroyo, http://de.slideshare.net/laroyo
User Modeling Basic Concepts
• User Profile: a data structure that represents a characterization of a user at a particular moment in time; it captures what, from a given (system) perspective, there is to know about a user. The data in the profile can be explicitly given by the user or derived by the system
• User Model: contains the definitions & rules for the interpretation of observations about the user and for the translation of that interpretation into the characteristics in a user profile
  • the user model is the recipe for obtaining and interpreting user profiles
• User Modeling: the process of representing the user
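A minimal sketch (not from the slides) of what such a user profile data structure might look like; the class name, the interest weights, and the explicit/derived flag are illustrative assumptions:

```python
# Hypothetical user profile: a snapshot of what the system knows about a user
# at a particular moment, with each entry marked as explicit or derived.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class UserProfile:
    user_id: str
    interests: dict[str, float] = field(default_factory=dict)  # e.g. {"jazz": 0.8}
    source: dict[str, str] = field(default_factory=dict)       # "explicit" or "derived"
    timestamp: datetime = field(default_factory=datetime.now)

profile = UserProfile("u42", {"jazz": 0.8, "hiking": 0.3}, {"jazz": "explicit", "hiking": "derived"})
```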
User Adaptation
• Knowing the user - this knowledge - can be applied to adapt a system or interface to the user, to improve the system functionality and user experience
Issues in User-Adaptive Systems
• Overfitting, "bubble effects", loss-of-serendipity problem:
  • systems may adapt too strongly to the user's interests/behavior
  • e.g., an adaptive radio station may always play the same or very similar songs
  • we search for the right balance between novelty and relevance for the user (Diversity!) - see the sketch below
• "Lost in Hyperspace" problem:
  • when adapting the navigation, i.e. the links on which users can click to find/access information
  • e.g., re-ordering/hiding of menu items may lead to confusion
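As a rough illustration of that relevance/novelty trade-off, here is a hypothetical greedy re-ranking (an MMR-style heuristic, not part of the slides); the `similarity` function, the `relevance` scores and the weight `lam` are all assumed inputs:

```python
# Greedy re-ranking: pick the item with the best mix of relevance and novelty
# (novelty = dissimilarity to items already selected).
def rerank(candidates, relevance, similarity, k=10, lam=0.7):
    selected, pool = [], list(candidates)
    while pool and len(selected) < k:
        def score(item):
            novelty = 1.0 - max((similarity(item, s) for s in selected), default=0.0)
            return lam * relevance[item] + (1 - lam) * novelty
        best = max(pool, key=score)
        selected.append(best)
        pool.remove(best)
    return selected
```

Lower values of `lam` push the ranking toward diversity, higher values toward pure relevance.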
Evaluation Strategies
• User studies: clean-room study: ask/observe (selected) people whether you did a good job
• Log analysis: analyze (click) data and infer whether you did a good job, e.g., cross-validation by "leave-one-out" (see the sketch below)
• Evaluation of user modeling:
  • measure the quality of profiles directly, e.g. measure overlap with existing (true) profiles, or let people judge the quality of the generated user profiles
  • measure the quality of the application that exploits the user profile, e.g., apply user modeling strategies in a recommender system (not trivial to evaluate recommenders -> next lecture topic, work by Dellschaft)
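A minimal sketch of a leave-one-out log evaluation, assuming a hypothetical `recommend(user, observed_items, k)` interface; the hit rate used here is just one possible quality measure:

```python
# For each user, hide one item, recommend top-k from the remaining items,
# and count how often the hidden item reappears in the recommendation.
def leave_one_out_hit_rate(user_items, recommend, k=10):
    hits, trials = 0, 0
    for user, items in user_items.items():
        for held_out in items:
            observed = [i for i in items if i != held_out]
            top_k = recommend(user, observed, k)  # assumed recommender interface
            hits += held_out in top_k
            trials += 1
    return hits / trials if trials else 0.0
```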
Possible metrics
• The usual IR metrics:
  • Precision: fraction of retrieved items that are relevant
  • Recall: fraction of relevant items that have been retrieved
  • F-Measure: (harmonic) mean of precision and recall
• Metrics for evaluating recommendations (rankings):
  • Mean Reciprocal Rank (MRR) of the first relevant item
  • Success@k: probability that a relevant item occurs within the top k
  • Precision@k, Recall@k & F-Measure@k
  • If a true ranking is given: rank correlations
• Metrics for evaluating prediction of user preferences:
  • MAE = Mean Absolute Error
  • True/False Positives/Negatives
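A few of the listed metrics written out as plain Python, just to make the definitions concrete; `ranking` is a list ordered by predicted relevance and `relevant` is the set of truly relevant items (illustrative inputs, not from the slides):

```python
def precision_at_k(ranking, relevant, k):
    return sum(1 for i in ranking[:k] if i in relevant) / k

def recall_at_k(ranking, relevant, k):
    return sum(1 for i in ranking[:k] if i in relevant) / max(len(relevant), 1)

def f_measure(p, r):
    return 2 * p * r / (p + r) if p + r else 0.0

def reciprocal_rank(ranking, relevant):
    for pos, item in enumerate(ranking, start=1):
        if item in relevant:
            return 1.0 / pos
    return 0.0

def mean_absolute_error(predicted, actual):
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)
```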
Example Evaluation on Flickr
[Rae et al.] shows a typical example of how to investigate and evaluate a proposal for improving (tag) recommendations (using social networks)
• Task: test how well the different strategies (here: different tag contexts) can be used for tag prediction/recommendation
• Given two tags already used for a photo, predict five more tags
Steps: ...
[Rae et al. Improving Tag Recommendations Using Social Networks, RIAO'10]
Example Evaluation using Flickr
[Rae et al.] shows a typical example of how to investigate and evaluate a proposal for improving (tag) recommendations (using social networks)
• Task: test how well the different strategies (here: different tag contexts) can be used for tag prediction/recommendation
  • PC: personal context
  • SCC: social contact context
  • SGC: social group context
  • CC: collective/global context
Example Evaluation
• Task: test how well the different strategies (here: different tag contexts) can be used for tag prediction/recommendation
Steps:
1. Gather a dataset of tag data, part of which can be used as input, with the aim of testing the recommendation on the remaining tag data
2. Use the input data and calculate the predictions for the different strategies
3. Measure the performance using standard (IR) metrics: Precision of the top 5 recommended tags (P@5), Mean Reciprocal Rank (MRR), Mean Average Precision (MAP)
4. Test the results for statistical significance using Student's t-test, relative to the baseline (e.g. existing approach, competitive approach); see the sketch below
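Step 4 could look roughly like this, assuming per-user P@5 scores for one strategy and for the baseline; the numbers are invented for illustration and the paired test is SciPy's `ttest_rel`:

```python
# Paired (dependent) Student's t-test: same users evaluated under both systems.
from scipy import stats

strategy_p5 = [0.6, 0.4, 0.8, 0.2, 0.6]  # illustrative per-user P@5 scores
baseline_p5 = [0.4, 0.4, 0.6, 0.2, 0.4]

t_stat, p_value = stats.ttest_rel(strategy_p5, baseline_p5)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")  # small p -> improvement is significant
```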
Example Evaluation - 2
[Guy et al.] shows another example of a similar evaluation approach
• Here, the strategies differ in the way people and tags are used: in these tag-based systems there are complex relationships between users, tags and items, and the strategies aim to find the relevant aspects of these relationships for modeling and recommendation
• Their baseline is the strategy of the 'most popular' tags: an often-used strategy that compares the globally most popular tags to the tags predicted by a particular personalization strategy, thus investigating whether the personalization is worth the effort and is able to outperform this easily available baseline
[Guy et al. Social Media Recommendation based on People and Tags, SIGIR'10]
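A sketch of such a 'most popular' baseline, assuming the tag assignments are available as (user, item, tag) triples; the representation is an illustrative assumption:

```python
# Non-personalized baseline: recommend the globally most frequent tags,
# ignoring the individual user entirely.
from collections import Counter

def most_popular_tags(tag_assignments, k=5):
    counts = Counter(tag for _user, _item, tag in tag_assignments)
    return [tag for tag, _ in counts.most_common(k)]
```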
Collaborative Filtering
• Typical assumption: it is too difficult to represent content and your content preferences
• Or, do you know the difference between
  • White metal
  • Black metal
  • Thrash metal, speed metal
  • Death metal
  • Power metal
  • Doom and gothic metal
  • ... ?
=> Don't even try
Representing content in collaborative filtering
• An object is represented by who likes it and how much
• Pulp Fiction^T = (null, 5, 1, null, null, 2, 5, ...)
  • the first person has not rated it, the second person likes it, the third person dislikes it; one entry for each of the 1 billion users
• Cold Start Problem: new movie
  • Oblivion^T = (null, null, null, ...)
  • no one has rated it yet because it will only be released in 2013
Representing users in collaborative filtering
• A user is represented by what he likes
• John Smith^T = (null, 5, 1, null, null, 2, 5, ...)
  • has not rated Pulp Fiction, likes Skyfall, dislikes Antichrist; one entry for each of the 1 million (?) movies
• Cold Start Problem: new user
  • Steffen Staab^T = (null, null, null, ...)
  • I have not rated any movie yet
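Both views in miniature: a sketch of the rating matrix where rows are users, columns are items, and NaN stands in for the null ("not rated") entries; the names and values are illustrative, not from the slides:

```python
import numpy as np

items = ["Pulp Fiction", "Skyfall", "Antichrist", "Oblivion"]
ratings = np.array([
    [np.nan, 5.0,    1.0,    np.nan],   # John Smith: likes Skyfall, dislikes Antichrist
    [np.nan, np.nan, np.nan, np.nan],   # new user: cold start, no ratings yet
])

item_vector = ratings[:, 0]  # a column: who rated "Pulp Fiction" and how much
user_vector = ratings[0, :]  # a row: what John Smith has rated
```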
Collaborative Filtering
• Memory-based: User-Item matrix: ratings/preferences of users => compute similarity between users & recommend items of similar users (see the sketch below)
• Model-based: Item-Item matrix: similarity (e.g. based on user ratings) between items => recommend items that are similar to the ones the user likes
• Model-based: Clustering: cluster users according to their preferences => recommend items of users that belong to the same cluster
• Model-based: Bayesian networks: P(u likes item B | u likes item A) = how likely is it that a user who likes item A will also like item B; learn the probabilities from user ratings/preferences
• Others: rule-based, other data mining techniques
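A minimal sketch of the memory-based (user-user) variant, assuming a NumPy rating matrix like the one above with NaN for missing ratings; cosine similarity over co-rated items is one common choice, not the only one:

```python
import numpy as np

def cosine_on_overlap(u, v):
    # Similarity between two users, computed only on items both have rated.
    mask = ~np.isnan(u) & ~np.isnan(v)
    if not mask.any():
        return 0.0
    a, b = u[mask], v[mask]
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def predict(ratings, user, item):
    # Similarity-weighted average of the ratings other users gave this item.
    sims, vals = [], []
    for other in range(ratings.shape[0]):
        if other != user and not np.isnan(ratings[other, item]):
            sims.append(cosine_on_overlap(ratings[user], ratings[other]))
            vals.append(ratings[other, item])
    return np.average(vals, weights=sims) if sims and sum(sims) else np.nan
```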
Social networks & interest similarity
• Limitations of collaborative filtering:
  • 'cold start' and
  • 'sparsity'
  • the lack of control (over people who share some, but not all, of my interests) is also a problem, i.e. one cannot add 'trusted' people, nor exclude 'strange' ones
• 'Social recommenders': the presence of social connections defines the similarity in interests (e.g. social tagging in CiteULike)
  • Rationale: homophily = birds of a feather flock together
  • Does a social connection indicate user interest similarity?
  • How much does users' interest similarity depend on the strength of their connection?
  • Is it feasible to use a social network as a basis for personalized recommendation?
[Lin & Brusilovsky, Social Networks and Interest Similarity: The Case of CiteULike, HT'10]
Conclusions
• Pairs that are unilaterally connected have more common information items, metadata, and tags than non-connected pairs
• The similarity was largest for direct connections and decreased with increasing distance between users in the social network
• Users involved in a reciprocal relationship exhibited significantly larger similarity than users in a unidirectional relationship
• Traditional item-level similarity may be a less reliable way to find similar users in social bookmarking systems
• The item collections of peers connected by self-defined social connections could be a useful source for cross-recommendation