ValuePick: Towards a Value-Oriented Dual-Goal Recommender System
Leman Akoglu, Christos Faloutsos
OEDM, in conjunction with ICDM 2010, Sydney, Australia
Recommender Systems
Traditional recommender systems try to achieve high user satisfaction.
Dual-goal Recommender Systems
Trade-off: user satisfaction vs. vendor profit. Dual-goal recommender systems try to achieve (1) high user satisfaction as well as (2) high-"value" vendor gain.
Dual-goal Recommender Systems
[Figure: given a query vertex, the remaining vertices are ranked by proximity (v253, v162, v261, v327, ...), each carrying a network-"value".]
Trade-off: user satisfaction vs. network connectivity.
Dual-goal Recommender Systems
Main concerns (vendor vs. user):
• We cannot simply make the highest-value recommendations
• Recommendations should still reflect users' likes reasonably well
ValuePick: Main Idea
• Carefully perturb (change the order of) the proximity-ranked list of recommendations
• The perturbation is controlled by a tolerance ζ for each user
ValuePick Optimization Framework (DETAILS)
Given proximity scores p_i (with proximity ~ acceptance probability), item "values" v_i, and 0/1 decision variables x_i:

maximize    Σ_i x_i · p_i · v_i        (total expected gain)
subject to  Σ_i x_i = k,  x_i ∈ {0, 1}
            (1/k) Σ_i x_i · p_i ≥ (1 − ζ) · p̄_k

where the tolerance ζ ∈ [0, 1] and p̄_k is the average proximity score of the original top-k.
ValuePick ~ 0-1 Knapsack (DETAILS)
The formulation is analogous to a 0-1 knapsack problem: maximize the total value Σ_i v_i x_i of the selected items subject to Σ_i w_i x_i ≤ W, where w_i is the weight of item i and W is the maximum weight allowed. We use CPLEX to solve our integer programming optimization problem.
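To make the optimization concrete, below is a minimal sketch of the ValuePick integer program, solved with the open-source PuLP/CBC solver as a stand-in for CPLEX; the function name value_pick and the toy inputs are illustrative assumptions, not the paper's notation.

```python
# Sketch only: PuLP/CBC stands in for the CPLEX solver used in the paper.
import pulp

def value_pick(prox, val, k, zeta):
    """Pick k items maximizing expected gain sum(x_i * p_i * v_i), while
    keeping the average proximity of the picked set within a (1 - zeta)
    factor of the original top-k average."""
    n = len(prox)
    # Average proximity score of the plain proximity-ranked top-k.
    topk_avg = sum(sorted(prox, reverse=True)[:k]) / k

    prob = pulp.LpProblem("ValuePick", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x_{i}", cat="Binary") for i in range(n)]

    # Objective: total expected gain (proximity ~ acceptance probability).
    prob += pulp.lpSum(x[i] * prox[i] * val[i] for i in range(n))
    # Exactly k recommendations.
    prob += pulp.lpSum(x) == k
    # User-satisfaction constraint, controlled by the tolerance zeta.
    prob += pulp.lpSum(x[i] * prox[i] for i in range(n)) >= (1 - zeta) * k * topk_avg

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [i for i in range(n) if x[i].value() == 1]

# Toy usage: zeta = 0 recovers the proximity top-k (No Gain Optimization);
# zeta = 1 makes the constraint vacuous (Max Gain Optimization).
print(value_pick(prox=[0.5, 0.4, 0.3, 0.2], val=[1, 1, 5, 9], k=2, zeta=0.02))
```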
Pros and Cons of ValuePick
Pros:
• The tolerance ζ can flexibly (and even dynamically) control the 'level of adjustment'
• Users rate the same item differently at different times, i.e., users have natural variability in their decisions
Cons:
• In marketing, it is hard to predict the effect of an intervention in the marketing scheme, i.e., it is not clear how users will respond to 'adjustments'
Experimental Setup I
• Two real networks:
  • Netscience – collaboration network
  • DBLP – co-authorship network
• Four recommendation schemes:
  • No Gain Optimization (ζ = 0)
  • ValuePick (ζ = 0.01, ζ = 0.02)
  • Max Gain Optimization (ζ = 1)
  • Random
• "value" is centrality
Experimental Setup II
Simulation steps, given a recommendation scheme s (a sketch follows the list):
• At each step T:
  • For each node i:
    • Make a set K of recommendations to node i using s
    • Node i links to node j ∈ K with prob. proximity(i, j)
  • Re-compute proximity and centrality scores
We use k = 5 and T = 30.
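A minimal sketch of this simulation loop, assuming networkx for the graph and personalized PageRank as the proximity measure (matching the PPR scores on the complexity slide); recommend_with is a hypothetical stand-in for whichever scheme s is being simulated.

```python
# Sketch only: recommend_with(G, i, ppr, k) encapsulates scheme s
# (and any centrality re-computation the scheme needs).
import random
import networkx as nx

def simulate(G, recommend_with, k=5, T=30):
    for step in range(T):
        for i in list(G.nodes()):
            # Re-compute proximity of every node to i: personalized PageRank from i.
            ppr = nx.pagerank(G, personalization={i: 1.0})
            K = recommend_with(G, i, ppr, k)  # scheme s picks a set of k nodes
            for j in K:
                # Node i links to node j with prob. proximity(i, j).
                if random.random() < ppr[j]:
                    G.add_edge(i, j)
    return G
```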
Comparison of Schemes (EXPERIMENTS)
ValuePick provides a balance between user satisfaction (high E) and vendor gain (small diameter).
Recommend by Heuristic (EXPERIMENTS)
Simple perturbation heuristics do not balance user satisfaction and vendor gain properly.
Computational Complexity (EXPERIMENTS)
Making k ValuePick recommendations to a given node involves:
1. Finding Personalized PageRank (PPR) proximity scores: O(#edges)
2. Solving the ValuePick optimization with CPLEX: ~1/10 sec. to solve among the top 1K nodes
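For reference, here is a minimal power-iteration sketch of Personalized PageRank (random walk with restart), which shows where the O(#edges) cost comes from: each iteration touches every edge once. The restart probability c = 0.15 is an assumed default, not a value from the slides.

```python
import numpy as np

def ppr(adj, query, c=0.15, iters=50):
    """adj: adjacency lists {node: [neighbors]}; returns PPR scores w.r.t. query."""
    nodes = list(adj)
    idx = {u: i for i, u in enumerate(nodes)}
    restart = np.zeros(len(nodes))
    restart[idx[query]] = 1.0          # all restart mass on the query node
    r = restart.copy()
    for _ in range(iters):
        spread = np.zeros_like(r)
        for u, nbrs in adj.items():    # one pass over all edges: O(#edges)
            if nbrs:
                share = r[idx[u]] / len(nbrs)
                for v in nbrs:
                    spread[idx[v]] += share
        r = (1 - c) * spread + c * restart
    return dict(zip(nodes, r))
```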
Conclusions
• Problem formulation: incorporate the "value" of recommendations into the system
• Design of ValuePick:
  • parsimonious – a single parameter ζ
  • flexible – ζ can be adjusted for each user dynamically
  • general – any "value" metric can be used
• Performance study:
  • experiments show the intended trade-off: some user acceptance is exchanged for higher gain
  • fast solutions with CPLEX
THANK YOU
www.cs.cmu.edu/~lakoglu
lakoglu@cs.cmu.edu