Crowdsourcing and All-Pay Auctions Milan Vojnović Microsoft Research Joint work with Dominic DiPalantino UC Berkeley, July 13, 2009
Examples of Crowdsourcing • Crowdsourcing = soliciting solutions via open calls to large-scale communities • Coined in a Wired article (’06) • Taskcn • 530,000 solutions posted for 3,100 tasks • Innocentive • Over $3 million awarded • Odesk • Over $43 million brokered • Amazon’s Mechanical Turk • Over 23,000 tasks
Examples of Crowdsourcing (cont’d) • Yahoo! Answers • Launched Dec ’05 • 60M users / 65M answers (as of Dec ’06) • Live QnA • Launched Aug ’06 / closed May ’09 • 3M questions / 750M answers • Wikipedia
Incentives for Contribution • Incentives • Monetary: $$$ • Non-monetary: social gratification and publicity, reputation points, certificates and “levels” • Incentives for both participation and quality
Incentives for Contribution (cont’d) • Ex. Taskcn [Figure: task listing showing contest duration, number of submissions, number of registrants, number of views, and reward range in RMB] • 100 RMB ≈ $15 (July ’09)
Incentives for Contribution (cont’d) • Ex. Yahoo! Answers [Table: user levels and point thresholds] • Source: http://en.wikipedia.org/wiki/Yahoo!_Answers
Questions of Interest • Understanding of the incentive schemes • How do contributions relate to offered rewards? • Design of contests • How do we best design contests? • How do we set rewards? • How do we best suggest contests to players and rewards to contest providers?
Strategic User Behavior • From empirical analysis of Taskcn by Yang et al (ACM EC ’08): (i) users respond to incentives, (ii) users learn better strategies • Suggests a game-theoretic analysis [Figure: user strategies on Taskcn.com]
Outline • Model of Competing Contests • Equilibrium Analysis • Player-Specific Skills • Contest-Specific Skills • Design of Contests • Experimental Validation • Conclusion
Single Contest Competition [Figure: a contest offering reward R and players with costs c1, …, c4] • ci = cost per unit effort or quality produced
Single Contest Competition (cont’d) [Figure: players with costs c1, …, c4 submit qualities b1, …, b4 to the contest offering reward R]
All-Pay Auction [Figure: bidders with valuations v1, …, v4 submit bids b1, …, b4] • Everyone pays their bid
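As background, the symmetric equilibrium of the standard all-pay auction is well known: with n bidders whose valuations are drawn i.i.d. from Uniform[0, 1], a bidder with valuation v bids b(v) = ((n − 1)/n)·v^n. A minimal sketch (the Uniform[0, 1] distribution is an assumption for concreteness, not from the slides) checking the closed form against direct numerical integration:

```python
import numpy as np

def allpay_bid_uniform(v, n):
    """Closed-form symmetric equilibrium bid in an all-pay auction with n
    bidders and valuations i.i.d. Uniform[0, 1]:
    b(v) = integral_0^v y d(F(y)^(n-1)) = ((n - 1) / n) * v**n."""
    return (n - 1) / n * v ** n

def allpay_bid_numeric(v, n, steps=100_000):
    """Same bid via a midpoint Riemann sum of the defining integral,
    integral_0^v (n - 1) * y**(n - 1) dy, with F uniform on [0, 1]."""
    h = v / steps
    y = (np.arange(steps) + 0.5) * h  # midpoints of the subintervals
    return float(np.sum((n - 1) * y ** (n - 1)) * h)
```

Since everyone pays their bid, a bidder's expected payoff is v·F(v)^(n−1) − b(v), which is non-negative under this bidding function.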
Competing Contests [Figure: N users and J contests offering rewards R1, R2, …, RJ; user u may select contest j with reward Rj]
Incomplete Information Assumption • Each user u knows: N = total number of users; v_u = his own skill; other users’ skills are randomly drawn from F • We assume F is an atomless distribution with finite support [0, m]
Assumptions on User Skill • 1) Player-specific skill: v_u random, i.i.d. across users u (ex. contests require similar skills, or skill determined by the player’s opportunity cost) • 2) Contest-specific skill: v_uj random, i.i.d. across u and j (ex. contests require diverse skills)
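The two assumptions differ only in how the skill draws are shared across contests; a minimal sampling sketch (the array sizes and the uniform choice of F are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
N, J = 1_000, 5  # users and contests (illustrative sizes)

# 1) Player-specific skill: one draw per user, reused at every contest.
v_player = rng.uniform(0.0, 1.0, size=N)        # shape (N,)

# 2) Contest-specific skill: an independent draw per (user, contest) pair.
v_contest = rng.uniform(0.0, 1.0, size=(N, J))  # shape (N, J)
```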
Bayes-Nash Equilibrium • Mixed strategy (p_j, b): p_j = prob. of selecting a contest of class j, b = bid • Equilibrium: select a contest of highest expected profit, where the expectation is with respect to “beliefs” about other users’ skills • Contest class = set of contests that offer the same reward
User Expected Profit • Expected profit for a contest of class j depends on p_j = prob. of selecting a contest of class j and F_j = distribution of user skill conditional on having selected contest class j
Outline • Model of Competing Contests • Equilibrium Analysis • Player-Specific Skills • Contest-Specific Skills • Design of Contests • Experimental Validation • Conclusion
Equilibrium Contest Selection [Figure: the skill interval [0, m] partitioned by thresholds v2 > v3 > v4 into skill levels 1–4, mapped to contest classes 1–5]
Threshold Reward • Only the K highest-reward contest classes are selected with strictly positive probability • n_k = number of contests of class k
Partitioning over Skill Levels • A user of skill v is of skill level l if v lies between the thresholds v_{l+1} and v_l, where m = v_1 > v_2 > … > 0
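Operationally, the partition is a threshold lookup; a minimal sketch with hypothetical threshold values (in the model the thresholds come out of the equilibrium analysis, they are not chosen freely):

```python
import bisect

def skill_level(v, thresholds):
    """Return the 1-based skill level l of a user with skill v > 0, given
    decreasing thresholds [v_1, ..., v_{K+1}] with v_1 = m and v_{K+1} = 0;
    level l means v lies in the interval (v_{l+1}, v_l]."""
    asc = thresholds[::-1]          # ascending copy, as bisect requires
    i = bisect.bisect_left(asc, v)  # first index with asc[i] >= v
    return len(thresholds) - i      # map back to a 1-based level

# Hypothetical thresholds for m = 1 and K = 3 skill levels.
thresholds = [1.0, 0.6, 0.3, 0.0]
```

For example, a user with skill 0.7 falls in (0.6, 1.0] and is of level 1, while a user with skill 0.5 falls in (0.3, 0.6] and is of level 2.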
Contest Selection • A user of skill level l, i.e. with skill between v_{l+1} and v_l, selects a contest of class j with a probability given by the equilibrium characterization
Participation Rates • A contest of class j is selected with an equilibrium probability • Prior-free – this probability is independent of the distribution F
Large-System Limit • The numbers of users and contests grow proportionally, with the fraction of contests in each class converging to positive constants, where K is a finite number of contest classes
Skill Levels for Large System • A user of skill v is of skill level l if v lies between the thresholds v_{l+1} and v_l, defined analogously for the large system
Participation Rates for Large System • Expected number of participants for a contest of class j • Prior-free – independent of the distribution F
Contest Selection in Large System • A user of skill level l, i.e. with skill between v_{l+1} and v_l, selects a contest of class j with a probability determined in equilibrium • For large systems, what matters is which contests are selected for a given skill [Figure: skill levels 1–4 on [0, m] mapped to contest classes 1–5, with example selection probabilities of 1/3]
Proof Hint for Player-Specific Skills • Key property – equilibrium expected payoffs g1(v), …, g4(v) behave as shown [Figure: payoff curves over skill v ∈ [0, m], with thresholds v3 < v2 < v1]
Outline • Model of Competing Contests • Equilibrium Analysis • Player-Specific Skills • Contest-Specific Skills • Design of Contests • Experimental Validation • Conclusion
Contest-Specific Skills • Results established only in the large-system limit • Same equilibrium relationship between participation and rewards as for player-specific skills
Proof Hints • Limit expected payoff – convergence holds for each contest class • Balancing – whenever two contest classes are both selected with positive probability, their expected payoffs are equal • The asserted relations then follow from the above
Outline • Model of Competing Contests • Equilibrium Analysis • Player-Specific Skills • Contest-Specific Skills • Design of Contests • Experimental Validation • Conclusion
System Optimum Rewards • Set the rewards so as to optimize system welfare • SYSTEM: maximise welfare over the rewards, subject to the budget constraint
Example 1: Zero Costs (Non-Monetary Rewards) • Assume the utility functions are increasing and strictly concave. Under player-specific skills, the system optimum rewards are, for any c > 0, the unique solution of the optimality conditions • Rewards are unique up to a multiplicative constant – only the relative setting of rewards matters
Example 1 (cont’d) • For large systems, the same characterization holds: with increasing, strictly concave utilities and player-specific skills, the system optimum rewards are, for any c > 0, the unique solution of the optimality conditions
Example 2: Optimum Effort • Consider SYSTEM with: • Utility = exerted effort, weighted by the prob. that the contest is attended • Cost = cost of giving Rj (budget constraint)
Outline • Model of Competing Contests • Equilibrium Analysis • Player-Specific Skills • Contest-Specific Skills • Design of Contests • Experimental Validation • Conclusion
Taskcn • Analysis of rewards and participation across tasks as observed on Taskcn • Tasks of diverse categories: graphics, characters, miscellaneous, super challenge • We considered tasks posted in 2008
Taskcn (cont’d) [Figure: for each task, the reward, number of views, number of registrants, and number of submissions]
Submissions vs. Reward • Diminishing increase of submissions with reward [Figure: submissions vs. reward for the Graphics, Characters, and Miscellaneous categories, with linear regression fits]
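A diminishing increase is what one sees when submissions grow roughly logarithmically in the reward; a minimal sketch on synthetic data (the numbers below are made up for illustration, not Taskcn observations):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: submissions grow roughly with log(reward), i.e. each
# doubling of the reward adds a near-constant number of extra entries.
rewards = np.array([50, 100, 200, 400, 800, 1600], dtype=float)
submissions = 5.0 * np.log(rewards) + rng.normal(0.0, 0.5, rewards.size)

# Fit submissions = a + b * log(reward): concave in the reward itself.
b, a = np.polyfit(np.log(rewards), submissions, 1)

# Marginal gain from raising the reward shrinks as d(submissions)/d(reward) = b / reward.
gain_at_100 = b / 100.0
gain_at_1000 = b / 1000.0
```

Under this fit, raising a small reward buys noticeably more participation than raising a large one by the same amount, matching the diminishing-increase pattern in the data.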
Submissions vs. Reward for Subcategory Logos • Conditional on the rate at which users submit solutions • The more experienced the users conditioned on, the better the prediction by the model [Figure: submissions vs. reward for users submitting at any rate, once a month, every fourth day, and every second day, against the model]
Same for the Subcategory 2-D [Figure: submissions vs. reward for users submitting at any rate, once a month, every fourth day, and every second day, against the model]
Conclusion • Crowdsourcing as a system of competing contests • Equilibrium analysis of competing contests • Explicit relationship between rewards and participation • Prior-free • Diminishing increase of participation with reward • Suggested by the model and the data • Framework for the design of crowdsourcing / contests • Base results for strategic modelling • Ex. strategic contest providers
More Information • Paper: ACM EC ’09 • Version with proofs: MSR-TR-2009-09 • http://research.microsoft.com/apps/pubs/default.aspx?id=79370