Social Learning and Consumer Demand Markus Mobius (Harvard University and NBER) Paul Niehaus (Harvard University) Tanya Rosenblat (Wesleyan University and IAS) 28 April, 2006
Introduction We “seed” a known social network with information by distributing new products randomly to some members. Methodology: How can we measure the influence of treated agents on their friends? Results: How does social influence decline with distance?
Methodology • We build a simple model to infer the “interaction probability” between a treated agent and any of her social neighbors. • During an interaction the treated agent’s knowledge is transferred to the neighbor. • Interaction probabilities vary by social distance. • Our model has the advantage that it can be easily estimated and that it can deal with treatment “overlaps”.
Methodology Interaction probabilities are a convenient way to measure influence. Example: Assume that an agent has 10 direct friends and 60 indirect friends, and that the interaction probabilities are p1 = 0.1 for direct friends and p2 = 0.05 for indirect friends. Then on average the agent transfers knowledge to 1 direct friend and 3 indirect friends. In this example the agent affects knowledge in the network mainly by influencing indirect friends rather than direct friends, because the interaction probability decreases less strongly than the network grows.
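A minimal sketch of this back-of-the-envelope calculation in Python (the probability values are the ones implied by the example's arithmetic):

```python
# Expected number of neighbors a treated agent transfers knowledge to,
# when interaction probabilities decay with social distance.
n_direct, n_indirect = 10, 60        # neighbors at social distance 1 and 2
p_direct, p_indirect = 0.10, 0.05    # interaction probabilities from the example

print(n_direct * p_direct)      # 1.0 expected direct friend informed
print(n_indirect * p_indirect)  # 3.0 expected indirect friends informed
# The probability only halves while the neighborhood grows sixfold,
# so most of the knowledge transfer happens at distance 2.
```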
Basic Design: Stage 1: Measure Social Network Stage 2: Baseline Survey Stage 3: Distribute Products Stage 4: Track Social Learning
Measuring the Network • Rather than surveys, agents play in a trivia game • Leveraged popularity of www.thefacebook.com • Membership rate at Harvard College over 90% * • 95% weekly return rate * * Data provided by the founders of thefacebook.com
[Screenshot: a thefacebook.com profile page ("Markus"), with callouts marking his profile, the ad space, and his friends list]
Trivia Game: Recruitment • On login, each Harvard undergraduate member of thefacebook.com saw an invitation to play in the trivia game. • Subjects agree to an informed consent form – now we can email them! • Subjects list 10 friends about whom they want to answer trivia questions. • This list of 10 people is what we’re interested in (not their performance in the trivia game)
Trivia Game: Trivia Questions • Subjects list 10 friends – this creates 10*N possible pairings. • Every night, new pairs are randomly selected by the computer • Example: Suppose Markus listed Tanya as one of his 10 friends, and that this pairing gets picked.
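A minimal sketch of the nightly pairing draw, assuming each subject's 10 listed friends are stored in a dict (names and list lengths here are illustrative):

```python
import random

# Subjects' friend lists: N subjects x 10 names = 10*N possible pairings.
friend_lists = {
    "Markus": ["Tanya", "Paul"],   # 10 names per subject in the actual game
    "Tanya": ["Markus", "Paul"],
}

def nightly_draw(friend_lists, n_pairs, rng=random.Random()):
    """Randomly select (lister, listed friend) pairings for tonight's round."""
    all_pairs = [(lister, friend)
                 for lister, friends in friend_lists.items()
                 for friend in friends]
    return rng.sample(all_pairs, n_pairs)

# e.g. the pair ("Markus", "Tanya"): Tanya answers about herself,
# then Markus is asked the same question about Tanya.
print(nightly_draw(friend_lists, 2))
```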
Trivia Game Example • Tanya (subject) gets an email asking her to log in and answer a question about herself • Tanya logs in and answers, “which of the following kinds of music do you prefer?”
Trivia Game Example (cont.) • Once Tanya has answered, Markus gets an email inviting him to log in and answer a question about one of his friends. • After logging in, Markus has 20 seconds to answer “which of the following kinds of music does Tanya prefer?”
Trivia Game Example (cont.) • If Markus’ answer is correct, he and Tanya are entered together into a nightly drawing to win a prize.
Trivia Game: Summary • Subjects have incentives to list the 10 people they are most likely to be able to answer trivia questions about. • This is our (implicit) definition of a “friend” • Answers to trivia questions are unimportant • It is OK if people game the answers, as long as the people it is easiest to game with are the same as those they know best. • Roommates were disallowed • 20-second time limit to answer • On average subjects got 50% of the multiple-choice questions (4 or 5 answer options each) right – and many were easy
Recruitment • In addition to invitations on login, • Posters in all hallways • Workers in dining halls with laptops to step through signup • Personalized snail mail to all upper-class students • Article in The Crimson on first grand prize winner • Average acquisition cost per subject ~= $2.50
Participation • Consent: 2932 out of 6389 undergrads (46%), and 50% of upperclassmen • 10 friends: 2360 undergraduates (37%) • Participation by year of graduation:
Participation • By residential house (upperclassmen)
Network Data • 23,600 links from participants • 12,782 links between participants • 6,880 of these symmetric (3,440 coordinated friendships) • Similar to 2003 results • We construct the network using the “or” link definition (a link exists if either party named the other) • 5,576 out of 6,389 undergraduates (87%) participated or were named • One giant cluster • Average path length between participants = 4.2 • Clustering coefficient for participants = 17% • Lower than 2003 results – because many named friends are in different houses
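A minimal sketch of computing these statistics with the networkx library; the edge list below is a stand-in for the real data:

```python
import networkx as nx

# "Or" link definition: an undirected edge exists if either person
# named the other, so the directed name-lists collapse into one edge.
named = [("Markus", "Tanya"), ("Tanya", "Markus"), ("Paul", "Markus")]
G = nx.Graph()
G.add_edges_from(named)  # Graph() merges (u, v) and (v, u)

giant = G.subgraph(max(nx.connected_components(G), key=len))
print(nx.average_shortest_path_length(giant))  # 4.2 in the actual network
print(nx.average_clustering(G))                # 17% in the actual network
```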
Methods in Comparison • 2003 House Experiment in 2 undergraduate houses • Email data: Marmaros and Sacerdote (2004) • Mutual-friend methods with facebook data? (Glaeser, Laibson, Sacerdote 2000)
Goals of Baseline • We want to predict subjects’ valuations for our products without telling them which products we will distribute. • This allows us to test whether subjects with higher valuations are more influenced. • We treat a product as a vector of attributes that span a space containing the specific product.
Choice of Products • We want new products that maximize the potential for social learning. • We want some products where subjects have to talk to exchange information (such as a newspaper subscription) and some products whose use is conspicuous (such as a cell phone).
“Public Products” T-Mobile Sidekick II Philips Key019 Digital Camcorder Philips ShoqBox
“Private Products” Student Advantage Discount Card (1 year) Baptiste Studios Yoga Vouchers (5) Qdoba Meal Vouchers (5)
Configurators • We identified 5 or 6 salient features for each of the six products. • For example, a product might be a general type of discount card for students. • Particular features of the card could be: (i) provides a discount on textbooks; (ii) provides a discount on Amtrak/Greyhound; etc. • We elicit a baseline valuation from subjects plus a valuation for each feature (this assumes additive separability of valuations over features).
[Screenshot: configurator survey page, with callouts marking the feature descriptions, the feature bids, and the baseline bid]
Constructed Bids • We constructed an implicit bid from subjects’ responses: Bid = Baseline Value + Sum of Feature Values (over the features the product actually includes) • Subjects were told that they could submit a second bid in the followup survey and that either this bid or the followup bid would be entered with equal probability into a uniform-price auction. This gives subjects incentives for truth-telling.
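A minimal sketch of the bid construction, assuming per-feature valuations are stored in a dict (the feature names and numbers are hypothetical):

```python
def implicit_bid(baseline_value, feature_values, product_features):
    """Bid = baseline value + sum of feature valuations, summed only
    over the features the specific product actually includes."""
    return baseline_value + sum(feature_values[f] for f in product_features)

# Hypothetical discount-card configuration:
feature_values = {"textbooks": 8.0, "amtrak_greyhound": 5.0, "movies": 3.0}
print(implicit_bid(20.0, feature_values, ["textbooks", "movies"]))  # 31.0
```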
[Slide shows the six products with their market prices: $20, $50, $35, $150, $150, $250]
Distributions of Imputed Bids • Imputed valuations look sensible. • In each case the market price lies between the median bid and the upper tail of the bid distribution.
Randomized Product Trials • Private products • 1-year Student Advantage cards • 5 yoga vouchers • 5 meal vouchers • Public products • Tried out for approximately 4 weeks toward the end of term
Randomization • Only subjects with imputed bids above the median were eligible. We then offered products to about 100 subjects for each product. • Blocked by year of graduation, gender, and residential house. • Email invitations to come pick up samples • Invitation times were staggered (April 26th – May 3rd) to vary the strength of exposure
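A minimal sketch of blocked random assignment, assuming eligible subjects are records with the three blocking attributes (field names are hypothetical):

```python
import random
from itertools import groupby

def blocked_assignment(subjects, treat_share=0.5, seed=0):
    """Randomize product offers within blocks defined by graduation
    year, gender, and residential house."""
    rng = random.Random(seed)
    block_key = lambda s: (s["year"], s["gender"], s["house"])
    treated = []
    for _, members in groupby(sorted(subjects, key=block_key), key=block_key):
        members = list(members)
        rng.shuffle(members)
        treated += members[: round(len(members) * treat_share)]
    return treated

subjects = [{"year": 2007, "gender": "F", "house": "Adams"},
            {"year": 2007, "gender": "F", "house": "Adams"},
            {"year": 2008, "gender": "M", "house": "Lowell"}]
print(blocked_assignment(subjects))
```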
Response Rates Overall: 57%
Info Treatments • Varied information communicated verbally by workers doing distribution • Information treatments correspond to product features in our configurators (5 or 6 features for each product). • Reinforced this information treatment with reminder emails • Each treatment given with 50% probability to each subject
Other Treatments • We also provided randomized online and print ads to subjects who did not receive products (not reported in this talk).
Followup Survey • We measure both subjective and objective knowledge of all subjects. Subjective Knowledge: stated probability that the subject can answer any Yes/No question correctly. Objective Knowledge: average number of Yes/No questions answered correctly in a subsequent quiz.
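A minimal sketch of how the two measures compare per subject (the data format is illustrative):

```python
def knowledge_measures(stated_prob, answers, truth):
    """Subjective: stated probability of answering a Yes/No question
    correctly. Objective: share of quiz questions actually correct."""
    objective = sum(a == t for a, t in zip(answers, truth)) / len(truth)
    return {"subjective": stated_prob,
            "objective": objective,
            "gap": round(stated_prob - objective, 3)}  # >0 means overconfident

print(knowledge_measures(0.80, ["Y", "N", "Y", "Y"], ["Y", "N", "N", "Y"]))
# {'subjective': 0.8, 'objective': 0.75, 'gap': 0.05}
```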
Eliciting Confidence Levels • Meet “Bob the Robot” and his clones Bob 1 – Bob 100 • Subjects are randomly paired with an (unknown) Bob • Subjects indicate a “cutoff Bob” at which they are indifferent about who should answer the question • If the assigned Bob is better than the cutoff, Bob answers the question; otherwise we use the subject’s answer • This is an incentive-compatible mechanism for eliciting the subject’s belief that he/she will get the question right
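A minimal sketch of why truthful reporting is optimal under this mechanism, under the assumption (implicit in the slide) that Bob k answers correctly with probability k%:

```python
import random

def p_correct(cutoff, true_p, rng=random.Random(0), trials=200_000):
    """Chance the question ends up answered correctly when the subject
    reports `cutoff` and truly gets questions right with prob true_p."""
    wins = 0
    for _ in range(trials):
        bob = rng.randint(1, 100)          # quality of the assigned Bob
        if bob > cutoff:                   # Bob beats the cutoff: Bob answers
            wins += rng.random() < bob / 100
        else:                              # otherwise the subject answers
            wins += rng.random() < true_p
    return wins / trials

# A subject who is 70% sure does best reporting a cutoff of 70:
for cutoff in (50, 70, 90):
    print(cutoff, round(p_correct(cutoff, 0.70), 3))
```

Keeping Bob only when his accuracy exceeds your own maximizes the chance of a correct answer, so the optimal cutoff equals the subject’s true belief.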