These slides comment on a paper that develops a model of a two-sided market, focusing on specialized search engines. The paper's modeling framework supports simulations of how platform changes affect consumer behavior, advertiser strategies, and revenue. The comments assess the paper's four simulations (bidding by customer segment, providing advertisers with more information, alternative web designs, and alternative auction mechanisms), where each result comes from, and what more could be done.
Comments on Yao & Mela
Avi Goldfarb, University of Toronto
NET Institute, May 8, 2009
Overview
• They develop a model of a two-sided market in which consumers and advertisers both participate at a particular shopbot/search engine
• Modeling highlights:
  • Endogenize consumers' choice to use search tools such as sorting and filtering of results
  • Estimate latent segments of customer preferences
  • Allow advertisers to anticipate the effects of today's bids on tomorrow's ranking
• They estimate this model using data from what used to be called a "shopbot" or "comparison shopping site"
  • After the success of Google, such sites have started to call themselves "specialized search engines"
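The latent-segment idea can be illustrated with a toy Bayes-rule calculation: given a consumer's click history, compute the posterior probability of each unobserved segment. The segment shares and click rates below are hypothetical, not estimates from the paper.

```python
# hypothetical two-segment model of latent customer types
seg_prior = [0.7, 0.3]        # population shares of the latent segments
click_prob = [0.05, 0.25]     # per-impression click rate within each segment

def posterior(clicks, impressions):
    """Bayes-rule posterior over latent segments given a click history."""
    # binomial likelihood of the observed clicks under each segment's rate
    like = [p**clicks * (1 - p)**(impressions - clicks) for p in click_prob]
    joint = [pr * lk for pr, lk in zip(seg_prior, like)]
    total = sum(joint)
    return [j / total for j in joint]

# a consumer who clicks 3 times in 10 impressions looks like segment 2
print(posterior(3, 10))
```

In the paper the segments are estimated jointly with the rest of the model; this sketch only shows how click data can separate the types once segment-level parameters exist.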
What is their modeling for? Simulations
• The estimates allow them to examine what happens when the platform changes its operating policies
• I'd like the simulations to be more prominent and detailed
• I'd also like to know more about where the results come from
  • The modeling framework is rich and described in detail
  • It is hard to get a good sense of the data:
    • Descriptive statistics are sparse
    • The consumer-level log files are especially opaque
    • Before showing segment-by-segment results, aggregate results would be helpful
• In what follows, I examine each of their four simulations and assess where the data come in and what more could be done
1. Bidding by customer segment
• Should advertisers be able to bid by customer type rather than just by keyword?
  • The conflict: better targeting vs. too little bidding competition
• This is the strongest of the simulations
  • It clearly requires both the model and the data
• It is clear why all the various machinery is needed:
  • Latent-class segmentation to enable targeting
  • Endogenous search tools to understand segment differences in advertising responsiveness
  • Dynamics to develop strategic targeting by segment
• The data allow model calibration and identification of the segments
• I'd like to see this explored more deeply: what is the optimal level of segmentation?
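The targeting gain from segment-level bidding lends itself to a quick Monte Carlo sketch. Nothing here comes from the paper's estimates: advertiser values are drawn i.i.d. uniform, segments have equal shares, and both formats are sealed-bid second-price auctions with truthful bidding. Note that this sketch holds the set of bidders fixed, so it captures the targeting side of the tradeoff but not the competition-thinning cost.

```python
import random

random.seed(0)

def second_price(bids):
    """Revenue of a sealed-bid second-price auction with truthful bids."""
    top2 = sorted(bids, reverse=True)[:2]
    return top2[1] if len(top2) > 1 else 0.0

def simulate(n_advertisers=8, n_segments=2, n_draws=5000):
    kw_rev = seg_rev = 0.0
    for _ in range(n_draws):
        # each advertiser's true value per customer segment, U(0, 1)
        vals = [[random.random() for _ in range(n_segments)]
                for _ in range(n_advertisers)]
        # keyword-level auction: one bid equal to the average segment value
        kw_rev += second_price([sum(v) / n_segments for v in vals])
        # segment-level auctions: one auction per segment, equal segment shares
        seg_rev += sum(second_price([v[s] for v in vals])
                       for s in range(n_segments)) / n_segments
    return kw_rev / n_draws, seg_rev / n_draws

kw, seg = simulate()
print(f"keyword-level revenue per impression: {kw:.3f}")
print(f"segment-level revenue per impression: {seg:.3f}")
```

An "optimal level of segmentation" exercise would vary `n_segments` (and, more realistically, let participation respond), tracing out where finer targeting stops paying for thinner auctions.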
2. Providing advertisers with more information
• Would the platform gain by giving advertisers more information about consumer clicks?
  • This relates to a general trend of putting more information in advertisers' hands (e.g., Google Analytics)
• Intuitively, it seems like more information should help
• Their result instead suggests that aggregated information is enough for advertisers to make good decisions
  • The key assumptions are that advertisers cannot leverage consumer demographic information and that bidding is at the keyword/product level, not the segment level
• Answering this question requires the bidding model, but the result is relatively data-driven:
  • Observed bids appear to be efficient irrespective of customer heterogeneity
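The logic behind the sufficiency result can be sketched in a few lines: if the bid cannot vary by segment, expected profit depends on segment-level data only through the click-weighted average value per click, which aggregated reporting already delivers. All numbers below are made up for illustration.

```python
# hypothetical segment-level click shares and advertiser values per click
shares = [0.6, 0.4]      # fraction of the keyword's clicks from each segment
values = [0.80, 0.30]    # advertiser value per click, by segment

def profit_per_click(price):
    """Expected profit per click at a single keyword-level price paid."""
    # with one bid covering all segments, segment detail enters only
    # through the click-weighted mean value
    return sum(s * v for s, v in zip(shares, values)) - price

avg_value = sum(s * v for s, v in zip(shares, values))
print(f"aggregate value per click: {avg_value:.2f}")
print(f"profit at price 0.50:      {profit_per_click(0.50):.2f}")
```

Disaggregated reports would change behavior only if the advertiser could condition the bid on the segment, which is exactly what the paper's assumption rules out.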
3. Alternative web designs (sorting/filtering)
• Key assumption: a well-specified outside option
  • If the marginal benefit of the outside option is too high, the effect is overestimated
  • If the marginal benefit of the outside option is too low (more likely, I think), the effect is underestimated
• They use "visit the website but do not search in the category of interest" as the outside option, but
  • the real outside option is "want to find something in the category of interest but choose an alternative route"
• Given data constraints, that might not be feasible
  • But then it is hard to say much about the costs of eliminating the sorting/filtering functions
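The outside-option concern can be made concrete with a binary logit sketch. The utilities below are invented, chosen only to show the direction of the bias: the more attractive the assumed outside option, the larger the predicted share lost when sorting/filtering is removed.

```python
import math

def inside_share(u_inside, u_outside):
    """Binary logit probability of searching on the site vs. leaving."""
    return math.exp(u_inside) / (math.exp(u_inside) + math.exp(u_outside))

u_with_tools, u_without_tools = 1.0, 0.5   # hypothetical site utilities
drops = []
for u_out in (-1.0, 0.0, 1.0):             # assumed outside-option utility
    drop = inside_share(u_with_tools, u_out) - inside_share(u_without_tools, u_out)
    drops.append(drop)
    print(f"u_outside={u_out:+.1f}: predicted share lost = {drop:.3f}")
# the predicted loss rises with the assumed value of the outside option
```

If the true outside option is "find the product by an alternative route" rather than "browse without searching," its utility is likely understated, which in this sketch means the cost of eliminating the tools is understated too.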
4. Alternative auction mechanisms
• They simulate a second-price auction and find revenue equivalence
• This is a direct consequence of the modeling assumptions
  • It does not seem to depend on the estimation or the data, since the authors assume rational bidding
• I'm not sure how the data help identify this result, and therefore I question the value of the simulation
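The point that equivalence is baked into the assumptions can be seen in a textbook Monte Carlo with i.i.d. uniform private values (no connection to the paper's data): second-price bidders bid truthfully, first-price bidders follow the symmetric equilibrium b(v) = (n-1)/n * v, and both mechanisms yield expected revenue (n-1)/(n+1).

```python
import random

random.seed(1)

n, draws = 5, 20000
fp_total = sp_total = 0.0
for _ in range(draws):
    vals = sorted(random.random() for _ in range(n))   # i.i.d. U(0,1) values
    sp_total += vals[-2]                  # second-price: pay 2nd-highest value
    fp_total += (n - 1) / n * vals[-1]    # first-price: winner bids (n-1)/n * v

fp_rev, sp_rev = fp_total / draws, sp_total / draws
print(f"first-price revenue:  {fp_rev:.3f}")
print(f"second-price revenue: {sp_rev:.3f}")   # both near (n-1)/(n+1) = 2/3
```

Once rational bidding and symmetric independent private values are assumed, equivalence follows regardless of what the data say, which is the heart of the identification concern.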
Summary
• Interesting, rich, ambitious paper
• Thorough and well-considered model
• I think even more can be done to leverage the model to understand optimal platform strategies
  • Refocus and extend the simulations
Minor Comments
• Slot rank enters linearly. Why?
  • The difference between slots 5 and 6 seems interesting and should be explored further
• Organic results matter
• "Featured store" is somewhat ambiguous, unlike the term "sponsored link"
• A 35% markup is big (p. 33)
• How big is this search engine/shopbot (how many customers)?
• The shopbot literature should be engaged
• There is an issue with extra variables appearing in stage 1 but not in the state space
• Descriptive statistics on the log files are needed