Ranking and Rating Data in Joint RP/SP Estimation by JD Hunt, University of Calgary M Zhong, University of Calgary PROCESSUS Second International Colloquium Toronto ON, Canada June 2005
Overview • Introduction • Context • Motivations • Definitions • Revealed Preference Choice • Stated Preference Rankings • Revealed Preference Ratings • Stated Preference Ratings • Estimation Testbed • Concept • Synthetic Data Generation • Results – so far • Conclusions – so far
Introduction • Context • A common task is to estimate logit model utility functions for mode alternatives that do not yet exist • Joint RP/SP estimation is available • Good for sensitivity coefficients • Problems with alternative specific constants (ASCs) • Motivation • Improve the treatment of ASCs • Seeking to expand on joint RP/SP estimation • Add rating information • 0 to 10 scores • Direct utility • Increase understanding of issues regarding ASCs generally
Definitions • Revealed Preference Choice • Stated Preference Ranking • Revealed Preference Ratings • Stated Preference Ratings • Linear-in-parameters logit utility function Um = Σk αm,k xm,k + βm where αm,k is a sensitivity coefficient and βm is the alternative specific constant (ASC)
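As a point of reference, a minimal sketch of this linear-in-parameters form (the attribute and parameter values below are illustrative placeholders, not values from the study):

```python
import numpy as np

def utility(x_m, alpha_m, beta_m):
    """Linear-in-parameters utility: U_m = sum_k alpha_m,k * x_m,k + beta_m."""
    return float(np.dot(alpha_m, x_m) + beta_m)

x = np.array([10.0, 2.5])          # placeholder attribute values (e.g. time, cost)
alpha = np.array([-0.08, -0.30])   # placeholder sensitivity coefficients
beta = 0.6                         # placeholder alternative specific constant (ASC)
print(utility(x, alpha, beta))     # -0.8 - 0.75 + 0.6 = -0.95
```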
Revealed Preference Choice • Actual behaviour • Choice of the best alternative from those existing • Attribute values determined separately • Indirect utility measure – observe outcome Umr = λr [ Σk αm,k xm,k + βm ] + βmr
Revealed Preference Choice • Disaggregate estimation provides Umr = Σk α’m,kr xm,k + β’mr with α’m,kr = λr αm,k β’mr = λr βm + βmr
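Because the estimated sensitivities are the true ones multiplied by λr, the true αm,k can be backed out when λr is known (as it is in a synthetic testbed). A minimal sketch with placeholder numbers; note that the constant β’mr = λr βm + βmr cannot be separated the same way, which is the ASC problem motivating this work:

```python
lam_r = 0.534                                   # assumed known scale factor lambda^r
alpha_est = {"time": -0.0427, "cost": -0.160}   # placeholder estimated alpha'_{m,k}^r
alpha_true = {k: v / lam_r for k, v in alpha_est.items()}
print(alpha_true)   # ~ {'time': -0.080, 'cost': -0.300}, i.e. alpha_{m,k}
# beta'_m^r = lambda^r * beta_m + beta_m^r confounds two constants and
# cannot be split without further information.
```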
Stated Preference Ranking • Stated behaviour • Ranking alternatives from presented set • Attribute values indicated • Indirect utility measure – observe outcome Ums = λs [ Σk αm,k xm,k + βm ] + βms
Stated Preference Ranking • Disaggregate (exploded) estimation provides Ums = Σk α’m,ks xm,k + β’ms with α’m,ks = λs αm,k β’ms = λs βm + βms
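The 'exploded' treatment can be illustrated with a small sketch (an assumed data layout, not the authors' code): each ranking of M alternatives is expanded into M − 1 successive best-of-remaining choices, which are then estimated as ordinary logit choice observations.

```python
def explode_ranking(ranking):
    """Expand one ranked observation into successive choice observations.

    ranking: alternative ids listed best first, e.g. [3, 1, 4, 2].
    Returns (chosen, choice_set) pairs for standard logit estimation.
    """
    exploded = []
    remaining = list(ranking)
    while len(remaining) > 1:
        exploded.append((remaining[0], tuple(remaining)))  # best of those remaining
        remaining = remaining[1:]
    return exploded

print(explode_ranking([3, 1, 4, 2]))
# [(3, (3, 1, 4, 2)), (1, (1, 4, 2)), (4, (4, 2))]
```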
Revealed Preference Ratings • Stated values for selected and perhaps also unselected alternatives • Providing 0 to 10 score with associated descriptors 10 = excellent; 5 = reasonable; 0 = terrible • Attribute values determined separately • Direct utility measure (scaled?) Rmg = θg [ Σk αm,k xm,k + βm ] + βmg
Revealed Preference Ratings • Regression estimation provides Rmg = Σk α’m,kg xm,k + β’mg with α’m,kg = θg αm,k β’mg = θg βm + βmg
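A minimal sketch of this regression step under the direct-utility interpretation (numpy only; all values are synthetic placeholders, and clipping of ratings to the 0 to 10 range is ignored): regressing the ratings on the attributes returns the scaled sensitivities θg αm,k, while the fitted constant mixes θg βm with βmg.

```python
import numpy as np

rng = np.random.default_rng(1)

alpha_true = np.array([-0.08, -0.30])        # placeholder true sensitivities
beta_true, theta_g, beta_g = 0.6, 0.9, 5.0   # placeholder ASC, ratings scale and shift

n = 2000
X = rng.normal([20.0, 3.0], [4.0, 1.0], size=(n, 2))      # attribute values
R = (theta_g * (X @ alpha_true + beta_true) + beta_g
     + rng.normal(0.0, 2.1, size=n))                      # rating "observations"

A = np.column_stack([X, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, R, rcond=None)
print(coef[:2] / theta_g)   # recovers alpha_true approximately
print(coef[2])              # ~ theta_g * beta_true + beta_g: constants are confounded
```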
Stated Preference Ratings • Stated values for each of set of alternatives • Providing 0 to 10 score with associated descriptors 10 = excellent; 5 = reasonable; 0 = terrible • Attribute values indicated • Provides verification of rankings • Direct utility measure (scaled?) Rmh = θh [ Σk αm,k xm,k + βm ] + βmh
Stated Preference Ratings • Regression estimation provides Rmh = Σk α’m,kh xm,k + β’mh with α’m,kh = θh αm,k β’mh = θh βm + βmh
Estimation Testbed • Specify true parameter values (αm,k and βm) • Generate synthetic observations • Assume attribute values and error distributions • Sample to get specific error values • Calculate utility values using attribute values, true parameter values and error values • Develop RP choice observations and SP ranking observations using the utility values • Develop RP ratings observations and SP ratings observations by scaling the utility values to fit within the 0 to 10 range • Test estimation techniques in terms of how well they recover the true parameter values
True Utility Function Um = Σk αm,k xm,k + βm + em
Attribute Values • Sampled from N(μm,k , σm,k), with μm,k and σm,k specified separately for each attribute k and alternative m
Error Values • Sampled from N(μ= 0 , σm ) • σm varies by observation type: • RP Choice: σm = σrm = 2.4 • SP Rankings: σm = σsm = 1.5 • RP Ratings: σm = σgm = 2.1 • SP Ratings: σm = σhm = 1.8
Generated Synthetic Samples • Each of the 4 observation types • 7 alternatives for each observation (m = 1…7) • A set of 15,000 observations of each type • Sometimes considered subsets of alternatives, with overlap across observation types, as indicated below
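A compact sketch of this kind of generation procedure (numpy; the attribute means, spreads and parameter values below are placeholders rather than the authors' settings):

```python
import numpy as np

rng = np.random.default_rng(42)

M, K, N = 7, 2, 15_000                       # alternatives, attributes, observations
alpha = np.array([[-0.08, -0.30]] * M)       # placeholder true sensitivities
beta = np.linspace(-1.5, 1.5, M)             # placeholder true ASCs
mu, sd = np.array([20.0, 3.0]), np.array([4.0, 1.0])   # attribute distributions

sigma = {"rp_choice": 2.4, "sp_rank": 1.5, "rp_rate": 2.1, "sp_rate": 1.8}

def synth_utilities(sigma_m):
    x = rng.normal(mu, sd, size=(N, M, K))               # attribute values
    e = rng.normal(0.0, sigma_m, size=(N, M))            # error draws
    return (x * alpha).sum(axis=2) + beta + e            # true utility + error

u_rp = synth_utilities(sigma["rp_choice"])
rp_choice = u_rp.argmax(axis=1)                          # RP choice observations

u_sp = synth_utilities(sigma["sp_rank"])
sp_ranking = np.argsort(-u_sp, axis=1)                   # SP rankings, best first

u_rate = synth_utilities(sigma["rp_rate"])
lo, hi = u_rate.min(), u_rate.max()
rp_ratings = np.round(10 * (u_rate - lo) / (hi - lo))    # scaled into the 0..10 range
```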
Testbed Estimations • RP Choice • SP Rankings • Joint RP/SP Data • Ratings • Combined RP/SP Data and Ratings
RP Choice • Used ALOGIT software • Set β’m=1r = 0 to avoid over-specification • Provides: • α’m,kr = λr αm,k • β’mr = λr βm + βmr • Know that λr = π / ( √6 σrm ) = 0.534
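For reference, a two-line check of the stated scale factors, using λ = π / (√6 σ) and the error standard deviations given earlier:

```python
import math

for label, sigma in [("RP choice", 2.4), ("SP rankings", 1.5)]:
    lam = math.pi / (math.sqrt(6.0) * sigma)
    print(label, round(lam, 3))   # RP choice 0.534, SP rankings 0.855
```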
ρ²0 = 0.1834  ρ²c = 0.6982
RP Choice Selection frequencies and ASC estimates
RP Choice 2 Selection frequencies and ASC estimates
ρ²0 = 0.3151  ρ²c = 0.1683
RP Choice 3 Selection frequencies and ASC estimates
ρ²0 = 0.1852  ρ²c = 0.1736
SP Rankings • Used ALOGIT software • Set β’m=1s = 0 to avoid over-specification • Provides: • α’m,ks = λs αm,k • β’ms = λs βm + βms • Know that λs = π / ( √6 σsm ) = 0.855
SP Rankings • More information with a full ranking • Also confirms against the RP results above • ‘ranking version’ of the estimation available – estimate using the full ranking
RP Rankings Estimates vs True Values with 15,000 Observations [scatter plot: estimated vs observed parameter values]
SP vs RP Rankings • ASC estimates translated (shifted) en bloc to some extent
SP Rankings: Role of σm,k • Impact of changing σm,k used when synthesizing attribute values • Sampling from N(μm,k ,σm,k) • Different σm,k means different spreads on attribute values • Impacts relative size of σsm • Implications for SP survey design
SP Rankings: Role of σm,k • Increasing σm,k improves estimators • Roughly proportional • Ratio of βm to αm,k maintained • Use 1.00 · αm,k in remaining work here • Implications for SP survey design • More variation in attribute values is better
Joint RP/SP Data • Two basic approaches for αm,k • Sequential (Hensher): First estimate α’m,ks using the SP observations; then estimate α’m,kr using the RP observations, forcing the ratios among the α’m,kr to match those obtained for the α’m,ks • Simultaneous (Ben-Akiva; Morikawa; Daly; Bradley): Estimate α’m,kr using the RP observations and the SP observations together with the scale ratio (λs/λr), where (λs/λr) α’m,kr is used in place of α’m,ks • Little consensus on the approach for βm
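A rough sketch of the simultaneous idea (scipy on synthetic logit data; ASCs are omitted for brevity, and this is not the ALOGIT procedure or the authors' code): one shared set of sensitivities enters both data sets, with the SP utilities multiplied by a relative scale μ = λs/λr that is estimated alongside them from the pooled log-likelihood.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

rng = np.random.default_rng(7)
N, M, K = 2000, 4, 2
alpha_true = np.array([-0.08, -0.30])        # placeholder true sensitivities

def make_choices(scale):
    """Draw logit choices from utilities scale * (x . alpha_true)."""
    x = rng.normal([20.0, 3.0], [4.0, 1.0], size=(N, M, K))
    v = scale * (x @ alpha_true)
    y = np.array([rng.choice(M, p=np.exp(v[i] - logsumexp(v[i]))) for i in range(N)])
    return x, y

x_rp, y_rp = make_choices(scale=0.534)       # RP data at scale lambda^r
x_sp, y_sp = make_choices(scale=0.855)       # SP data at scale lambda^s

def negll(theta):
    alpha, mu = theta[:K], theta[K]          # shared alpha', relative SP scale mu
    v_rp = x_rp @ alpha
    v_sp = mu * (x_sp @ alpha)
    ll = sum(np.sum(v[np.arange(N), y] - logsumexp(v, axis=1))
             for v, y in ((v_rp, y_rp), (v_sp, y_sp)))
    return -ll

res = minimize(negll, x0=np.array([-0.01, -0.01, 1.0]), method="BFGS")
print(res.x)   # ~ [lambda^r * alpha_true, lambda^s / lambda^r] = [-0.043, -0.16, 1.6]
```

The same rescaling is commonly implemented in nested-logit software by placing the SP alternatives under an artificial scaling nest, which is how the simultaneous approach is usually set up in practice.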
Joint RP/SP Data • Used ALOGIT software • Set β’m=1s = 0 and β’m=1r = 0 to avoid over-specification • Provides: • α’m,ks = λs αm,k α’m,kr = λr αm,k • β’ms = λs βm + βms β’mr = λr βm + βmr • λr/λs • Know that λr = 0.855 and λs = 1.166
Joint RP/SP Ranking Estimation for the Full Set of RP and SP 15,000 Observations (7 Alternatives for each) [scatter plot: estimated vs observed parameter values]
Joint RP/SP Ranking Estimation with 15,000 RP Observations for Alternatives 1-4 and 15,000 SP Observations for Alternatives 4-7 [scatter plot: estimated vs observed parameter values]
RP Ratings • Two potential interpretations of ratings • Value provided is a (scaled?) direct utility • Value provided is 10x probability of selection • Issue of reference • ‘excellent’ in terms of other people’s travel • ‘excellent’ relative to other alternatives for respondent specifically • Related to interpretation above • Here: Use direct utility interpretation and thus reference is in terms of other people’s travel
RP Ratings • Used MINITAB MLE • Provides: • α’m,kg = θg αm,k • β’mg = θg βm + βmg
Estimation of Plotted RP Ratings Values • θg is found by minimizing the sum of squared errors between the estimated sensitivities α’m,kg and the true values αm,k scaled by θg (i.e., θg αm,k) • The estimated values for βm are then found as (β’mg - βmg)/θg, using the value of θg determined above
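One way to read that fitting step as a small sketch (numpy; all values are placeholders, and βmg is assumed known and set to zero here): the least-squares θg has a closed form, after which the ASCs are backed out.

```python
import numpy as np

alpha_true = np.array([-0.08, -0.30, 0.05])     # true sensitivities (placeholders)
alpha_hat = np.array([-0.070, -0.275, 0.047])   # estimated theta_g * alpha (placeholders)
beta_hat = np.array([0.40, 1.10, -0.25])        # estimated beta'_m^g (placeholders)
beta_g = 0.0                                    # ratings-specific constant (assumed)

# minimize sum_k (alpha_hat_k - theta_g * alpha_true_k)^2  ->  closed-form theta_g
theta_g = (alpha_true @ alpha_hat) / (alpha_true @ alpha_true)

beta_est = (beta_hat - beta_g) / theta_g        # back out the ASCs
print(round(theta_g, 3), beta_est)              # theta_g ~ 0.915 here
```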
SP Ratings • Used MINITAB • Provides: • α’m,kh = θh αm,k • β’mh = θh βm + βmh