Using imprecise estimates for weights
Alan Jessop, Durham Business School
Motivation
In a weighted value function model, weights are inferred from judgements. Judgements are imprecise, so weight estimates must be imprecise too. Probabilistic weight estimates enable the usual inferential methods, such as confidence intervals, to be used to decide whether weights or alternatives may justifiably be differentiated.
Testing & sensitivity
Single parameter: results easily shown and understood, but partial.
Multi-parameter: examine pairs (or more) to get a feel for interaction.
Global (e.g. Monte Carlo): comprehensive, but results may be hard to show simply.
Using some familiar methods, uncertainty can be inferred from judgements and the effects of global imprecision can be shown: an analytical approach rather than a simulation.
Sources of imprecision
Statements made by the judge are inexact. This is imprecise articulation: variance = σa².
The same judgements may be made in different circumstances, using different methods or at different times, for instance. This is circumstantial imprecision: variance = σc².
Sources of imprecision
No redundancy: e.g. a simple rating.
Redundancy: e.g. asking at different times, or using a reciprocal matrix.
Are the two sources related?
3-point estimate: σa²
Beta distribution: μ = aM + (1-a)(L+H)/2, σa = b(H-L).
Previous studies for PERT analyses; generalise as a = 1.800×10⁻¹² c^5.751 and b = 1.066 - 0.00853c.
But because Σᵢ wᵢ = 1, the variances will be inconsistent. Solution: fit a Dirichlet distribution.
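A minimal numeric sketch of this step. It assumes c is the judge's confidence (in percent) that the true value lies in [L, H]; the slide does not define c, so the interpretation and the sample numbers are illustrative only.

```python
# Sketch: turn a 3-point judgement (L, M, H) into a mean and standard deviation
# using the regression formulas quoted on the slide.
# ASSUMPTION: c is the judge's confidence (%) that the true value lies in [L, H].

def three_point_moments(low, mode, high, c=90.0):
    a = 1.800e-12 * c ** 5.751        # weight given to the mode
    b = 1.066 - 0.00853 * c           # fraction of the range used as sigma_a
    mean = a * mode + (1 - a) * (low + high) / 2
    sigma_a = b * (high - low)
    return mean, sigma_a

print(three_point_moments(0.10, 0.20, 0.35))
```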
Dirichlet
f(W) = k ∏ᵢ wᵢ^(uᵢ-1); 0 < wᵢ < 1, Σᵢ wᵢ = 1, uᵢ > 0 for all i, where k = Γ(Σᵢ uᵢ) / ∏ᵢ Γ(uᵢ)
which has Beta marginal distributions with
mean μᵢ = uᵢ / v
variance σᵢ² = uᵢ(v-uᵢ) / v²(v+1) = μᵢ(1-μᵢ) / (v+1)
and covariance σᵢⱼ = -uᵢuⱼ / v²(v+1) = -μᵢμⱼ / (v+1); i ≠ j
where v = Σᵢ uᵢ.
Relative values of the parameters uᵢ determine the means. Absolute values determine the variances via their sum, v.
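These marginal formulas are easy to check numerically. Below is a short sketch (the parameter values are made up for illustration) comparing the analytic moments with a Monte Carlo sample:

```python
# Sketch: Dirichlet marginal moments from parameters u_i, checked against a sample.
import numpy as np

u = np.array([4.0, 3.0, 2.0, 1.0])            # hypothetical Dirichlet parameters
v = u.sum()

mean = u / v                                   # mu_i = u_i / v
var = mean * (1 - mean) / (v + 1)              # sigma_i^2 = mu_i(1-mu_i)/(v+1)
cov_01 = -mean[0] * mean[1] / (v + 1)          # sigma_ij = -mu_i mu_j/(v+1)

sample = np.random.default_rng(0).dirichlet(u, size=100_000)
print(mean, sample.mean(axis=0))               # analytic vs simulated means
print(var, sample.var(axis=0))                 # analytic vs simulated variances
print(cov_01, np.cov(sample[:, 0], sample[:, 1])[0, 1])
```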
The Dirichlet is usually used by specifying its parameters (e.g. in Monte Carlo simulation): set parameters → Dirichlet → weight values.
But it can also be used to ensure compatibility: judgements → marginal characteristics, mean eᵢ and variance sᵢ² → Dirichlet → consistent variances σᵢ².
Put μᵢ = eᵢ. Then get the least-squares best fit by minimising S = Σᵢ (σᵢ² - sᵢ²)².
∂S/∂v = 0 → v+1 = Σᵢ [eᵢ(1-eᵢ)]² / Σᵢ eᵢ(1-eᵢ)sᵢ²
so σᵢ² = eᵢ(1-eᵢ) / (v+1).
The sums run over the available estimates sᵢ², so missing values can be tolerated. (Note: only the mean values and v have to be known.)
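A sketch of this fit with hypothetical judged means and variances (one variance deliberately missing, as the slide allows):

```python
# Sketch of the least-squares fit: choose v so that the Dirichlet variances
# e_i(1-e_i)/(v+1) best match the judged variances s_i^2.
import numpy as np

e = np.array([0.30, 0.25, 0.20, 0.15, 0.10])          # judged means, summing to 1
s2 = np.array([0.004, 0.003, np.nan, 0.002, 0.001])   # judged variances (one missing)

t = e * (1 - e)
ok = ~np.isnan(s2)                                    # tolerate missing values
v_plus_1 = (t[ok] ** 2).sum() / (t[ok] * s2[ok]).sum()

consistent_var = t / v_plus_1                         # sigma_i^2 = e_i(1-e_i)/(v+1)
print(v_plus_1, consistent_var)
```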
Experiment: FT full-time MBA ranking. Seven variables were used in the experiment.
Experiment: 3-point estimate → σa². Each 3-point judgement is scaled, from which a mean and standard deviation are obtained and made Dirichlet-consistent; a missing value is tolerated.
Summarising discrimination between programmes
y = Σᵢ wᵢxᵢ
var(y) = ΣᵢΣⱼ σᵢⱼxᵢxⱼ = [ Σᵢ wᵢ(1-wᵢ)xᵢ² - 2 Σᵢ Σⱼ>ᵢ wᵢwⱼxᵢxⱼ ] / (v+1)
For two alternatives, replace the x values with the differences (xa - xb).
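A self-contained sketch of these two formulas; the weights, scores and (v+1) below are hypothetical:

```python
# Sketch: variance of a weighted score, and a z statistic for the difference
# between two alternatives, under the fitted Dirichlet covariance structure.
import numpy as np

def score_variance(w, x, v_plus_1):
    """var(y) for y = sum_i w_i x_i."""
    quad = np.sum(w * (1 - w) * x ** 2)
    cross = sum(w[i] * w[j] * x[i] * x[j]
                for i in range(len(w)) for j in range(i + 1, len(w)))
    return (quad - 2 * cross) / v_plus_1

w = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
xa = np.array([0.9, 0.4, 0.7, 0.6, 0.8])     # scores for alternative a
xb = np.array([0.6, 0.5, 0.9, 0.4, 0.7])     # scores for alternative b
v_plus_1 = 20.0

d = xa - xb                                  # replace x with differences
diff = np.dot(w, d)
z = diff / np.sqrt(score_variance(w, d, v_plus_1))
print(diff, z)
```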
Northern Europe: UK, France, Belgium, Netherlands, Ireland
Summarising discrimination between programmes
(v+1) = 351.77. Proportion of all pairwise differences significantly different at p = 0.1: discrimination = 81%.
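The discrimination figure is simply the share of programme pairs whose weighted score difference is significant. A sketch of the mechanics only: the scores are random, scipy's normal quantile is used, and p = 0.1 is read as two-sided; all of these are assumptions made for illustration, so the printed percentage will not reproduce the slide's 81%.

```python
# Sketch: discrimination = proportion of significant pairwise differences.
import numpy as np
from itertools import combinations
from scipy.stats import norm

rng = np.random.default_rng(1)
w = np.array([0.30, 0.25, 0.20, 0.15, 0.10])
X = rng.random((12, 5))                      # 12 hypothetical programmes, 5 criteria
v_plus_1 = 351.77                            # value quoted on the slide
z_crit = norm.ppf(1 - 0.1 / 2)               # two-sided p = 0.1

def z_for_pair(xa, xb):
    d = xa - xb
    var = (np.sum(w * (1 - w) * d ** 2)
           - 2 * sum(w[i] * w[j] * d[i] * d[j]
                     for i in range(len(w)) for j in range(i + 1, len(w)))) / v_plus_1
    return np.dot(w, d) / np.sqrt(var)

signif = [abs(z_for_pair(X[a], X[b])) > z_crit
          for a, b in combinations(range(len(X)), 2)]
print(f"discrimination = {np.mean(signif):.0%}")
```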
σa² and σc²: reciprocal matrix
Give each judgement in a reciprocal matrix as a 3-point evaluation. Then treat each column as a separate set of 3-point evaluations and find the Dirichlet-compatible σa² as before. For each weight, the mean of these variances is the value of σa², as in aggregating expert judgements (Clemen & Winkler, 2007). The mean of the column means is the weight, and the variance of the means is σc².
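Given column-wise means and Dirichlet-compatible variances (the numbers below are hypothetical), the combination step looks like this:

```python
# Sketch: combining column-wise estimates from a reciprocal matrix.
# m[i, j] = mean for weight i from column j; s2[i, j] = its variance.
import numpy as np

m = np.array([[0.31, 0.28, 0.33],      # column-wise means for weight 1
              [0.24, 0.27, 0.22],      # ... weight 2
              [0.45, 0.45, 0.45]])     # ... weight 3
s2 = np.full_like(m, 0.002)            # column-wise variances (made up)

weight = m.mean(axis=1)                # mean of the column means
sigma_a2 = s2.mean(axis=1)             # articulation: mean of the variances
sigma_c2 = m.var(axis=1, ddof=1)       # circumstantial: variance of the means
sigma = np.sqrt(sigma_a2 + sigma_c2)   # combined standard deviation (next slide)
print(weight, sigma)
```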
Results from 10 MBA students: standard deviations σ = (σa² + σc²)^½.
Are the two sources of uncertainty related? Consistently σc > σa; mean r = 0.70; taken together, r = 0.33.
Lines show indistinguishable pairs at p = 50%. Suppose we decide that programmes 1 & 5 should be distinguishable.
A possible form of interaction
Assume that the new discrimination is due to increased precision rather than a difference in scores: statistical significance rather than material significance. So change precision by changing (v+1) and leave the weights unaltered. z is directly proportional to √(v+1).
In this case (v+1) = 8.54 → z₁,₅ = 0.55, and p = 50% → z* = 0.67, so
(v+1)new = (z*/z₁,₅)² × (v+1) = (0.67 / 0.55)² × 8.54 = 12.67
and so ...
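The arithmetic of that what-if step, as a tiny sketch using the slide's numbers:

```python
# Sketch: how much larger must (v+1) be before a pair becomes distinguishable
# at the chosen confidence, with the weights held fixed? Since z grows with
# sqrt(v+1), scale (v+1) by (z*/z)^2.
z_current = 0.55      # z for programmes 1 and 5
z_target = 0.67       # critical z for p = 50%
v_plus_1 = 8.54

v_plus_1_new = (z_target / z_current) ** 2 * v_plus_1
print(round(v_plus_1_new, 2))   # about 12.67, matching the slide
```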
Group aggregation
Do this for all ten assessors.
Tentative conclusions
Even though results are imprecise, there may still be enough discrimination to be useful, as in forming a short list. May give an ordering of clusters.
Makes explicit what may justifiably be discriminated.
Choosing confidence levels and significance values is, as ever, sensible but arbitrary. Explore different values.
Once a short list is identified, further analysis is needed, probably using some form of what-if interaction to see the effect of greater precision.
Variation between circumstances seems to be consistently greater than self-assessed uncertainty. Does this matter? Do we want to justify one decision now, or address circumstantial (temporal?) variation?