This presentation discusses the uncertainty surrounding modeled loss estimates, including models, confidence bands, data issues, inputs, and company approaches. It also explores the role of judgment in determining loss estimates.
UNCERTAINTY AROUND MODELED LOSS ESTIMATES CAS Annual Meeting New Orleans, LA November 10, 2003 Jonathan Hayes, ACAS, MAAA
Agenda • Models • Model Results • Confidence Bands • Data • Issues with Data • Issues with Inputs • Model Outputs • Company Approaches • Role of Judgment • Conclusions
Florida Hurricane Amounts in Millions USD
Types of Uncertainty (in Frequency & Severity) • Uncertainty (not randomness) • Sampling Error • 100 years of record for hurricanes • Specification Error • FCHLPM sample dataset (1996): 1-in-100 OEP of 31m, 38m, 40m & 57m across 4 models • Non-sampling Error • El Niño Southern Oscillation • Knowledge Uncertainty • Time dependence, cascading, aseismic shift, Poisson vs. negative binomial • Approximation Error • Res Re cat bond: 90% confidence interval, process risk only, of +/- 20%, per modeling firm Source: Major, Op. Cit.
Frequency-Severity Uncertainty: Frequency Uncertainty (Miller) • Frequency Uncertainty • Historical set: 96 years, 207 hurricanes • Sample mean is 2.16 • What is the range for the true mean? • Bootstrap method • New 96-yr sample sets: each sample set is 96 draws, with replacement, from the original • Review results
Frequency Bootstrapping • Run 500 resamplings and graph relative to theoretical t-distribution Source: Miller, Op. Cit.
Frequency Uncertainty Stats • Standard error (SE) of the mean: • 0.159 historical SE • 0.150 theoretical SE, assuming Poisson, i.e., (lambda/n)^0.5
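The bootstrap comparison on the preceding slides can be sketched as follows. This is a minimal illustration, not Miller's actual analysis: the 96 annual storm counts are simulated as a Poisson stand-in for the real historical record, which the original study used directly.

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for the 96-year historical record (207 storms, mean ~2.16);
# the real analysis resamples the actual annual counts.
years, total = 96, 207
counts = rng.poisson(total / years, size=years)

# Bootstrap: resample the 96 annual counts with replacement, 500 times,
# recording each resampled mean (as in the 500-resampling graph above).
boot_means = np.array([
    rng.choice(counts, size=years, replace=True).mean()
    for _ in range(500)
])

se_boot = boot_means.std(ddof=1)             # empirical SE of the mean
se_poisson = np.sqrt(counts.mean() / years)  # theoretical SE: (lambda/n)^0.5
print(f"bootstrap SE ~ {se_boot:.3f}, Poisson SE ~ {se_poisson:.3f}")
```

With the real data the two values land near the 0.159 (historical) and 0.150 (theoretical Poisson) figures quoted above.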
Hurricane Freq. Uncertainty: Back of the Envelope • Frequency uncertainty only • 96 years, 207 events, 3,100 coast miles • 200-mile hurricane damage diameter • 0.139 is the avg. annual # of storms striking a given site • SE = 0.038, assuming Poisson frequency • 90% CI is loss +/- 45% • i.e., (1.645 * 0.038) / 0.139
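The back-of-the-envelope arithmetic above works out as follows; every number is taken directly from the slide.

```python
import math

years, events = 96, 207
coast_miles, damage_diameter = 3100, 200

# Average annual number of storms affecting a given 200-mile-wide site:
lam = (events / years) * (damage_diameter / coast_miles)   # ~0.139
# Standard error of that mean, assuming Poisson frequency:
se = math.sqrt(lam / years)                                # ~0.038
# Half-width of the 90% confidence interval, as a fraction of the mean:
ci_half_width = 1.645 * se / lam                           # ~45%
print(f"lambda={lam:.3f}, SE={se:.3f}, 90% CI ~ +/-{ci_half_width:.0%}")
```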
Frequency-Severity Uncertainty: Severity Uncertainty (Miller) • Parametric bootstrap • Cat model severity for some portfolio • Fit cat model severity to parametric model • Perform X draws of Y severities, where X is the number of frequency resamplings and Y is the number of historical hurricanes in the set • Parameterize the new sampled severities • Compound with frequency uncertainty • Review confidence bands
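The parametric-bootstrap steps above can be sketched as below. This is a simplified illustration under assumed inputs: the lognormal family and its parameters are hypothetical stand-ins for whatever distribution is actually fit to the cat-model severities, and the frequency draw uses a Poisson simulation rather than resampling the historical count set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical severity model: lognormal "fit" to cat-model losses.
mu, sigma = 2.0, 1.0
n_resamples, n_storms = 500, 207   # X frequency resamplings, Y historical storms
years, mean_freq = 96, 2.16

annual_expected_losses = []
for _ in range(n_resamples):
    # Draw Y severities from the fitted model, then re-parameterize
    # (re-fit) the lognormal on the drawn sample.
    draw = rng.lognormal(mu, sigma, size=n_storms)
    mu_hat = np.log(draw).mean()
    sigma_hat = np.log(draw).std(ddof=1)
    # Compound with frequency uncertainty: one bootstrapped annual
    # frequency times the mean severity implied by the re-fit.
    freq = rng.poisson(mean_freq * years) / years
    mean_sev = np.exp(mu_hat + sigma_hat**2 / 2)
    annual_expected_losses.append(freq * mean_sev)

lo, hi = np.percentile(annual_expected_losses, [5, 95])
print(f"90% band on annual expected loss: [{lo:.2f}, {hi:.2f}]")
```

The spread between `lo` and `hi` is the compound frequency-severity uncertainty band that the OEP confidence-band slides then examine at each return period.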
OEP Confidence Bands Source: Miller, Op. Cit.
OEP Confidence Bands • At the 80-to-1,000-year return periods, the range runs roughly 50% to 250% of the best-estimate OEP • Confidence bands grow exponentially at the more frequent OEP points because the expected loss goes to zero • Notes • Assumes a stationary climate • Severity parameterization may introduce error • Modelers’ “secondary uncertainty” may overlap here, thus reducing the range • Modelers’ severity distributions are based on more than just the historical data set
Data Collection/Inputs • Is this all the subject data? • All/coastal states • Inland Marine, Builders Risk, APD, Dwelling Fire • Manual policies • General level of detail • County/zip/street • Aggregated data • Is this all the needed policy detail? • Building location/billing location • Multi-location policies/bulk data • Statistical Record vs. policy systems • Coding of endorsements • Sublimits, wind exclusions, IM • Replacement cost vs. limit
More Data Issues • Deductible issues • Inuring/facultative reinsurance • Extrapolations & defaults • Blanket policies • HPR • Excess policies
Model Output • Data Imported/Not Imported • Geocoded/Not Geocoded • Version • Perils Run • Demand Surge • Storm Surge • Fire Following • Defaults • Construction Mappings • Secondary Characteristics • Secondary Uncertainty • Deductibles
Company ApproachesAvailable Choices • Output From: • 2-5 Vendor Models • Detailed & Aggregate Models • ECRA Factors • Experience, Parameterized • Select (weighted) Average
Company ApproachesLoss Costs • Arithmetic average • Subject to change • Significant u/w flexibility • Weighted average • Weights by region, peril, class et al. • Weights determined by: • Model review • Consultation with modeling firms • Historical event analysis • Judgment • Weight changes require formal sign-off
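The weighted-average selection above reduces to a simple blend. The model names, loss costs, and weights below are purely illustrative, not the presenter's figures; in practice the weights vary by region, peril, and class and require the formal sign-off noted above.

```python
# Hypothetical loss costs (per $100 of exposure) from three vendor models:
loss_costs = {"model_a": 1.10, "model_b": 0.95, "model_c": 1.30}
# Judgment-based weights, summing to 1, set per region/peril/class:
weights = {"model_a": 0.5, "model_b": 0.3, "model_c": 0.2}

selected = sum(loss_costs[m] * weights[m] for m in loss_costs)
print(f"selected loss cost: {selected:.3f}")  # 0.55 + 0.285 + 0.26 = 1.095
```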
Conclusions • Cat Model Distributions Vary • More than one point estimate useful • Point estimates may not be significantly different • Uncertainty not insignificant but not insurmountable • What about uncertainty before cat models? • Data Inputs Matter • Not mechanical process • Creating model inputs requires many decisions • User knowledge and expertise critical • Loss Cost Selection Methodology Matters • # Models used more influential than weights used • Judgment Unavoidable • Actuaries already well-versed in its use
References • Bove, Mark C., et al., “Effect of El Niño on U.S. Landfalling Hurricanes, Revisited,” Bulletin of the American Meteorological Society, June 1998. • Efron, Bradley, and Robert Tibshirani, An Introduction to the Bootstrap, New York: Chapman & Hall, 1993. • Major, John A., “Uncertainty in Catastrophe Models,” Financing Risk and Reinsurance, International Risk Management Institute, Feb/Mar 1999. • Miller, David, “Uncertainty in Hurricane Risk Modeling and Implications for Securitization,” CAS Forum, Spring 1999. • Moore, James F., “Tail Estimation and Catastrophe Security Pricing: Can We Tell What Target We Hit If We Are Shooting in the Dark?,” Wharton Financial Institutions Center, 99-14.