Solving the Puzzle: The Hybrid Reinsurance Pricing Method
A Practitioner’s Guide
John Buchanan - Platinum Reinsurance
CAS Ratemaking Seminar – REI 3, March 8, 2007
CAS RM 2007 – The Hybrid Reinsurance Pricing Method
Agenda
• Overriding Assumptions
• Recap Traditional Methods
  • Experience Rating
  • Exposure Rating
  • Credibility Weighting
• Hybrid: Experience / Exposure Method
  • Highlight differences between traditional methods
• Testing Default Parameters
• Advanced Topics for Solving the Puzzle
Overriding Assumptions of Hybrid Experience / Exposure Method
• With perfect modeling and data, the results under the experience and exposure methods will be identical.
• In practice, if the model and parameter selections for both the experience and exposure methods are proper and relevant, then the results from these methods will be similar, except for credibility and random variations.
• Lower-layer experience helps predict higher, less credible layers.
• Frequency is a more stable indicator than total burn estimates.
Traditional Methods
Experience:
• Relevant parameter defaults/overrides for:
  • LDFs (excess layers)
  • Trends (severity, frequency, exposure)
  • Rate changes
  • LOB/HzdGrp indicators
• Adjust for historical changes in:
  • Policy limits
  • Exposure differences
  • Careful “as-if”
Exposure:
• Relevant parameter defaults/overrides for:
  • ILFs (or ELFs, PropSOLD)
  • Direct loss ratios (on-level)
  • ALAE loads
  • Policy profile (LOB, HzdGrp)
  • Limit/subLOB allocations
• Adjust for expected changes in:
  • Rating year policy limits
  • Rating year exposures expected to be written
Classical Credibility Weighting
• Estimate separate Experience and Exposure burns
• Select credibility weights using a combination of:
  • Formulaic approach: expected # of claims / variability; exposure ROL (or burn on line)
  • Questionnaire approach: a priori neutral vs. experience vs. exposure (Patrik/Mashitz paper)
  • Judgment
• Need to check that burn patterns make sense, i.e. higher layer ROL < lower layer ROL (similar to the Miccolis ILF consistency test)
Classical Credibility Weighting: credibility weights are judgmentally selected
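The classical weighting step can be sketched in a few lines. This is a minimal illustration, assuming a Buhlmann-style formula z = n / (n + k) as the "formulaic approach"; the constant k and all burn figures are hypothetical, and the deck's actual selection also blends a questionnaire and judgment.

```python
# Sketch of classical credibility weighting of experience vs. exposure burns.
# The n/(n+k) formula and k = 10 are illustrative assumptions, not the
# presentation's prescription.

def credibility_weight(expected_claims: float, k: float = 10.0) -> float:
    """Weight toward experience: z = n / (n + k), k judgmentally chosen."""
    return expected_claims / (expected_claims + k)

def weighted_burn(exper_burn: float, expos_burn: float, expected_claims: float) -> float:
    """Credibility-weighted burn: z * experience + (1 - z) * exposure."""
    z = credibility_weight(expected_claims)
    return z * exper_burn + (1.0 - z) * expos_burn

# A low layer with many expected claims leans on experience; a high layer
# with few expected claims falls back toward the exposure estimate.
low_layer_burn  = weighted_burn(0.050, 0.040, expected_claims=12.0)  # z ~ 0.55
high_layer_burn = weighted_burn(0.010, 0.004, expected_claims=0.4)   # z ~ 0.04
```

Note how the weight automatically decays as the expected claim count runs out at higher attachments, which is the behavior the later "credibility runs out" exhibits illustrate.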
Basic Steps of The Hybrid Method
Step 1: Estimate Experience burns & counts
• Select base attachment points/layers above the reporting data threshold
• Estimate total excess burns using projection factors
• Estimate excess counts using frequency trends and claim count LDFs
• Calculate implied severities
Step 2: Estimate Exposure burns & counts
• Use the same attachment points/layers as Experience
• Estimate total burns and bifurcate between counts and average severities
Step 3: Calculate Experience/Exposure frequency ratio by attachment point
• Estimate overall averages using number of claims/variability
Step 4: Review frequency ratio patterns
• Adjust experience or exposure models if needed and re-estimate burns (!!)
• Select indicated experience/exposure frequency ratio(s)
Step 5: Similarly review excess severities and/or excess burns
Step 6: Combine Hybrid frequency/severity results
• Using experience-adjusted exposure frequencies and severities
Step 7: Determine overall weight to give the Hybrid
Estimation of Hybrid Counts, Recap Steps 1 to 4
A: Select base attachment points above the data threshold
• Example: threshold = 150k; reinsurance layers = 500 x 500k, 1 x 1mm
• Select 200k, 250k, 350k, 500k, 750k, 1mm attachment points
B: Calculate experience counts
• At lower attachment points, year-by-year patterns should be variable about some mean
• For example, if there is an upward trend, then perhaps overdeveloping or trending later years
C: Calculate exposure counts for comparison
D: Review experience/exposure frequency patterns
• Should be relatively stable until credibility runs out
• Double back to the methods if not
• Select frequency ratios to estimate Hybrid counts
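Step B above can be sketched as follows: restate each year's reported excess count to the prospective level with a claim count LDF and a frequency trend, then eyeball the variability around the mean. All counts, LDFs, and the 2% frequency trend here are hypothetical illustrations, not figures from the exhibits.

```python
# Illustrative on-leveling of excess claim counts (hypothetical data).
# Each year's count is developed (count LDF) and trended (frequency trend)
# to the 2007 rating year, then compared against the mean.

def onlevel_counts(raw_counts, count_ldfs, accident_years, freq_trend, rating_year):
    out = []
    for n, ldf, ay in zip(raw_counts, count_ldfs, accident_years):
        trend = (1.0 + freq_trend) ** (rating_year - ay)
        out.append(n * ldf * trend)
    return out

years = [2002, 2003, 2004, 2005, 2006]
raw   = [10, 13, 11, 9, 6]               # reported counts excess of 350k
ldfs  = [1.02, 1.05, 1.10, 1.25, 1.80]   # count LDFs larger for immature years
onlev = onlevel_counts(raw, ldfs, years, freq_trend=0.02, rating_year=2007)
mean_count = sum(onlev) / len(onlev)     # compare to a selection such as 12.05
```

If the on-leveled counts drift upward rather than scattering around the mean, that is the deck's warning sign of overdeveloping or over-trending the later years.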
Step 1a: Experience Counts and Burns (Sublayer $150,000 xs $350,000)
Step 1b: Review Experience Counts, Year Variability: >350,000 Attachment. Apparently random pattern around selection of # = 12.05. Note: claim counts are on-leveled.
Step 1c: Review Experience Counts, Year Variability: >1,000,000 Attachment. Credibility runs out; indication is # = 0.36.
Step 1-Recap: Estimation of Experience Burns, Counts and Implied Severities (to be compared to exposure counts)
Step 2: Estimation of Exposure Burns, Bifurcated Between Counts and Severities
Step 3: Calculate Experience/Exposure Frequency Ratios and Base Layer Weights. 12.05 exper / 15.34 expos = 78.6%
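The base-layer frequency ratio in the exhibit is a direct division of the experience count by the exposure count at the same attachment point, which a one-liner reproduces:

```python
# Reproducing the exhibit's base-layer frequency ratio: on-leveled
# experience count divided by exposure count at the same attachment point.
exper_count = 12.05
expos_count = 15.34
freq_ratio = exper_count / expos_count   # about 0.786, i.e. 78.6%
```

A ratio below 100% says the exposure model is predicting more claims in the layer than the cedant's adjusted experience actually shows.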
Step 4a: Review Exper/Expos Frequencies, Attachment Point Pattern: 200k…1mm. Expos and exper count ratios are relatively consistent through 350k. If experience is very credible, then perhaps pressure to reduce the exposure L/R; check out spikes.
Step 4-Recap: Select Exper/Expos Frequency Ratio for Hybrid Claim Count Estimate. Important selection: 6.00 expos x 80.0%
Step 5: Selected Severity. Annotation: unrealistic experience severity.
Step 6: Selected Overall Hybrid Burn. Hybrid = experience-adjusted exposure count & severity… 100% credibility to burn??
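Putting Steps 4 and 6 together numerically: the hybrid count is the exposure count scaled by the selected exper/expos frequency ratio, and the hybrid burn marries that count to the exposure severity. The 6.00 count and 80% ratio come from the exhibits; the layer severity and subject premium below are hypothetical placeholders.

```python
# Step 6 sketch: hybrid count = exposure count x selected exper/expos ratio;
# hybrid burn = hybrid count x exposure average severity / subject premium.
# avg_severity and subject_premium are hypothetical, not exhibit figures.
expos_count    = 6.00        # exposure claim count in the rated layer (exhibit)
selected_ratio = 0.80        # selected exper/expos frequency ratio (exhibit)
hybrid_count   = expos_count * selected_ratio         # 4.80 claims

avg_severity    = 700_000      # hypothetical exposure average severity in layer
subject_premium = 25_000_000   # hypothetical subject premium
hybrid_burn = hybrid_count * avg_severity / subject_premium
```

This is the sense in which the Hybrid uses "experience-adjusted exposure" results: frequency is tempered by experience while severity stays with the exposure model, which the deck flags as more reliable when the observed experience severity is unrealistic.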
Steps 1-7: Bringing it All Together (exhibit: diagram linking Steps 1 through 7)
Benefits of Hybrid Method
• One of the main benefits is questioning the Experience and Exposure selections
• To the extent credible results don’t line up, this puts pressure on the various default parameters
• For example, there would be downward pressure on default exposure ILF curves or loss ratios if:
  • Exposure is consistently higher than experience, and
  • The experience and experience rating factors are credible
• A well-constructed Hybrid method can sometimes be given 100% weight if credible
• Can review account by account, and aggregate across accounts to evaluate pressure on industry defaults
Test of Default Parameters
• Aggregate across “similar” accounts to evaluate pressure on industry defaults
• May want to re-rate accounts using e.g. default rate changes, ILFs, premium allocations, LDFs, trends, etc.
• Each individual observation represents a cedant/attachment point exper/expos ratio
• Review dispersion of results and overall trend
  • E.g. if weighted and/or fitted exper/expos ratios are well below 100% (or e.g. 90% if giving some underwriter credit), then perhaps default L/Rs overall are too high (or conversely LDFs or trends are too light)
  • If the trend is up when going from e.g. a 100k to a 10mm attachment point, then perhaps the exposure curve is predicting well at lower points but underestimating upper points
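The aggregation described above can be sketched as pooling the per-cedant ratios with weights and fitting a simple trend against attachment point. All observations and the claim-count weights below are hypothetical; the weighted least-squares fit is one plausible way to read "overall trend", not the paper's stated method.

```python
# Sketch of the default-parameter test: pool cedant/attachment-point
# exper/expos frequency ratios, take a weighted average, and fit a
# weighted trend against log attachment point. Hypothetical data.
import math

# (attachment_point, exper/expos ratio, weight ~ expected claim count)
obs = [(100_000, 0.95, 40), (250_000, 0.90, 25),
       (1_000_000, 0.80, 8), (10_000_000, 0.60, 2)]

wtd_ratio = sum(w * r for _, r, w in obs) / sum(w for _, _, w in obs)

# Weighted least-squares slope of ratio vs. log(attachment point):
xs = [math.log(a) for a, _, _ in obs]
rs = [r for _, r, _ in obs]
ws = [w for _, _, w in obs]
W = sum(ws)
xbar = sum(w * x for w, x in zip(ws, xs)) / W
rbar = sum(w * r for w, r in zip(ws, rs)) / W
slope = (sum(w * (x - xbar) * (r - rbar) for w, x, r in zip(ws, xs, rs))
         / sum(w * (x - xbar) ** 2 for w, x in zip(ws, xs)))

# wtd_ratio well below 1.0: pressure that default exposure L/Rs are too high.
# Negative slope: exper/expos falls with attachment, i.e. the exposure curve
# looks relatively too heavy at the upper points (the opposite of the
# underestimation case described in the bullet above).
```

Before acting on such a fit, the caveats on the next slide (contract selectivity, sample size, "as-if" data, survivor bias) still apply.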
Test of Default Parameters (cont.)
• Before making overall judgments, must consider:
  • UW contract selectivity (contracts seen vs. written)
  • Sample size (# of cedants/years)
  • Impact of “as-if” data (either current or historical)
  • Survivor bias
  • Systematic bias in models
  • “Lucky”
Test of Default Rating Factors – Example 1. Well below 100%: pressure to reduce expos params or increase exper params…but credible??
Test of Default Rating Factors – Example 2. Exposure curve too light with higher attachment points?
Appendix - More advanced techniques for Solving the Puzzle • Inspecting Experience/Exposure differences
Appendix - More advanced techniques for Solving the Puzzle • Pressure Indicators – years (or layers). From forthcoming paper: THE HYBRID REINSURANCE PRICING METHOD: A PRACTITIONER’S GUIDE