Catastrophe Pricing: The Finer Points
Sean Devlin
CARe Meeting, June 6-7, 2005
Agenda
• Vendor Modeling Process
• Evaluating Inputs
• Unmodeled Perils
• Evaluating Outputs
• Conversion of Loss Cost to Pricing
Vendor Models – What to Use?
• Major modeling firms
  • AIR
  • EQE
  • RMS
• Other models, including proprietary
• Options in using the models
  • Use one model exclusively
  • Use one model by “territory”
  • Use multiple models for each account
Vendor Models – What to Use? (Cont’d)
Use One Model Exclusively
• Benefits
  • Simplifies the process for each deal
  • Consistency of rating
  • Lower license cost
  • Easier accumulation
  • Running one model per deal takes less time
• Drawbacks
  • Can’t see differences by deal or in general
  • Must convert data to your model’s format
Vendor Models – What to Use? (Cont’d)
Use One Model by “Territory”
• Detailed review of each model by “territory”
  • Territory examples: EU wind, CA EQ, FL wind
• Select adjustment factors for the chosen model
• Benefits
  • Simplifies the process for each deal
  • Consistency of rating
  • Easier accumulation
  • Running one model takes less time
• Drawbacks
  • Can’t see differences by deal
  • Must convert data to your model’s format
Vendor Models – What to Use? (Cont’d)
Use Multiple Models
• Benefits
  • Can see differences by deal and in general
• Drawbacks
  • Consistency of rating?
  • Must convert data to each model’s format
  • More complex process for each deal
  • High cost of licenses
  • Accumulation is difficult
  • Running multiple models for each deal is time consuming
Model Inputs
• Garbage In => Garbage Out
• TIV checks / aggregates (see the sketch below)
• “As-if” past events
• Scope of data (e.g., RMS – WS, EQ, TO datasets)
• Which “territories” are modeled and not modeled
• Type of country considered for exposures abroad
  • Tier I – well-established models – US, EU, etc.
  • Tier II – modeled, but less reliable – SA, Caribbean
  • Tier III – not modeled
• Clash between separate zones (US – Caribbean)
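A minimal sketch of the kind of TIV reasonability check this slide calls for; the zone totals and the 25% flag threshold are illustrative assumptions, not values from the presentation:

    # Screen for "garbage in": compare submitted TIV aggregates by zone
    # to the prior-year submission and flag large swings for follow-up.
    prior = {"FL": 4.2e9, "TX": 2.9e9, "CA": 3.5e9}    # hypothetical prior-year TIV
    current = {"FL": 4.4e9, "TX": 1.1e9, "CA": 3.6e9}  # hypothetical current TIV

    for zone, tiv in current.items():
        change = tiv / prior[zone] - 1
        if abs(change) > 0.25:  # assumed flag threshold
            print(f"Check {zone}: TIV changed {change:+.0%} year over year")

A swing like the TX drop above may be a genuine portfolio change, a coding error, or missing data; the point is to ask before running the model.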
“Unmodeled” Perils
Winter Storm
• Not an insignificant peril in some areas, especially low layers
  • 1993: 1.75B – 14th largest
  • 1994: 100M, 175M, 800M, 105M
  • 1996: 600M, 110M, 90M, 395M
  • 2003: 1.6B
• Number of occurrences in a cluster?
• Possible understatement of PCS data
• Methodology (see the sketch below)
  • Degree considered in the models
  • Evaluate past event return period(s)
  • Adjust losses to today’s exposure level
  • Fit a curve to the events
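A minimal sketch of the on-leveling and return-period steps, using the largest event per year from the slide; the exposure-index values and the observation window are illustrative assumptions:

    # Trend past winter storm events to today's exposure level, then
    # estimate empirical return periods by ranking the on-leveled losses.
    events = {1993: 1.75, 1994: 0.800, 1996: 0.600, 2003: 1.6}  # $B, largest per year
    exposure_index = {1993: 0.55, 1994: 0.58, 1996: 0.63, 2003: 0.85}  # hypothetical; today = 1.00

    onlevel = {yr: loss / exposure_index[yr] for yr, loss in events.items()}

    n_years = 12  # assumed observation window, 1993-2004
    for rank, loss in enumerate(sorted(onlevel.values(), reverse=True), start=1):
        print(f"{loss:.2f}B ~ 1-in-{n_years / rank:.0f} year event")

A curve (e.g., Pareto) fitted to the on-leveled points would then extend the severity picture beyond the observed events.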
“Unmodeled” Perils (cont’d)
Flood
• Less frequent
• Development of land should increase frequency
• Methodology
  • Degree considered in the models
  • Evaluate past event return period(s), if possible
  • No loss history – not necessarily no exposure
Terrorism
• Modeled by the vendor model? Scope?
• Adjustments needed (see the sketch below)
  • Take-up rate – current / future
  • Future of TRIA – exposure in 2006
• Other – depends on data
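A minimal sketch of a take-up-rate adjustment for terrorism; the loss estimate and take-up rates are assumed placeholders, and scaling the loss proportionally is itself a simplification:

    # Scale a terrorism loss estimate for the share of policies actually
    # buying the coverage, now and under an assumed post-TRIA scenario.
    modeled_terror_el = 3.0e6  # hypothetical EL assuming full take-up
    current_takeup = 0.45      # assumed current take-up rate
    future_takeup = 0.30       # assumed take-up if TRIA is not renewed

    print(f"Current: {modeled_terror_el * current_takeup:,.0f}")
    print(f"2006 scenario: {modeled_terror_el * future_takeup:,.0f}")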
“Unmodeled” Perils (cont’d)
Wildfire
• Not just CA
  • Oakland fires: 1.7B – 15th largest
• Development of land should increase frequency/severity
• Two main loss drivers
  • Brush clearance – mandated by code
  • Roof type (wood shake vs. tile)
• Methodology
  • Degree considered in the models
  • Evaluate past event return period(s), if possible
  • Risk management, especially changes
  • No loss history – not necessarily no exposure
“Unmodeled” Perils (cont’d)
Fire Following
• No EQ coverage = no loss potential? NO!
• Does the model reflect FF exposure on EQ policies?
• Severity adjustment of the event is needed if
  • Some policies are EQ, some are FF only
  • Only EQ was modeled
• Methodology
  • Degree considered in the models
  • Compare to peer companies for FF-only business
  • Default loadings for unmodeled FF
  • Multiplicative loadings on EQ runs (see the sketch below)
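A minimal sketch of a multiplicative FF loading on EQ model output; the event losses and the 1.10 factor are illustrative assumptions, not recommended values:

    # Apply a multiplicative fire-following load to modeled EQ event losses.
    # Apply it to ground-up event losses before reinsurance terms, since
    # layered losses respond non-linearly to a severity adjustment.
    eq_event_losses = [12.0e6, 4.5e6, 1.2e6]  # hypothetical modeled EQ event losses
    ff_load = 1.10                            # assumed multiplicative FF load

    adjusted = [loss * ff_load for loss in eq_event_losses]
    print([f"{x:,.0f}" for x in adjusted])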
“Unmodeled” Perils (cont’d)
Extratropical Wind
• National writers tend not to include TO (tornado) exposures
• Models are improving, but not quite there yet
• Significant exposure
  • Frequency: TX
  • Severity: May 2003 event of 10B – 9th largest
• Methodology
  • Experience and exposure rating
  • Compare to peer companies with more data
  • Compare experience data to ISO wind history
  • Weight the methods (see the sketch below)
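A minimal sketch of weighting the experience and exposure indications with a judgmental credibility factor; all three inputs are assumed placeholders:

    # Blend an experience-rated and an exposure-rated (model-based) loss
    # cost using a credibility weight assigned to the experience.
    experience_lc = 2.4e6  # hypothetical experience-rated loss cost
    exposure_lc = 3.1e6    # hypothetical exposure-rated loss cost
    z = 0.40               # assumed credibility given to experience

    blended = z * experience_lc + (1 - z) * exposure_lc
    print(f"Blended loss cost: {blended:,.0f}")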
“Unmodeled” Perils (cont’d)
No Data
• Typically arises on per-risk contracts without detailed data
• Typically not a loss driver on per-risk treaties
  • However, exceptions exist
• Methodology
  • Experience and exposure rating
  • Compare to peer companies with modeling
  • Develop default loads by layer/location (see the sketch below)
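A minimal sketch of a default-load lookup by location and attachment band; every value, band split, and fallback here is an illustrative assumption:

    # Default cat loads (as % of subject premium) applied when a cedant
    # provides no modelable detail, keyed by region and attachment band.
    default_loads = {
        ("Southeast", "low"): 0.12,
        ("Southeast", "high"): 0.04,
        ("Midwest", "low"): 0.06,
        ("Midwest", "high"): 0.02,
    }

    def cat_load(region: str, attachment: float) -> float:
        band = "low" if attachment < 5e6 else "high"    # assumed band split
        return default_loads.get((region, band), 0.08)  # assumed fallback

    print(cat_load("Southeast", 2e6))  # -> 0.12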
“Unmodeled” Perils (cont’d)
Other Perils
• Expect the unexpected – Dave Spiegler article
• Example: the blackout caused unexpected losses
• Methodology
  • Blanket load
  • Exclusions and named perils in the contract
  • Develop default loads and a methodology for a complete list of perils
Using the Output
Don’t Trust the Black Box
• Data, Data, Data
• Contract match:
  • Definition of risk
  • Definition of occurrence
  • Dual-trigger contracts
  • Scope of coverage
• Modeling of past exposures
  • Need to convert to the prospective period (see the sketch below)
  • TIV inflation
  • Change in exposures
• Know what assumptions the modeler used
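A minimal sketch of trending a model run on past exposures to the prospective period; the trend rates and growth are assumed, and a flat scaling is a rough shortcut, since layered losses respond non-linearly; re-running the model on trended exposures is preferable:

    # Bring a modeled expected loss to the prospective period by trending
    # for TIV inflation and exposure growth. A proportional adjustment only;
    # re-running the model on trended exposures is more accurate.
    modeled_el = 5.0e6      # hypothetical EL from a run on as-of data
    years_to_trend = 1.5    # data as-of date to midpoint of treaty period
    tiv_inflation = 0.04    # assumed annual TIV inflation
    exposure_growth = 0.02  # assumed annual growth in exposures

    factor = ((1 + tiv_inflation) * (1 + exposure_growth)) ** years_to_trend
    print(f"Prospective EL: {modeled_el * factor:,.0f}")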
Loadings to Final EL
Considerations in the final indicated “price”
• % of loss?
• % of standard deviation?
• Combination of the above?
• Target LR, TR, CR?
• Reflect red-zone capacity constraints?
• “Unused” capacity loads (see the sketch below)
  • EL for layer 100M x 100M is 5M
  • EL for layer 200M x 100M is 5.1M
  • Loading for 100M x 200M?
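The slide’s point is that the 100M x 200M layer adds only 0.1M of EL (5.1M less 5M) yet ties up 100M of red-zone capacity, so a pure-EL price would understate its cost. A minimal sketch of one common form, EL plus a percentage-of-standard-deviation risk load with a capacity floor; the standard deviation, load percentage, and minimum rate-on-line are assumed placeholders:

    # Price a high, rarely-hit layer: EL plus a risk load, floored by a
    # minimum charge for the capacity the layer consumes.
    el_100x100 = 5.0e6                    # from the slide: 100M x 100M
    el_200x100 = 5.1e6                    # from the slide: 200M x 100M
    el_100x200 = el_200x100 - el_100x100  # implied EL for 100M x 200M

    sd = 2.0e6             # assumed standard deviation of layer loss
    sd_load_pct = 0.25     # assumed risk load as % of standard deviation
    capacity = 100.0e6     # limit tied up by the layer
    min_rol = 0.005        # assumed minimum rate-on-line for capacity

    indicated = max(el_100x200 + sd_load_pct * sd,  # EL + risk load
                    capacity * min_rol)             # capacity floor
    print(f"Indicated price for 100M x 200M: {indicated:,.0f}")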
Summary
• Determine the process and models to use
• Know what was modeled
• Perform reasonability checks
• Understand the strengths and weaknesses of the models
• Add in the “unmodeled” exposure
• Make other adjustments to reflect ongoing terms and exposure
Don’t Trust the Black Box