
Pricing Large Insureds (Stewart Gleason, Odyssey Reinsurance, 1999 Seminar on Ratemaking)


Presentation Transcript


1. Pricing Large Insureds
Stewart Gleason
Odyssey Reinsurance
1999 Seminar on Ratemaking

2. It looks simple enough, but there’s more to this topic than first meets the eye.
• What is meant by “pricing large insureds”?
• What exactly is “industry” data?
• How about some examples?

3. So, the first question: what exactly is meant by “pricing large insureds” anyway? Well, many possibilities come to mind, but let’s run with these:
• Developing a systematic approach to premium determination for a series of entities whose insurance programs are similar in structure. The approach allows the unique characteristics of a given entity to be incorporated into the analysis.
• Developing an ad hoc methodology for determining the premium for a unique or unusual risk.

4. Ohhh, you mean “individual risk rating”. That pretty much covers everything in the universe, doesn’t it? Just to name a few:
• NCCI and ISO experience rating plans
• NCCI and ISO retrospective rating plans
• Customized retrospective rating procedures (e.g., determining the insurance charge from a risk-specific aggregate loss distribution rather than a published table)
• Experience rating models which require historical claim detail as an input (e.g., excess of loss reinsurance)
• Traditional aggregate methods (i.e., incurred loss development) that use the insured’s experience exclusively
• Highly detailed Monte Carlo simulation models

5. The last time I checked, “large” did not mean the same thing as “individual”. What exactly does “large” have to do with it?
• This is best addressed by answering another question: what is a small insured? Let’s try this: “A small insured is one whose accumulated experience is valueless as a predictor of future experience. Its future loss costs cannot be analyzed without relying on the combined experience of a group of similar risks.”
• Therefore, a large insured is… one whose accumulated experience is deemed sufficient to determine or significantly influence the cost of its own insurance. The predictive value of its experience is perceived to be high.
• The words “deemed” and “perceived” are illuminating: deciding what is “large” is highly judgmental.

6. But isn’t there some universal yardstick?
• Definitely not: all of the usual measurements - premium volume, number of exposure units, expected loss volume or total insured value - are meaningless without a context.
• It depends on whether you’re talking about…
  • property or casualty insurance
  • whole accounts or selected parts of a segmented program
  • primary covers, direct excess or reinsurance
  • prospective or retrospective coverage

7. Consider the following:
• A certain urban hospital system has hundreds of millions of dollars in property values at risk, but the property insurance cannot be rated on the system’s own experience.
• The same hospital system has $75,000,000 in medical malpractice losses annually; this exposure can be rated based entirely on the historical data.
• Exxon has commercial automobile liability exposure from company cars, vans and gasoline tank trucks. This involves lots of money but is so “predictable” they probably don’t even insure it (except for the high excess exposure).
• Exxon has monolithic exposure from environmental disasters (as in the Exxon Valdez). This is not only unpredictable, it’s almost uninsurable.
• The world abounds with insureds which at first glance are “large” but for which rating in isolation is essentially hopeless.
• The conclusion?
  • Low frequency/high severity exposures (as in huge properties and extreme catastrophes) give rise to “small” risks!
  • High frequency/low severity exposures (as in medical malpractice!) give rise to “large” risks!

8. On to the second question: what exactly is “industry” data?
• By “blending in industry data”, we simply mean incorporating the experience of a larger body of insureds to which the entity in question belongs.
• Take claim severity distributions, for example. The larger body of experience could be (concentrically):
  • insureds of the same class, territory and limit (company data)
  • insureds of the same territory (company data)
  • insureds within the same state (company or bureau data)
  • countrywide experience (company or bureau data)
  • non-insurance sources, e.g. government databases
• There is a hierarchy of sources. We want the best compromise between the effort required to assemble the “industry” data and its similarity to that of the insured.

9. Example: Ad Hoc Methodology for a Unique Risk
• Situation: a primary insurer issues a tail policy to an HMO that is reorganizing.
  • Insurer is a single-state physicians & surgeons malpractice carrier.
  • Policy covers occurrences after 1/1/78 reported after 12/31/98.
  • Policy has a $10 million per occurrence limit.
  • Insurer is seeking an 80% quota share.
  • Expected losses are somewhere between $15,000,000 and $20,000,000.
• Three questions:
  • How many occurrences are left to be reported?
  • How many will close with a payment?
  • How much will each payment be?
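The three questions amount to a frequency / closure-rate / severity decomposition of the tail policy. A minimal point-estimate sketch follows; the remaining report count, closure rate and average payment are hypothetical placeholder values (only the 80% quota share and the $15-20 million range come from the slide):

```python
# Point-estimate sketch of the tail-policy expected loss.
# All three inputs are hypothetical placeholders, not figures from the presentation.
claims_to_be_reported = 120      # "how many occurrences are left to be reported?"
close_with_payment_rate = 0.30   # "how many will close with a payment?"
avg_payment = 450_000            # "how much will each payment be?"

expected_losses = claims_to_be_reported * close_with_payment_rate * avg_payment
ceded_expected = 0.80 * expected_losses          # 80% quota share sought by the insurer

print(f"ground-up expected losses: {expected_losses:,.0f}")   # compare to the $15-20M range
print(f"80% quota share expectation: {ceded_expected:,.0f}")
```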

10. Ad Hoc Methodology for a Unique Risk (cont’d)
• Claim count analysis: the triangle is not robust enough to produce claim count development factors (but factors can be applied to the diagonal):
  • Primary insurer may use its own data: count triangle for general practitioners and nonsurgical specialties only, all company, etc.
  • Reinsurer may use “all company” from the annual statement (state specific) or all industry (annual statement aggregates)
• Severity analysis
  • Risk’s own (closed) claims: not enough to fit, incomplete sample
  • Severity study of (a subset of) insurer data: a special study may already exist; can at least compare ultimate average severity
  • ISO Paretos: too long ($535,000 per CWI claim at $10M limit)
  • St. Paul lognormals: given by injury type, can be remixed
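For the severity side, the “remix by injury type” idea can be sketched as a weighted average of lognormal limited expected values at the $10 million per-occurrence limit. The injury-type weights and lognormal parameters below are hypothetical placeholders, not the St. Paul figures:

```python
# Sketch: "remixing" injury-type lognormal severities into one limited average severity.
# The (mu, sigma) parameters and injury-type weights are assumptions for illustration;
# only the $10M per-occurrence limit comes from the slide.
from math import exp, log
from scipy.stats import norm

LIMIT = 10_000_000

def lognormal_lev(mu, sigma, limit):
    """Limited expected value E[min(X, limit)] for X ~ lognormal(mu, sigma)."""
    mean = exp(mu + sigma**2 / 2)
    return (mean * norm.cdf((log(limit) - mu - sigma**2) / sigma)
            + limit * (1 - norm.cdf((log(limit) - mu) / sigma)))

# injury type: (weight in the expected claim mix, mu, sigma) -- all assumed
injury_mix = {
    "emotional only":   (0.20, 9.5, 1.4),
    "temporary injury": (0.45, 10.8, 1.5),
    "permanent injury": (0.25, 12.0, 1.6),
    "death":            (0.10, 12.5, 1.3),
}

severity = sum(w * lognormal_lev(mu, s, LIMIT) for w, mu, s in injury_mix.values())
print(f"remixed average severity at $10M limit: {severity:,.0f}")
```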

11. Ad Hoc Methodology for a Unique Risk (cont’d)
• Aggregate loss analysis
  • Use your favorite aggregate loss method (simulation, Heckman-Meyers, etc.) to produce an aggregate loss distribution
  • Price to the 80th percentile, perhaps
• Problem: how do you get a severity distribution for late-reporting claims only?
  • ISO now fits distributions to claims by lag; designing a model to make use of this is literally a research project
  • Ad hoc: use the distribution conditioned on the claim being greater than $25,000 (this increases the expected value of indemnity from $281,000 to $367,000 per CWI claim - which is still considerably less than ISO’s $535,000)
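A minimal sketch of the simulation route, with the severity distribution conditioned on exceeding $25,000 as the stand-in for late-reporting claims and the aggregate distribution read off at the 80th percentile. The Poisson mean and lognormal parameters are hypothetical placeholders; only the $25,000 conditioning point, the $10 million limit and the 80th percentile come from the slide:

```python
# Sketch of an aggregate loss simulation: Poisson claim counts, severities conditioned
# on exceeding $25,000, a $10M per-occurrence cap, and the 80th percentile of the result.
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(1999)

N_SIMS = 20_000
LAMBDA = 40              # expected number of future reported claims (assumed)
MU, SIGMA = 11.0, 1.6    # lognormal severity parameters (assumed)
THRESHOLD = 25_000       # condition severity on exceeding this (from the slide)
LIMIT = 10_000_000       # per-occurrence policy limit (from the slide)

severity = lognorm(s=SIGMA, scale=np.exp(MU))
p_threshold = severity.cdf(THRESHOLD)

def conditional_draws(n):
    """Sample n severities conditioned on exceeding THRESHOLD (inverse-CDF method)."""
    u = rng.uniform(p_threshold, 1.0, n)
    return severity.ppf(u)

agg = np.empty(N_SIMS)
for i, n in enumerate(rng.poisson(LAMBDA, N_SIMS)):
    agg[i] = np.minimum(conditional_draws(n), LIMIT).sum()

print(f"mean aggregate loss: {agg.mean():,.0f}")
print(f"80th percentile:     {np.percentile(agg, 80):,.0f}")
```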

12. Example: Somewhere Between Ad Hoc and Systematic Procedures
• Situation: insuring Exxon for oil spill cleanups (or an airline for aviation accidents, or an auto manufacturer for recalls)
• If Exxon doesn’t have enough data, who does? The “oil industry” in total: round up the cleanup cost for every known spill and do your best to adjust it to the future cost level
• Extreme value theory provides a wealth of modeling tools to create a severity distribution from this “oil industry” data. We are only interested in events that exceed certain thresholds (throw out the nuisance oil spills - Environment Canada has a database of 681 spills excluding anything less than 1,000 barrels - see www.etcentre.org).
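The peaks-over-threshold step might look like the following sketch, which fits a generalized Pareto distribution to cleanup costs in excess of a chosen threshold. The cost data are randomly generated placeholders standing in for trended “oil industry” spill costs, and scipy’s genpareto is used simply as one available fitting tool, not a tool endorsed by the talk:

```python
# Sketch of the peaks-over-threshold idea: fit a generalized Pareto distribution (GPD)
# to cleanup costs in excess of a threshold. The costs below are random placeholders.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(0)
cleanup_costs = rng.pareto(1.2, 681) * 5e6        # fake trended costs for 681 spills

THRESHOLD = 25e6                                  # keep only the extreme events (assumed level)
excesses = cleanup_costs[cleanup_costs > THRESHOLD] - THRESHOLD

# Fit the GPD to the excess amounts; loc is fixed at 0 since the threshold is already removed.
shape, loc, scale = genpareto.fit(excesses, floc=0)
print(f"{len(excesses)} exceedances, GPD shape={shape:.2f}, scale={scale:,.0f}")

# Conditional severity: probability a cleanup tops $500M, given it exceeds the threshold.
p_500m = genpareto.sf(500e6 - THRESHOLD, shape, loc=0, scale=scale)
print(f"P(cleanup > $500M | cleanup > threshold) = {p_500m:.3f}")
```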

13. Between Ad Hoc and Systematic Procedures (cont’d)
• What about frequency? The industry frequency is based simply on the number of events exceeding the threshold and the Poisson process assumptions. Exxon’s frequency can be estimated by scaling based on barrels produced or transported, revenues, etc.
• Extreme Value Theory references
  • Gary Patrik and Farrokh Guiahi, 1998 Spring Meeting talk: http://www.casact.org/library/patrik.pdf (additional references here)
  • Alexander McNeil: ASTIN Bulletin paper, fitting software for S-Plus/UNIX at http://www.math.ethz.ch/~mcneil/
  • Xtremes software: http://www.xtremes.de
  • Search the Euroweb: go to www.ethz.ch and use the Eurospider search engine on “extreme value theory”
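The frequency scaling described in the first bullet reduces to a couple of lines; every number below is a hypothetical placeholder:

```python
# Sketch of the frequency side: an industry Poisson rate from the count of threshold
# exceedances, scaled to the insured by its share of exposure. All figures are assumed.
industry_exceedances = 17        # spills above the severity threshold in the observation window
observation_years = 20
industry_barrels = 25e9          # barrels transported per year, industry-wide (assumed)
insured_barrels = 1.5e9          # barrels transported per year by the insured (assumed)

industry_rate = industry_exceedances / observation_years          # events per year, industry
insured_rate = industry_rate * (insured_barrels / industry_barrels)

print(f"industry frequency: {industry_rate:.2f} events/year")
print(f"insured frequency:  {insured_rate:.3f} events/year (Poisson mean for the insured)")
```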

14. Example: A Systematic Procedure
• Situation: an experience rating model is desired for physician groups and clinics
• It should produce a modification factor based on
  • expected cost by class, territory, limit, etc.
  • developed ultimate cost for the insured
  • an appropriate weight or credibility to average them
• Determine ultimate losses for the insured by applying backwards recursive factors to individual open claims
  • backwards recursive factors are obtained from “industry” data
  • “industry” data means company data from a larger group of insureds (possibly all company) and bureau data, but not annual statement data (which depends too heavily on case reserving practices)
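A minimal sketch of the resulting modification factor calculation, assuming a credibility-weighted ratio of developed ultimate losses to expected losses; all inputs (claim amounts, development factor, expected losses, credibility) are hypothetical placeholders:

```python
# Sketch of the modification factor: develop open claims individually with an "industry"
# backwards recursive factor, add closed claims, and credibility-weight against expected losses.
open_claims = [40_000, 125_000, 300_000]   # case incurred on open claims (assumed)
closed_claims = [15_000, 60_000]           # paid amounts on closed claims (assumed)
development_factor = 1.35                  # backwards recursive factor for this maturity (assumed)
expected_losses = 650_000                  # from class, territory, limit rates (assumed)
credibility = 0.45                         # Z for an insured of this size (assumed)

# Development is applied to open claims only; closed claims are taken at face value.
developed_ultimate = sum(c * development_factor for c in open_claims) + sum(closed_claims)

mod = credibility * (developed_ultimate / expected_losses) + (1 - credibility)
print(f"developed ultimate: {developed_ultimate:,.0f}, modification factor: {mod:.3f}")
```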

15. A Systematic Procedure (cont’d)
• The backwards recursive procedure is preferred because it allocates development to open claims only
• This procedure can also be used to test SIR credits for the same insureds against manual credits
  • Develop individual claims and “layer” them out; compare the indicated loss elimination ratio to the LER underlying the manual credit
• Problems:
  • Credibility: how to determine it for the modification procedure?
  • What is a statistically significant difference between observed and expected SIR credits?
  • The procedure allocates average development to open claims when really some will close without payment (CW/OP) and others will go to limits
  • When limits are hit, an iterative procedure must be used to redistribute development in excess of policy limits
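The last two problem bullets can be illustrated with a small sketch: develop the open claims, iteratively redistribute any development that spills over the policy limit, then layer the results against an SIR to get an indicated loss elimination ratio. Every figure below is a hypothetical placeholder, and the proportional redistribution rule is only one possible choice:

```python
# Sketch: develop open claims, redistribute development that exceeds the policy limit,
# then "layer out" to an indicated loss elimination ratio for an SIR. All figures assumed.
POLICY_LIMIT = 1_000_000      # per-claim policy limit (assumed)
SIR = 250_000                 # self-insured retention being tested (assumed)
FACTOR = 1.6                  # backwards recursive development factor for open claims (assumed)

developed = [c * FACTOR for c in [900_000, 400_000, 150_000, 50_000]]  # developed open claims

# Iteratively cap claims at the policy limit and push the excess development onto
# claims still below the limit, in proportion to their developed size.
for _ in range(50):
    excess = sum(max(d - POLICY_LIMIT, 0.0) for d in developed)
    if excess < 1e-6:
        break
    capped = [min(d, POLICY_LIMIT) for d in developed]
    weights = [c if c < POLICY_LIMIT else 0.0 for c in capped]
    if sum(weights) == 0:
        developed = capped    # every claim is at the limit; remaining development is lost
        break
    developed = [c + excess * w / sum(weights) for c, w in zip(capped, weights)]

# "Layer out": losses retained below the SIR versus total developed losses.
retained = sum(min(d, SIR) for d in developed)
ler = retained / sum(developed)
print(f"indicated loss elimination ratio at ${SIR:,} SIR: {ler:.3f}")
```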
