
A Method for Projecting Individual Large Claims


Presentation Transcript


    1. A Method for Projecting Individual Large Claims. Casualty Loss Reserving Seminar, 11-12 September 2006, Atlanta

    2. Overview. Rationale for considering individual claims; outline of methodology; examples; data requirements; assumptions; whole account variability; case study; conclusion.

    3. Rationale for Considering Individual Claims. The last few years have seen a significant change in what is required of actuaries in terms of understanding the variability around results, driven partly by a greater understanding among board members that things can go wrong, and partly by the increased use of DFA models. Much work has been done based on aggregate triangles, but very little on stochastic individual claims development, and there are weaknesses in methods for deriving consistent gross and net results.

    4. Traditional Netting Down Methods. How do you net down gross reserves? One option is to assume that reinsurance ultimate reserves equal reinsurance current reserves; this is prudent if there are deficiencies in the reserves and optimistic if there are redundancies. Another is to analyse net data and calculate net results from it. The disadvantages: retentions may change, so data must be viewed on a consistent retention, which means lots of triangles; ensuring consistency between gross and the various nets is difficult; indexation of the retention needs an assumption about the payment pattern; and aggregate deductibles need an assumption about the ultimate position of individual claims. A further option is to model excess claims above a threshold and calculate the average deficiency of those excess claims, i.e. the IBNER on claims above the threshold, then apply the average IBNER loading to open claims to obtain the ultimate.

    5. Deterministic Netting Down Methods Tend to Understate the Effect of Reinsurance. Example: excess IBNER of £0.5m, two claims each with incurred of £250k, and a retention of £500k. A deterministic development factor of 2 grosses up each claim to an ultimate of £500k. Calculated reinsurance recoveries: £500k - £500k = £0 per claim, so there are no reinsurance recoveries and net reserves equal gross reserves.

    6. Deterministic Netting Down Methods Tend to Understate the Effect of Reinsurance. Because of the one-sided nature of reinsurance, this will understate the reinsurance recoveries. In the above example, suppose one claim settles for £250k and one for £750k: the gross result is the same, but net reserves = gross reserves - £250k. A method that allows for the distribution of ultimate individual claims is needed to allow for reinsurance correctly, as illustrated in the sketch below.
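
A minimal sketch of the point being made here: applying the retention to an expected (deterministic) ultimate gives zero recoveries, while applying it to simulated outcomes with the same mean gives a strictly positive expected recovery. The lognormal severity and its volatility parameter are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

retention = 500_000          # per-claim retention
incurred = np.array([250_000, 250_000])
deterministic_ldf = 2.0      # development factor applied to incurred

# Deterministic approach: gross up each claim, then apply the retention.
det_ultimate = incurred * deterministic_ldf
det_recovery = np.maximum(det_ultimate - retention, 0).sum()   # = 0

# Stochastic approach: simulate each claim's ultimate around the same mean,
# then apply the retention to each simulated outcome.
n_sims = 100_000
sigma = 0.6                                  # illustrative volatility
mu = np.log(det_ultimate) - 0.5 * sigma**2   # lognormal with mean equal to det_ultimate
sim_ultimate = rng.lognormal(mu, sigma, size=(n_sims, len(incurred)))
sim_recovery = np.maximum(sim_ultimate - retention, 0).sum(axis=1)

print("deterministic recovery:", det_recovery)
print("mean simulated recovery:", sim_recovery.mean())  # strictly positive
```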

    7. Traditional Variability Methods. Traditional methods include those based on log(incremental data), i.e. lognormal models, and Mack's model, which is based on cumulative data; these provide the mean and variance of outcomes only. Bootstrapping provides a full predictive distribution, not just the first two moments, and any well-specified underlying model can be bootstrapped, e.g. the over-dispersed Poisson (England & Verrall) or Mack's model. Characteristics: these methods are usually applied to aggregate triangles and work well with stable triangles, but large claims can influence volatility unduly. Bayesian methods, like bootstrapping, provide a full predictive distribution, with the ability to incorporate expert judgement through informative priors.

    8. Traditional Variability Methods. No allowance is made for the number of large claims in an origin period, nor for their status (open/closed); there is no linkage between the variability of gross and net of reinsurance reserves; and there is no information about the distribution of individual claims, so netting down gross results suffers from the same problems as the deterministic methods.

    9. Outline of Methodology. Our methodology simulates large claims individually, simulating known claims (for IBNER) and IBNR claims separately and considering the dependencies between them. For non-large claims, use an aggregate "capped" triangle: when an individual claim reaches the capping level, ignore any development in excess of the cap; index the capping threshold at an appropriate level; use a "traditional" stochastic method; and consider the dependency between the run-off of capped and excess claims. A sketch of building such a capped triangle follows.
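
A minimal sketch of constructing a capped triangle from individual claim histories. The data layout and the assumption that the cap is indexed by origin period are illustrative, not taken from the presentation.

```python
import numpy as np

def capped_triangle(claims, cap, index=1.0):
    """
    Build an aggregate incurred triangle with each individual claim capped.

    claims : list of dicts with keys 'origin' (0-based origin period) and
             'incurred' (list of cumulative incurred by development period).
    cap    : capping threshold in the base period's money terms.
    index  : per-period indexation factor applied to the cap (assumption:
             the cap is indexed by origin period).
    """
    n_origin = max(c["origin"] for c in claims) + 1
    n_dev = max(len(c["incurred"]) for c in claims)
    tri = np.zeros((n_origin, n_dev))
    for c in claims:
        period_cap = cap * index ** c["origin"]     # indexed capping level
        for dev, inc in enumerate(c["incurred"]):
            # development in excess of the cap is ignored
            tri[c["origin"], dev] += min(inc, period_cap)
    return tri

# Hypothetical mini-example: two claims, cap of 100k indexed at 5% per origin year
claims = [
    {"origin": 0, "incurred": [40_000, 90_000, 150_000]},   # capped at 100k in dev period 2
    {"origin": 1, "incurred": [60_000, 80_000]},
]
print(capped_triangle(claims, cap=100_000, index=1.05))
```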

    10. Outline of Methodology: IBNER. Take the latest incurred position and status of each claim, and simulate the next incurred position and status based on the movement of a similar historic claim. This allows for re-openings, to the extent they are present in the historic data, and projects individual claims from the point they become "large". Claims are considered "similar" by: status (open/closed); the number of years since the claim became large (development period); and the size of the claim, e.g. a claim with incurred of £10m will behave differently to a claim with incurred of £1m, so claims are banded into layers. A sketch of this resampling step appears below.
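
A minimal sketch, in Python, of the resampling step described above. The field names, size bands, and historic movements are illustrative assumptions, not the presentation's own data or implementation.

```python
import random

def similar_movements(history, status, dev, band):
    """Historic one-period movements of claims with the same open/closed status,
    development period since becoming large, and size band."""
    return [h for h in history
            if h["status"] == status and h["dev"] == dev and h["band"] == band]

def simulate_next(claim, history, rng=random):
    """Simulate a claim's next incurred position and status by sampling the
    movement of a similar historic claim (assumes at least one exists)."""
    move = rng.choice(similar_movements(history, claim["status"], claim["dev"], claim["band"]))
    return {
        "incurred": claim["incurred"] * move["ldf"],  # apply the sampled development factor
        "status": move["next_status"],                # inherit its open/closed outcome
        "dev": claim["dev"] + 1,
        "band": claim["band"],
    }

# Hypothetical historic movements for open claims, one year after becoming large:
history = [
    {"status": "open", "dev": 1, "band": "low", "ldf": 0.80, "next_status": "closed"},
    {"status": "open", "dev": 1, "band": "low", "ldf": 1.30, "next_status": "open"},
]
claim = {"incurred": 150_000, "status": "open", "dev": 1, "band": "low"}
print(simulate_next(claim, history))
```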

    11. Outline of Methodology: IBNR. IBNR large claims can be either genuine IBNR or claims previously not reported as large. Apply "standard" stochastic methods to the claim numbers triangle or, alternatively, simulate based on an assumed frequency per unit of exposure. For severity, sample from the (simulated) known large claims, or simulate from an appropriately parameterised distribution.

    12. Example Data

    13. Claim D. We need to simulate into development period 3; the claim is open as at development period 2, and is similar to claims B and C, which have development factors of 0.53 and 1.5. A compact sketch of this step follows.
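
A compact illustration of this step; Claim D's incurred amount is not given in the transcript, so the £200k figure below is a hypothetical placeholder. The two candidate factors are those observed for claims B and C.

```python
import random

claim_d_incurred = 200_000        # hypothetical: the actual value is in the (omitted) data table
similar_ldfs = [0.53, 1.50]       # development factors observed for claims B and C

random.seed(1)
simulations = [claim_d_incurred * random.choice(similar_ldfs) for _ in range(5)]
print(simulations)                # each simulation resamples one similar development factor
```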

    14. Claim D: Simulations

    15. Claim E. Closed status as at development period 2; similar to claim A, which shows no development.

    16. Claim F. Open status as at development period 1. For development into year 2, any of claims A to E can be considered, taking their status into account as well.

    17. Claim F Simulations to Year 2

    18. Claim F Simulations to Year 3

    19. IBNR Claims. There are two sources of IBNR claims: true IBNR claims, and known claims which are not yet large. Build a triangle of claims that ever become large, calculate the frequency of large claims by development period, simulate the number of large claims going forward, and simulate the IBNR claim costs from historic claims that became large in that period. A sketch of this frequency/severity simulation is given below.
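
A minimal sketch of the frequency/severity simulation of IBNR large claims described above. The exposure, frequencies, severity history, and the Poisson count assumption are illustrative, not taken from the presentation.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative inputs (not from the presentation):
exposure = 1_000                       # exposure units for the origin year
freq_per_dev = {3: 0.002, 4: 0.001}    # large-claim frequency per unit of exposure,
                                       # by remaining development period
historic_large = {                     # costs of historic claims that first became
    3: [150_000, 240_000, 600_000],    # large in each development period
    4: [180_000, 320_000],
}

ibnr_cost = 0.0
for dev, freq in freq_per_dev.items():
    n = rng.poisson(freq * exposure)                  # simulated number of IBNR large claims
    for _ in range(n):
        ibnr_cost += rng.choice(historic_large[dev])  # severity resampled from history
print(ibnr_cost)
```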

    20. IBNR Data below shows the claim number triangle, and frequency of claims

    21. IBNR Result for one simulation

    22. Data Requirements. Individual large claim information: full incurred and payment history; historic open status of claims; claims that were ever large, not just those currently large; and accident year exposure. The definition of "large" depends on historic retentions and the number of claims above the threshold. Consider having two thresholds, e.g. collect all claims above $100k but calculate the excess above $200k; this allows for claims developing from just below the layer.

    23. Assumptions. Historic claims provide the full distribution of possible chain ladder factors for claims; development by year is independent; there are no significant changes to case estimation procedures (this can be allowed for by standardising the historic chain ladder factors, as is done in aggregate modelling); and historic reopening and settlement experience is representative of the future. The method cannot be applied blindly: it is not a replacement for gross aggregate best estimate modelling, but rather a tool to analyse the variability around the aggregate modelling and the netting down of results.

    24. Variability of Whole Account. Simulate the variability of small claims via the "capped" triangle, using existing methods. Capped triangles are preferred to triangles which exclude large claims entirely: if claims are taken out once they become large, we see negative development; if the whole history of a claim is taken out, the triangles change from analysis to analysis; and it becomes difficult to allow for IBNR large claims. Add the gross excess claims from the individual simulations, with an appropriate dependency structure, to obtain total gross results, and add the net excess claims for total net results; a sketch of this combination step follows.
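
A rough sketch of combining the capped (attritional) and excess simulations with a dependency structure. The distributions, parameters, and the sorted-pairing device used for dependency are illustrative assumptions; the presentation does not specify a particular dependency model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative simulated reserve distributions (placeholders for the outputs of
# the capped-triangle bootstrap and the individual large-claim simulation):
n_sims = 10_000
capped = rng.gamma(shape=50, scale=100_000, size=n_sims)          # capped (attritional) reserve
excess_gross = rng.lognormal(mean=15.0, sigma=0.5, size=n_sims)   # gross excess reserve
excess_net = np.minimum(excess_gross, 2_000_000)                  # net of a simple illustrative cover

def combine(capped, excess, comonotonic=False):
    """Pair the two simulations either independently (random pairing) or with
    full positive dependency (pair sorted values); real applications would sit
    somewhere in between, e.g. via a copula."""
    if comonotonic:
        return np.sort(capped) + np.sort(excess)
    return capped + rng.permutation(excess)

total_gross_indep = combine(capped, excess_gross)
total_gross_dep = combine(capped, excess_gross, comonotonic=True)
total_net_dep = combine(capped, excess_net, comonotonic=True)
print(np.percentile(total_gross_indep, 99),
      np.percentile(total_gross_dep, 99),
      np.percentile(total_net_dep, 99))
```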

    25. Case Study. UK auto account; 16 years of data; individual claims > £100k; two layers used to simulate IBNER claims, with 80% of claims in the lower layer and 20% in the upper layer.

    26. IBNER. Distribution of one individual claim with current incurred of £125k: the expected ultimate is £300k, and 90% of the time the ultimate cost of the claim does not exceed £700k.

    27. IBNER. Occasionally, however, the claim can grow very large.

    28. IBNER. Progression of one claim that has been large for 4 years and is still open; there is still significant variability in the ultimate cost.

    29. Ultimate Loss Development Factors. The graph shows the ultimate LDF (ultimate / latest incurred) for a "big" and a "little" claim from the same point in development. The probability of observing a large LDF (>4) is 60% higher for the small claim than for the large claim; the average LDF is 1.1 for the small claim and 0.87 for the big claim.

    30. Distribution of Capped Reserve

    31. Comparison with Mack Method

    32. 2003 Distribution. A higher proportion of large claims, including one claim of £6m, gives greater uncertainty than implied by the aggregate projection.

    33. 2004 and 2005 Distributions. The distributions from the individual claims simulations are slightly heavier tailed than those from the aggregate method, caused by an increase in the proportion of large claims over time that is not adequately allowed for in aggregate methods.

    34. Netting Down

    35. Reinsurance Structures. Even simple portfolios can have reinsurance structures that are difficult to model: aggregate deductibles; Losses Occurring During vs Risks Attaching coverages; partial placements; indexation clauses. By having individual claims, any structure can be allowed for explicitly.

    36. Example: Aggregate Deductible. The graph shows a percentile chart of the usage of a £2.25m aggregate deductible attaching to the layer £400k xs £600k; a sketch of the underlying recovery calculation appears below.
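
A minimal sketch of the recovery calculation for one simulation; the simulated ultimate claim amounts are made up for illustration, while the layer and aggregate deductible follow the slide.

```python
# Layer £400k xs £600k with a £2.25m aggregate deductible.
excess_point = 600_000
layer_width = 400_000
aggregate_deductible = 2_250_000

# Hypothetical simulated ultimate claim amounts for one simulation:
simulated_ultimates = [850_000, 1_200_000, 700_000, 2_000_000,
                       950_000, 1_500_000, 900_000, 1_000_000]

# Per-claim amount falling into the layer, before the aggregate deductible.
layer_losses = [min(max(x - excess_point, 0), layer_width) for x in simulated_ultimates]

total_to_layer = sum(layer_losses)
deductible_used = min(total_to_layer, aggregate_deductible)   # usage of the aggregate deductible
recoveries = max(total_to_layer - aggregate_deductible, 0)    # recoveries once it is exhausted

print(layer_losses, deductible_used, recoveries)
```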

    37. Conclusion. Existing stochastic methods work well for homogeneous data, but some lines of business are dominated by a small number of large claims. Treating these claims separately allows existing methods to be used on the attritional claims, with our individual claims simulation technique allowing explicitly for the variability in the large claims. This allows net and gross results to be calculated on a consistent basis, allowing explicitly for any reinsurance structures in place.
