
Modelling Multiple Lines of Business: Detecting and using correlations in reserve forecasting.


Presentation Transcript


  1. Modelling Multiple Lines of Business: Detecting and using correlations in reserve forecasting. Presenter: Dr David Odell, Insureware, Australia

  2. MULTIPLE TRIANGLE MODELLING (or MPTF) APPLICATIONS
• MULTIPLE LINES OF BUSINESS - DIVERSIFICATION?
• MULTIPLE SEGMENTS
• MEDICAL VERSUS INDEMNITY
• SAME LINE, DIFFERENT STATES
• GROSS VERSUS NET OF REINSURANCE
• LAYERS
• CREDIBILITY MODELLING
• ONLY A FEW YEARS OF DATA AVAILABLE
• HIGH PROCESS VARIABILITY MAKES IT DIFFICULT TO ESTIMATE TRENDS

  3. BENEFITS
• Level of Diversification - optimal capital allocation by LOB
• Mergers/Acquisitions
• Writing new business - how is it related to what is already on the books?

  4. Model Displays for LOB1 and LOB3

  5. LOB1 and LOB3 Weighted Residual Plots for Calendar Year

  6. The pictures shown above correspond to two linear models, described by the following equations:
y1 = X1 β1 + ε1,  y2 = X2 β2 + ε2.  (1)
Without loss of generality, the two models in (1) can be considered as one linear model:

  7. This can be rewritten as
y = X β + ε,  (2)
where y stacks y1 and y2, X = diag(X1, X2) is block-diagonal, and β and ε stack the corresponding parameter and error vectors. To illustrate the simplest case, suppose the vectors y in the models (1) both have length n, and also suppose that
Var(ε1) = σ1² In,  Var(ε2) = σ2² In,  Cov(ε1, ε2) = σ12 In.
In this case
Var(ε) = ( σ1² In   σ12 In
           σ12 In   σ2² In ).

  8. For example, when n = 3,
Var(ε) = ( σ1²   0     0     σ12   0     0
           0     σ1²   0     0     σ12   0
           0     0     σ1²   0     0     σ12
           σ12   0     0     σ2²   0     0
           0     σ12   0     0     σ2²   0
           0     0     σ12   0     0     σ2² ).

  9. There is a difference between the linear models in (1) and the linear model (2). In (1) we model each dataset separately and do not use the additional information from related trends, which we can do in model (2). To extract this additional information we need an appropriate method for estimating the vector of parameters; the generalised least squares (GLS) estimator achieves this:
β̂ = (X′ V⁻¹ X)⁻¹ X′ V⁻¹ y,  where V = Var(ε).

  10. However, it must be stressed immediately that we do not know the elements of the covariance matrix V, and we have to estimate them as well. So, in practice, we build an iterative estimation process, which stops when the estimates have satisfactory statistical properties.
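A minimal sketch of such an iterative (feasible GLS) procedure for the two stacked models, assuming the covariance structure of slides 7 and 8; the function and variable names are illustrative, not the presenter's implementation:

```python
# Feasible GLS for two stacked regressions y_i = X_i b_i + e_i (i = 1, 2),
# assuming Var(e_i) = s_i^2 I_n and Cov(e_1, e_2) = s_12 I_n as on slides 7-8.
import numpy as np

def feasible_gls(X1, y1, X2, y2, n_iter=10):
    n = len(y1)                                   # both models assumed to have n observations
    X = np.block([[X1, np.zeros((n, X2.shape[1]))],
                  [np.zeros((n, X1.shape[1])), X2]])
    y = np.concatenate([y1, y2])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # start from OLS on the stacked model (2)
    for _ in range(n_iter):
        e = (y - X @ beta).reshape(2, n)          # residuals, one row per model
        S = np.cov(e)                             # 2x2 estimate of the between-model covariance
        V = np.kron(S, np.eye(n))                 # Var(e) = S (kron) I_n under the assumption above
        Vinv = np.linalg.inv(V)
        beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)   # GLS step
    return beta, S
```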

  11. There are some cases in which model (2) gives the same results as the separate models in (1):
• the design matrices in (1) have the same structure (they are identical, or proportional to each other); or
• the models in (1) are uncorrelated, in other words σ12 = 0.
However, when the two models in (1) have common regressors, model (2) again has advantages, despite the design matrices having the same structure.

  12. Correlation and Linearity. The idea of correlation arises naturally for two random variables that have a joint distribution that is bivariate normal. For each individual variable, two parameters, a mean and a standard deviation, are sufficient to fully describe its probability distribution. For the joint distribution, a single additional parameter is required: the correlation. If X and Y have a bivariate normal distribution, the relationship between them is linear: the mean of Y, given X, is a linear function of X, i.e.
E(Y | X = x) = α + βx.

  13. How is the distribution of Y affected by this new information?

  14. Y | X = x has a normal distribution:
Y | X = x ~ N(α + βx, σ²),
or, equivalently,
E(Y | X = x) = α + βx  and  Var(Y | X = x) = σ².

  15. The slope β is determined by the correlation ρ and the standard deviations σY and σX:
β = ρ σY / σX,  where ρ = Cov(X, Y) / (σX σY).
The correlation between Y and X is zero if and only if the slope is zero. Also note that, when Y and X have a bivariate normal distribution, the conditional variance of Y, given X, is constant, i.e. not a function of X:
Var(Y | X = x) = σ² = σY² (1 - ρ²).
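A quick numerical check of these two relations (the simulation parameters below are arbitrary, chosen only for illustration):

```python
# For a bivariate normal (X, Y): the regression slope of Y on X is rho*sd_y/sd_x,
# and the conditional (residual) variance is sd_y^2 * (1 - rho^2), whatever x is.
import numpy as np

rho, sd_x, sd_y = 0.6, 2.0, 3.0
cov = [[sd_x**2, rho * sd_x * sd_y],
       [rho * sd_x * sd_y, sd_y**2]]
rng = np.random.default_rng(0)
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=500_000).T

slope = np.cov(x, y)[0, 1] / np.var(x)       # empirical regression slope
resid_var = np.var(y - slope * x)            # empirical conditional variance
print(slope, rho * sd_y / sd_x)              # both close to 0.9
print(resid_var, sd_y**2 * (1 - rho**2))     # both close to 5.76
```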

  16. This is why, in the usual linear regression model
Y = α + βX + ε,
the variance of the "error" term ε does not depend on X. However, not all variables are linearly related. Suppose we have two random variables S and T related by a non-linear equation, where T is normally distributed with mean zero and variance 1. What is the correlation between S and T?
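A small simulation sketch of this question, assuming for illustration the relation S = exp(T), i.e. the lognormal case taken up on the following slides (the slide's own equation is not reproduced here):

```python
# Assumed example of a non-linear relation: S = exp(T) with T ~ N(0, 1).
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal(1_000_000)
S = np.exp(T)

# The Pearson correlation is about 0.76, not 1, even though S is a deterministic
# function of T: the relationship is simply not linear.
# (Exact value for this example: 1 / sqrt(e - 1), roughly 0.763.)
print(np.corrcoef(S, T)[0, 1])
```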

  17. Linear correlation is a measure of how close two random variables are to being linearly related. In fact, if we know that the linear correlation is +1 or -1, then there must be a deterministic linear relationship Y = α + βX between Y and X (and vice versa). If Y and X are linearly related, and f and g are functions, the relationship between f(Y) and g(X) is not necessarily linear, so we should not expect the linear correlation between f(Y) and g(X) to be the same as between Y and X.

  18. A common misconception with correlated lognormals. Actuaries frequently need to find covariances or correlations between variables, for example when finding the variance of a sum of forecasts (in P&C reserving, when combining territories or lines of business, or when computing the benefit from diversification). Correlated normal random variables are well understood. The usual multivariate distribution used for the analysis of related normals is the multivariate normal, where correlated variables are linearly related. In this circumstance, the usual linear correlation (the Pearson correlation) makes sense.

  19. However, when dealing with lognormal random variables (whose logs are normally distributed), if the underlying normal variables are linearly correlated, then the correlation of lognormals changes as the variance parameters change, even though the correlation of the underlying normal does not.

  20. All three lognormals below are based on normal variables with correlation 0.78, as shown left, but with different standard deviations.

  21. We cannot measure the correlation on the log scale and apply that correlation directly to the dollar scale, because the correlation is not the same on that scale. Additionally, if the relationship is linear on the log scale (the normal variables are multivariate normal), the relationship is no longer linear on the original scale, so the correlation there is no longer a linear correlation. The relationship between the variables in general becomes a curve:

  22. Note that the correlation of Y1 and Y2 does not depend on the μ's. When the standard deviations are close to zero it is just below ρ, but it decreases further as the standard deviations increase.
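For reference, if Y1 and Y2 are lognormal with log-scale parameters (μ1, σ1) and (μ2, σ2), and the underlying normals have correlation ρ, the linear correlation of the lognormals has the standard closed form

\[
\operatorname{Corr}(Y_1, Y_2)
  = \frac{e^{\rho\sigma_1\sigma_2} - 1}
         {\sqrt{\left(e^{\sigma_1^2} - 1\right)\left(e^{\sigma_2^2} - 1\right)}},
\]

which contains no μ's, sits just below ρ when the σ's are small, and falls further below ρ as the σ's grow.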

  23. Weighted Residual Plots for LOB1 and LOB3 versus Calendar Years. What does correlation mean?

  24. Model Displays for LOB1 and LOB3 for Calendar Years

  25. Model for individual iota parameters

  26. There are two types of correlations involved in the calculation of reserve distributions:
• weighted residual correlations between datasets: 0.35 is the weighted residual correlation between the datasets LOB1 and LOB3;
• correlations in parameter estimates: 0.32 is the correlation between the iota parameters in LOB1 and LOB3.
These two types of correlations induce correlations between triangle cells and within triangle cells.

  27. Common iota parameter in both triangles

  28. Two effects:
• the same parameter for each LOB increases correlations and the CV of aggregates;
• a single parameter for each line reduces the CV of aggregates.

  29. Forecasted reserves by accident year, by calendar year and in total are correlated.
• Indicates dependency through residuals' and parameters' correlations.
• Indicates dependency through parameter-estimate correlations only.

  30. Dependency of aggregates in the aggregate table: in each forecast cell, and in the aggregates by accident year and calendar year (and in total), Var(Aggregate) >> Var(LOB1) + Var(LOB3). The correlation between the reserve distributions is 0.82.
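For reference, the variance of the aggregate decomposes as

\[
\operatorname{Var}(L_1 + L_3) = \operatorname{Var}(L_1) + \operatorname{Var}(L_3)
  + 2\rho\sqrt{\operatorname{Var}(L_1)\operatorname{Var}(L_3)},
\]

so with a correlation of ρ = 0.82 between the two reserve distributions the cross term is large, which is why the aggregate variance substantially exceeds the simple sum of the two variances.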

  31. Payment Stream Illustration

  32. Quantiles for the aggregate of both lines according to the model which accounts for the correlation, and quantiles for the aggregate based on summing two independent models (note that the latter values are lower).

  33. Simulations from lognormals correlated within each LOB and between LOBs: a diagnostic for the validity of using parametric distributions for the forecast total.
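A hedged sketch of the simulation idea: sample the forecast cells jointly on the log scale with the fitted correlation structure, then exponentiate and aggregate. The means, standard deviations and correlation matrix below are placeholders, not the presentation's fitted values:

```python
# Simulate forecast cells as correlated lognormals: draw the underlying normals
# jointly, exponentiate, then aggregate. All parameter values are placeholders.
import numpy as np

mu = np.array([10.0, 10.5, 11.0, 10.2])           # log-scale means for 4 forecast cells
sd = np.array([0.20, 0.25, 0.30, 0.22])           # log-scale standard deviations
corr = np.array([[1.00, 0.40, 0.35, 0.32],        # within- and between-LOB correlations
                 [0.40, 1.00, 0.32, 0.35],
                 [0.35, 0.32, 1.00, 0.40],
                 [0.32, 0.35, 0.40, 1.00]])
cov = corr * np.outer(sd, sd)                     # covariance matrix on the log scale

rng = np.random.default_rng(42)
normals = rng.multivariate_normal(mu, cov, size=50_000)
cells = np.exp(normals)                           # simulated dollar-scale cells
totals = cells.sum(axis=1)                        # aggregate across cells / LOBs
print(np.percentile(totals, [50, 75, 95, 99]))    # quantiles of the forecast total
```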

  34. GROSS VERSUS NET. Is your outward reinsurance program optimal? (e.g. Excess of Loss Reinsurance)
Model Display 1: a stable 6% trend in calendar year.

  35. Model Display 2: zero trend in calendar year. Therefore the part being ceded is the part that is growing!

  36. Weighted Residual Covariances Between Datasets; Weighted Residual Correlations Between Datasets.

  37. Weighted Residuals versus Accident Year Gross

  38. Weighted Residuals versus Accident Year Net

  39. Weighted Residual Normality Plot

  40. Accident Year Summary. CV Net > CV Gross: why? Is this not good for the cedant?

  41. Example of Risk Capital Allocation as a function of correlation

  42. Simple example with 2 LOBs:
• LOB1: Mean = 250, SD = 147
• LOB2: Mean = 450, SD = 285
Total of means = Mean Reserve = 700
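A small arithmetic illustration using the figures above; the correlation values are illustrative, not taken from the slide:

```python
# Standard deviation of the aggregate of the two LOBs (SDs 147 and 285) as a
# function of an assumed correlation between them.
import math

sd1, sd2 = 147.0, 285.0
for rho in (0.0, 0.5, 1.0):                       # illustrative correlation values
    sd_total = math.sqrt(sd1**2 + sd2**2 + 2 * rho * sd1 * sd2)
    print(f"corr = {rho:.1f}: SD(total) = {sd_total:.0f}")
# corr = 0.0 gives about 321, corr = 1.0 gives 432: the lower the correlation,
# the larger the diversification benefit relative to simply adding the SDs.
```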

  43. Total losses for combined LOBs. The yellow cells are those in which the mean reserve has been exceeded; the red numbers show the amount of this excess.

  44. Probabilities in the matrix of outcomes depend on the correlation between the two LOBs. "Correlation" = 1.0: very poor diversification. "Correlation" = 0.0: good diversification.

  45. Probabilities in the matrix of outcomes depend on the correlation between the two LOBs.
"Correlation" = 1.0:
• Probability (Loss > Mean Reserve) = 1/3
• Amount of excess loss = 550
• LOB1 loss = 450 = 200 + mean(LOB1)
• LOB2 loss = 800 = 350 + mean(LOB2)
• Contribution to loss, LOB1:LOB2 = 1:1.75

  46. Probabilities in the matrix of outcomes depend on the correlation between the two LOBs.
"Correlation" = 1.0:
• Probability (Loss > Mean Reserve) = 1/3
• Mean excess loss = 550 (E.S. at 67%)
• LOB1 loss = 450 = 200 + mean(LOB1)
• LOB2 loss = 800 = 350 + mean(LOB2)
• Contributions to loss, LOB1:LOB2 = 1:1.75
"Correlation" = 0.0:
• Probability (Loss > Mean Reserve) = 4/9
• Mean excess loss = 312.5 (E.S. at 67%)
• LOB1 Conditional Expected Shortfall = 50
• LOB2 Conditional Expected Shortfall = 262.5
• Contributions to mean loss, LOB1:LOB2 = 1:5.25

  47. Concept of Conditional Expected Shortfall
• Loss for LOB_i = l_i, i = 1, 2, 3, ...
• Expected Shortfall at the nth percentile = E(l_i - K | l_i > K), where Pr(l_i < K) = n/100.
• Assume LOB_i is reserved at r_i. The Conditional Expected Shortfall at the nth percentile = E(l_i - r_i | l_1 + l_2 + l_3 + ... > K), where Pr(l_1 + l_2 + l_3 + ... < K) = n/100.
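A minimal simulation sketch of these two quantities; the function names and the simulated inputs are illustrative, not from the presentation:

```python
# Expected shortfall of a single LOB, and the conditional expected shortfall of
# each LOB given that the aggregate loss exceeds its nth percentile.
import numpy as np

def expected_shortfall(losses, n=75):
    """E(l - K | l > K), where K is the nth percentile of l."""
    K = np.percentile(losses, n)
    tail = losses[losses > K]
    return np.mean(tail - K)

def conditional_expected_shortfall(losses_by_lob, reserves, n=75):
    """E(l_i - r_i | sum_j l_j > K), where K is the nth percentile of the aggregate."""
    total = losses_by_lob.sum(axis=0)
    K = np.percentile(total, n)
    in_tail = total > K
    return [np.mean(l[in_tail] - r) for l, r in zip(losses_by_lob, reserves)]

# Illustrative correlated lognormal losses for two LOBs (assumed parameters).
rng = np.random.default_rng(1)
z = rng.multivariate_normal([5.4, 6.0], [[0.25, 0.10], [0.10, 0.30]], size=100_000)
losses = np.exp(z).T                              # shape (2, number of simulations)
reserves = losses.mean(axis=1)                    # reserve each LOB at its mean
print([expected_shortfall(l, n=75) for l in losses])          # standalone ES of each LOB
print(conditional_expected_shortfall(losses, reserves, n=75)) # conditional ES of each LOB
```

Note that the conditional expected shortfalls of the LOBs add up to the expected excess of the aggregate over the total reserve in the tail, which is exactly the split used in the two-LOB example on slides 45 and 46 (50 + 262.5 = 312.5).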

  48. Risk Capital allocation should depend on the expected shortfalls (ES) of the LOBs under a given scenario. The contribution of the two lines depends on their correlation as well as the percentile at which we are calculating the ES.
