Reserve Uncertainty, 1999 CLRS, by Roger M. Hayne, FCAS, MAAA, Milliman & Robertson, Inc.
Reserves Are Uncertain? • Reserves are just numbers in a financial statement • What do we mean by “reserves are uncertain?” • The numbers are estimates of future payments • Not estimates of the average • Not estimates of the mode • Not estimates of the median • Not much real guidance in the guidelines • Rodney’s presentation will deal with this in more depth
Let’s Move Off the Philosophy • There should be more guidance in the accounting/actuarial literature • Not clear what number should be booked • Even less clear if we do not know the distribution of that number • There may be an argument that the more uncertain the estimate, the greater the “margin” should be • Need to know the distribution first
“Traditional” Methods • Many “traditional” reserve methods are somewhat ad hoc • The oldest is probably the development factor method • Fairly easy to explain • Subject of much literature • Not originally grounded in theory, though some have tried recently • Known to be quite volatile for less mature exposure periods
“Traditional” Methods • Bornhuetter-Ferguson • Overcomes volatility of development factor method for immature periods • Needs both development and estimate of the final answer (expected losses) • No statistical foundation • Frequency/Severity (Berquist, Sherman) • Also ad-hoc • Volatility in selection of trends & averages
“Traditional” Methods • Not usually grounded in statistical theory • Fundamental assumptions not always clearly stated • Often not amenable to direct estimation of variability • The “traditional” approach usually uses various methods, with different underlying assumptions, to give the actuary a “sense” of variability
Basic Assumption • When talking about reserve variability primary assumption is: Given current knowledge there is a distribution of possible future payments (possible reserve numbers) • Keep this in mind whenever answering the question “How uncertain are reserves?”
Some Concepts • Baby steps first, estimate a distribution • Sources of uncertainty: • Process (purely random) • Parameter (distributions are correct but parameters unknown) • Specification/Model (distribution or model not exactly correct) • Keep in mind whenever looking at methods that purport to quantify reserve uncertainty
Why Is This Important? • Consider an example • “Usual” development factor projection method • Assume: • Reserves can be estimated by development factor method • Age-to-age factors lognormal • Age-to-age factors independent • You can estimate age-to-age parameters using observed factors
Conclusions • Use the “customary” parameterization of the lognormal (based on the transformed normal) • Parameters for the distribution of age-to-age factors can be estimated using: • μi = average of the logs of the observed age-to-age factors • σi² = (sample-corrected) variance of the logs of the observed age-to-age factors
Conclusions • Given the assumptions, the distributions of age-to-ultimate factors are lognormal with parameters: • μ = μi + μi+1 + … (sum of the age-to-age μ’s) • σ² = σi² + σi+1² + … (sum of the age-to-age σ²’s) • Given amounts to date, one derives a distribution of possible future payments for one exposure year • Convolve the years to get the distribution of total reserves
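The estimation steps above can be sketched in code. A minimal sketch, assuming the independent-lognormal setup just described; the triangle factors, maturity labels, and helper names below are all hypothetical, purely for illustration.

```python
import math

# Hypothetical observed age-to-age development factors by maturity
# (e.g. 12-24, 24-36, 36-48 months); illustrative numbers only.
observed = {
    "12-24": [1.80, 1.65, 1.72, 1.90],
    "24-36": [1.25, 1.30, 1.22],
    "36-48": [1.10, 1.08],
}

def lognormal_params(factors):
    """mu = average of log factors; sigma2 = sample-corrected (n-1) variance."""
    logs = [math.log(f) for f in factors]
    n = len(logs)
    mu = sum(logs) / n
    sigma2 = sum((x - mu) ** 2 for x in logs) / (n - 1)
    return mu, sigma2

params = {age: lognormal_params(f) for age, f in observed.items()}

# A product of independent lognormals is lognormal, so the age-to-ultimate
# factor from the youngest maturity has parameters equal to the sums.
mu_ult = sum(mu for mu, _ in params.values())
sigma2_ult = sum(s2 for _, s2 in params.values())

# Mean of the implied age-to-ultimate factor distribution:
mean_ult = math.exp(mu_ult + sigma2_ult / 2)
```

Given amounts to date, scaling by draws from this age-to-ultimate distribution yields the distribution of possible future payments for that exposure year.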
Sounds Good -- Huh? • Relatively straightforward • Easy to implement • Gets distributions of future payments • Job done -- yes? • Not quite • Why not?
An Example • Apply method to paid and incurred development separately • Consider resulting distributions • What does this say about the distribution of reserves? • Which is correct?
What Happened? • Conclusions follow unavoidably from assumptions • Conclusions contradictory • Thus assumptions must be wrong • Independence of factors? Not really (there are ways to include that in the method) • What else?
What Happened? • Obviously the two data sets are telling different stories • What is the range of the reserves? • Paid method? • Incurred method? • Extremes from both? • Something else? • Main problem -- the approach addresses only a single method under specific assumptions
What Happened? • Not process (that is measured by the distributions themselves) • Is this because of parameter uncertainty? • No, can test this statistically (from normal distribution theory) • If not parameter, what? What else? • Model/specification uncertainty
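The normal-theory test mentioned above can be sketched as a t-based confidence interval for the mean log age-to-age factor (the logs are normal under the lognormal assumption). The factor values and the 95% level are illustrative choices, not from the slides.

```python
import math
from statistics import mean, stdev

# Hypothetical observed age-to-age factors for one maturity.
factors = [1.80, 1.65, 1.72, 1.90]
logs = [math.log(f) for f in factors]

n = len(logs)
m = mean(logs)
s = stdev(logs)  # sample standard deviation of the logs

# Under the lognormal assumption (m - mu) / (s / sqrt(n)) is
# t-distributed with n - 1 degrees of freedom.
T_CRIT_975_DF3 = 3.182  # tabulated t critical value, 95%, 3 df
half_width = T_CRIT_975_DF3 * s / math.sqrt(n)
ci = (m - half_width, m + half_width)
```

If the paid and incurred projections differ by far more than intervals like this allow, parameter uncertainty alone cannot explain the contradiction.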
Why Talk About This? • Almost every paper on reserve distributions considers • Only one method • Applied to one data set • The only conclusion: the distribution of results from a single method • Not the distribution of reserves
Discussion • Some proponents of some statistically based methods argue that analysis of residuals is the answer • This still does not address the fundamental issue: model and specification uncertainty • At this point there does not appear to be much (if anything) in the literature on methods addressing multiple data sets
Moral of Story • Before using a method, understand its underlying assumptions • Make sure it measures what you want it to measure • The definitive work may not have been written yet • Casualty liabilities are very complex, not readily amenable to simple models
All May Not Be Lost • Not presenting the definitive answer • More an approach that may be fruitful • The approach does not necessarily have the “single model” problems of the others described so far • Keeps some flavor of “traditional” approaches • Some theory already developed by the CAS (Committee on Theory of Risk, Rodney Kreps, Chairman)
Collective Risk Model • Basic collective risk model: • Randomly select N, the number of claims, from a claim count distribution (often Poisson, but not necessarily) • Randomly select N individual claims, X1, X2, …, XN • Calculate the total loss as T = X1 + X2 + … + XN • Only necessary to estimate distributions for the number and size of claims • Can get closed-form expressions for moments (under suitable assumptions)
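The steps above lend themselves to Monte Carlo simulation. A minimal sketch, assuming a Poisson claim count; the lognormal severity and its parameters are hypothetical choices, since the model only requires *some* count and severity distribution.

```python
import random

random.seed(2024)  # reproducible illustration

def poisson(lam):
    """Draw N ~ Poisson(lam) by counting unit-rate exponential arrivals in [0, lam)."""
    n, t = 0, random.expovariate(1.0)
    while t < lam:
        n += 1
        t += random.expovariate(1.0)
    return n

def simulate_totals(lam, mu, sigma, n_sims):
    """Collective risk model: N ~ Poisson(lam), X_i ~ lognormal(mu, sigma), T = sum X_i."""
    return [
        sum(random.lognormvariate(mu, sigma) for _ in range(poisson(lam)))
        for _ in range(n_sims)
    ]

totals = simulate_totals(lam=20, mu=0.0, sigma=0.5, n_sims=10_000)
sim_mean = sum(totals) / len(totals)
# Analytically E(T) = E(N) * E(X) = 20 * exp(0.125), about 22.7
```

The simulated mean should sit near the closed-form moment, illustrating that the moments are available without simulation under these assumptions.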
Adding Parameter Uncertainty • Heckman & Meyers added parameter uncertainty to both count and severity distributions • Modified algorithm for counts: • Select χ from a Gamma distribution with mean 1 and variance c (the “contagion” parameter) • Select the claim count N from a Poisson distribution with mean χλ • If c < 0, N is binomial; if c > 0, N is negative binomial
Adding Parameter Uncertainty • Heckman & Meyers also incorporated a “global” uncertainty parameter • Modified traditional collective risk model: • Select β from a distribution with mean 1 and variance b • Select N and X1, X2, …, XN as before • Calculate the total as T = β(X1 + X2 + … + XN) • Note β affects all claims uniformly
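The two mixing steps above can be grafted onto the basic simulation. A sketch under the slide’s assumptions: the Gamma draw for χ matches the contagion setup (for c > 0), but using a Gamma for β is a hypothetical choice, since the slides require only mean 1 and variance b; the lognormal severity is likewise illustrative.

```python
import random

random.seed(7)  # reproducible illustration

def poisson(lam):
    """Draw N ~ Poisson(lam) by counting unit-rate exponential arrivals in [0, lam)."""
    n, t = 0, random.expovariate(1.0)
    while t < lam:
        n += 1
        t += random.expovariate(1.0)
    return n

def simulate_totals_hm(lam, b, c, mu, sigma, n_sims):
    """Collective risk model with Heckman-Meyers-style parameter uncertainty.

    chi ~ Gamma(mean 1, variance c) scales the Poisson mean (contagion);
    beta ~ Gamma(mean 1, variance b) scales every claim uniformly.
    Gamma(shape=1/v, scale=v) has mean 1 and variance v.
    """
    totals = []
    for _ in range(n_sims):
        chi = random.gammavariate(1.0 / c, c) if c > 0 else 1.0
        beta = random.gammavariate(1.0 / b, b) if b > 0 else 1.0
        n = poisson(chi * lam)
        totals.append(beta * sum(random.lognormvariate(mu, sigma) for _ in range(n)))
    return totals

totals = simulate_totals_hm(lam=20, b=0.05, c=0.10, mu=0.0, sigma=0.5, n_sims=10_000)
```

Because both χ and β have mean 1, the mean of T is unchanged; only the spread widens.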
Why Does This Matter? • Under suitable assumptions the Heckman & Meyers algorithm gives the following: • E(T) = λE(X) • Var(T) = λ(1+b)E(X²) + λ²(b+c+bc)E²(X) • Notice if b = c = 0 then • Var(T) = λE(X²) • The average, T/N, will have a decreasing variance as E(N) = λ grows large (law of large numbers)
Why Does This Matter? • If b ≠ 0 or c ≠ 0 the second term remains • The variance of the average tends to (b+c+bc)E²(X) • Not zero • Otherwise said: no matter how much data you have, you still have uncertainty about the mean • Key to the alternative approach -- use the b and c parameters to build in uncertainty
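Both bullets follow directly from the variance formula above and can be checked numerically; the severity moments E(X) and E(X²), the values of b and c, and the helper names below are hypothetical. Var(T/λ) is used here as a proxy for the variance of the per-claim average.

```python
def var_total(lam, b, c, ex, ex2):
    """Var(T) = lam*(1+b)*E(X^2) + lam^2*(b+c+bc)*E(X)^2."""
    return lam * (1 + b) * ex2 + lam ** 2 * (b + c + b * c) * ex ** 2

def var_of_average(lam, b, c, ex, ex2):
    """Variance of the average T/lam: divide Var(T) by lam^2."""
    return var_total(lam, b, c, ex, ex2) / lam ** 2

# Hypothetical severity moments and uncertainty parameters:
ex, ex2 = 10.0, 250.0
b, c = 0.02, 0.03

# With b = c = 0 the first term is all that remains, and the variance
# of the average vanishes as lam grows (law of large numbers):
print(var_of_average(1_000_000, 0.0, 0.0, ex, ex2))  # very close to 0

# With b, c > 0 it tends to (b + c + bc) * E(X)^2 instead -- not zero:
limit = (b + c + b * c) * ex ** 2
print(var_of_average(1_000_000, b, c, ex, ex2), limit)
```

No matter how large λ becomes, the second term keeps the variance of the average bounded away from zero, which is exactly the residual uncertainty about the mean.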
If It Were That Easy … • Still need to estimate the distributions • Even if we have distributions, still need to estimate parameters (like estimating reserves) • Typically estimate parameters for each exposure period • Problem with potential dependence among years when combining for final reserves
CAS To The Rescue • The CAS Committee on Theory of Risk commissioned research into • Aggregate distributions without independence assumptions • Aging of distributions over the life of an exposure year • The paper on the first is finished, the second nearly so • Will help in assessing reserve variability • Sorry, we do not have all the answers yet