Chapter 16: Stochastic Reserving

## Summary

This document provides an overview of stochastic reserving, discussing traditional methods, sources of uncertainty in claims reserves, and different approaches to quantifying that uncertainty. It covers analytical and simulation methods for reserves and their uses, and explains how model, parameter and process error affect reserve estimates.

## Full Transcript

# Chapter 16: Stochastic Reserving

## Syllabus Objectives

- Stochastic reserving processes, including:
  - Uses of stochastic reserving methods
  - Likely sources of reserving uncertainty
  - Types of stochastic reserving methods:
    - Analytic methods
    - Simulation-based methods
  - Mack's model and the ODP model
  - Applying bootstrapping to these two models
  - Issues, advantages and disadvantages of each of the models
  - Aggregating the results of stochastic reserving across multiple lines of business, and methods of correlation.

## Introduction

- Traditional claims reserving methods, like the chain ladder method, produce a single best estimate of the claims reserve.
- Stochastic reserving methods provide a confidence interval and allow for the assessment of the likely error in using the single best estimate.
- Stochastic models produce a best estimate that is either the same as, or very close to, the best estimate derived from the chain ladder method.
- They provide information about the distribution of the reserves.
- The run-off of claims reserves can be considered a random process, with many random factors influencing the outcome. These uncertain factors include:
  - Occurrence and severity of claims
  - Notification delays on individual claims
  - Legal changes that affect the size of awards
  - Legal changes that affect the 'heads of damage' awarded. This can change the types of loss recognised in compensation awards for serious injuries (for example, loss of income, medical and nursing costs)
  - Changes in the litigiousness of society
  - Levels of claims inflation, which is often related to levels of price inflation and wage inflation in the economy
  - Court rulings on liability or quantum of individual claims not foreseen by claims handlers and/or not in the historical data
  - Changes in the mix of claim types, either caused by an underlying change in claim type experience or by changes in the mix of business written
  - Changes in claims handling, either because of policy changes or because of external events, for example a catastrophe leading to claims handlers being over-stretched
  - The emergence of new types of claim
  - Changes in the way claims are settled, for example if more claims are settled in the form of a series of payments rather than as lump sums (in the UK such an award is referred to as a periodical payment order, or PPO).
- Historical data is often used to project the run-off of claims.
- The projection introduces further uncertainties because of a limited data sample that may be of variable quality.
- Model uncertainty arises from the uncertainty introduced by deriving an estimate of the claims reserve.
- Model error will be discussed in more detail in Section 3.

## The Core Focus of a Reserving Exercise

- Determine a point estimate of the best estimate reserves.
- Communicate the uncertainties surrounding the best estimates using a variety of methods (which can include stochastic reserving techniques).
- Stochastic reserving techniques are commonly used to determine quantitative estimates of the volatility in reserves as an input to capital models.
- It is no longer considered sufficient to describe the reserves using single estimates without also gauging the size of the prediction errors that may be present in these estimates.

## Key Areas for Stochastic Reserve Calculation

- The reasons for considering variability around estimated reserves and the uses to which we can put such results.
- The main methods for determining variability, including analytical methods and simulation methods.
- Methods for allowing for reinsurance.
- Methods for allowing for the aggregation of results across different lines of business.
- The relative strengths and weaknesses of the main stochastic methods.

## Uses of Stochastic Reserving

- There is growing interest from all parties in the uncertainty of reserves. This includes users of actuarial work, who are interested in the uncertainty of reserves in addition to the point estimates.
- They are interested in the impact of the uncertainty on the capital backing the insurance liabilities and its sufficiency.

## Reasons Why the Single Best Estimate Approach is Not Sufficient

- The possibility of bankruptcies arising as a result of catastrophe events.
- An increasing awareness of reserving issues relating to latent claims.
- Problems made worse by adverse economic conditions.
- A changing emphasis from regulators.
- New guidance from the profession.

## Uses of Variability in Claims Reserve Calculations

- Assess reserve adequacy in absolute and relative terms.
  - Some companies may hold precautionary margins in their reserves for one or more reasons, ranging from regulation to prudence. These margins may be explicit or implicit.
  - By examining claim variability, we can provide management with information about the strength of the reserves.
- Compare the reasonableness of different sets of reserve estimates.
- Compare datasets at different as-at dates.
- Monitor performance to see if claim movements are material.
- Allocate capital (quantifying reserving risk is a key component of insurance companies' capital models).
- Inform management and the Board of the insurance company to assist with ongoing decision making, for example in deciding in which areas to expand or contract the volume of business being written.
- Provide information to investors.
- Inform discussions with regulators.
- Price insurance and reinsurance policies.

## Reserve Risk

- Reserve risk can be defined as the risk of financial losses that could arise if the actual claim payments required exceed the amounts reserved for.

## Communicating Results

- Communicating the outputs of a stochastic reserving exercise is an important part of the process, as is communicating the limitations, assumptions and materiality of judgements made to derive the estimates.
- The mathematical derivation of the results is complicated, but the results can be explained through simple measures such as graphs of the distribution of possible outcomes and tables of the key percentiles.

## Stochastic Reserving Methods

- Most claims reserving methods are based on assumptions about the underlying shape of the claims run-off.
- The assumptions usually define a mathematical model of the run-off.
- The difference between stochastic and non-stochastic methods is that in stochastic methods we model not only the underlying pattern of the claims run-off but also some of its variations.
- In other words, we model the random variation around the chosen development pattern.

## Advantages of Stochastic Reserving

- Estimate the reliability of the fitted model and the likely magnitude of random variation.
- Apply statistical tests to the modelling process to verify any assumptions and gain understanding of the variability of the claims process.
- Develop models in which the influence of each data point in determining the fitted model depends on the amount of random variation within that data point. In other words, figures with large random components should have relatively little influence.
## Disadvantages of Stochastic Reserving

- It takes more time.
- It requires a higher level of skill and training.
- The methods are more complicated, so the risk of mistakes is greater and they are harder to explain to a non-technical audience.
- A considerable element of judgement is required in the choice of model and, with Bayesian methods, in selecting the prior distribution.
- Using more sophisticated methods may lead to spurious accuracy and false confidence in the results.

## Types of Error

- Model error arises because actuarial models are often a simplification of a complex (and unknown) underlying system, and the model being used may not fully reflect all features of the underlying process.
- For example, the chain ladder model does not include an allowance for calendar year effects, introducing model error to the process.
- This results in uncertainty in the estimates produced by the model.

## Parameter Error

- Parameter error arises from the fact that the estimated parameters are random variables.
- So even if we use the best possible data, the random variation inherent in the data means that we may interpret it incorrectly, and hence choose inappropriate parameter values (another name for parameter error is estimation error).

## Process Error

- Process error reflects the inherent random noise in the process.
- In other words, even if we have built a model which perfectly describes the real-world system, the actual result will (almost certainly) differ from our estimates due to random variation.

## Quantifying Error

- Prediction variance = Estimation variance + Process variance.
- The estimated standard deviation of the predicted value is referred to as the standard error.
- The standard error (or prediction error) of a model therefore combines the estimation error and the process error.
- For a random variable $X$ with predicted value $\hat{x}$, the mean squared error of prediction (MSEP) is:
  $$\text{MSEP} = E[(X - \hat{x})^2]$$
- The MSEP is also called the prediction variance, and its square root is known as the prediction error.

## Testing the Model

- Use F-tests to check the appropriateness of the number of parameters.
  - This means testing whether we can remove one of the parameters without increasing the residual variability significantly.
  - Recall that F-tests are used in analysis of variance to determine whether the residual variance can be considered purely random, or whether there are additional factors that need to be explicitly incorporated in the model.
- Fit the model to old data.
  - This is a form of sensitivity testing; we carry out two projections, one including and one excluding the most recent year's data, and see if the results are broadly consistent.
- Use plots or triangles of residuals (a short numerical sketch is given below).
  - For each data point, we have an observed and a fitted value.
  - The difference between these is the residual error.
  - Dividing this by the estimated standard deviation for the data point gives a standardised residual.
  - We expect the mean of these standardised residuals to be zero and the variance to be constant.
  - Plots of the standardised residuals against items such as origin year and development year should show them to be randomly distributed, rather than clustered in places.

## The Mean of Standardised Residuals

- If the mean is not zero, then our model is biased.
- We can improve it by adjusting the values of one or more of our parameter estimates.
- (The equivalent in linear regression would be to change the intercept parameter, the constant in the straight-line equation, to move the line up or down to achieve a closer fit.)
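
To make these residual checks concrete, here is a minimal Python sketch. The observed values, fitted values and estimated standard deviations are all made up for illustration; in practice they would come from a back-fitted reserving model.

```python
import numpy as np

# Illustrative (made-up) observed and fitted incremental claims, flattened
# into 1-D arrays, together with the model's estimated standard deviation
# for each data point.
observed = np.array([102.0, 58.0, 31.0, 95.0, 49.0, 110.0, 47.0, 36.0])
fitted   = np.array([100.0, 55.0, 33.0, 98.0, 52.0, 105.0, 50.0, 34.0])
est_sd   = np.array([ 10.0,  7.5,  5.5, 10.0,  7.0,  10.5,  7.0,  6.0])

# Standardised residual = (observed - fitted) / estimated standard deviation.
std_resid = (observed - fitted) / est_sd

# A well-specified model should give a mean near zero and a roughly constant
# spread; in practice the residuals would also be plotted against origin
# year and development year to look for clustering.
print(f"mean of standardised residuals    : {std_resid.mean():+.3f}")
print(f"variance of standardised residuals: {std_resid.var(ddof=1):.3f}")
```
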
## The Variance of Standardised Residuals

- If the variance is not constant, then our model is not capturing all the sources of randomness.
- We can improve it by including an extra factor or by applying a function to one of the variables.
- (The equivalent in linear regression would be to change to a multivariate model or to take logs of the original data values before fitting the model.)

## Examples of Stochastic Models

- Stochastic reserving models can be broadly categorised as follows:
  - Analytical (or 'analytic') methods
  - Simulation methods
  - Bayesian methods.

## Analytical Models

- The stochastic element is incorporated directly into the formulae or statistical distributions specifying the model.
- No additional statistical calculations or assumptions about distributions are required.

## Bayesian Models

- Use a prior distribution to model the input parameters, and then derive a posterior distribution for the results.

## Stochastic Claims Reserving Models

| **Model** | **Method** |
|---|---|
| **Deterministic models** | |
| Chain ladder | |
| Bornhuetter-Ferguson (BF) | |
| Average cost per claim | |
| **Stochastic models** | |
| Mack | Analytical |
| Over-dispersed Poisson (ODP) | Analytical |
| Negative binomial | Analytical |
| Normal approximation to negative binomial | Analytical |
| Lognormal | Analytical |
| Hoerl curves | Analytical |
| Merz-Wüthrich | Analytical |
| Over-dispersed Poisson (bootstrap form) | Simulation |
| Bornhuetter-Ferguson (Bayesian form) | Bayesian |

## Brief Descriptions of Each Model

- The Mack model uses past claims data to derive estimates of the mean and variance of the total ultimate claims arising from each origin period.
  - It makes no assumption or prediction about the precise distributions involved, and so is described as a distribution-free model.
  - This model is discussed further in Section 5.2 below.
- The over-dispersed Poisson (ODP) model.
  - If claims occurred completely randomly, they would conform to a Poisson process, so that the claim numbers would have a Poisson distribution.
  - Since the variance of a Poisson distribution is the same as the mean, this might suggest using the same deterministic estimate (= best estimate) of the reserves to estimate the variance as well.
  - However, because the claim amounts are not constant, we find that the variance of the reserves is greater than the mean: they are over-dispersed.
  - This is especially true for the large claims in the tail of the distribution, where the variation in size is greatest.
  - With the ODP model, the variance is estimated as $\phi \times$ the deterministic estimate, where $\phi > 1$ is a constant multiplier estimated from the past data (illustrated in the sketch below).
  - This formulation results in an analytical model.
  - This model is discussed further in Section 5.3 below.
  - The assumption with this model is that the claim amounts follow an ODP distribution (that is, the variance is proportional to, but not necessarily the same as, the mean).
  - The ODP assumption of positive incremental claims limits its applicability where there are negative incremental claims (if negative increments are a genuine feature of the underlying business, it may be more appropriate to use a different model).
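
The over-dispersion property can be demonstrated directly by simulation. Here is a minimal sketch using one common construction of an ODP variable (a Poisson draw scaled by $\phi$); the mean and $\phi$ values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def odp_sample(mean, phi, size):
    # One common construction of an over-dispersed Poisson variable:
    # scale a Poisson(mean/phi) draw by phi, giving E[X] = mean and
    # Var[X] = phi * mean (proportional to, not equal to, the mean).
    return phi * rng.poisson(mean / phi, size)

x = odp_sample(mean=100.0, phi=2.5, size=200_000)
# Expect roughly mean 100 and variance 250, ie variance = phi * mean.
print(f"sample mean {x.mean():.1f}, sample variance {x.var():.1f}")
```
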
- An alternative approach to the ODP model is to 'bootstrap' the past data, ie to apply a Monte Carlo method using the randomness observed in the past claims data. This turns it into a simulation method.
  - This simulation approach is discussed further in Section 6 below.
- The negative binomial model.
  - This model is similar to the ODP model, except that a negative binomial distribution is used instead.
  - The parameters required to estimate the mean and variance of the negative binomial distribution are estimated from the data.
  - This model also incorporates a factor to ensure that it is over-dispersed.

## Analytical Distributions

- The first step in estimating the variability of reserves is to formulate a statistical model by making assumptions about the underlying process generating the data.
- This can be done by specifying distributions for the data, or by specifying just the first two moments.
- The distributions used to model claim amounts or claim numbers include:
  - Over-dispersed Poisson (ODP)
  - Negative binomial
  - Normal approximation to the negative binomial
  - Lognormal

## The Normal Approximation to the Negative Binomial Distribution

- The normal approximation to the negative binomial has the advantage that it can handle reductions in claims (for example, savings in incurred claims due to reductions in case estimates, or salvage and subrogation) when modelling incremental claim amounts.

## Salvage

- Salvage: amounts recovered by insurers from the sale of insured items that had become the property of the insurer by virtue of the settling of a claim.

## Subrogation

- Subrogation: the substitution of one party for another as creditor, with a transfer of rights and responsibilities.
- It applies within insurance when an insurer accepts a claim by an insured, thus assuming the responsibility for any liabilities or recoveries relating to the claim.
- For example, the insurer will be responsible for defending legal disputes and will be entitled to the proceeds from the sale of damaged or recovered property.

## The Mack Model

- Reproduces chain ladder estimates and makes limited assumptions about the distribution of the underlying data, specifying the first two moments only.
- Key assumptions:
  - The run-off pattern is the same for each origin period (as for the chain ladder).
  - The future development of a cohort is independent of historical factors (eg high factors in one period do not imply high or low factors in the following period).
  - The variance of the cumulative claims to development time $t$ is proportional to the cumulative claims amount to time $t-1$.

## The Mack Model's Advantage

- It produces standard errors for both individual origin periods and for all periods combined.
- The formulae required for deriving the Mack standard errors are quite straightforward to implement in a spreadsheet (a short implementation sketch follows this section).
- The analytic formulae for the Mack model give the mean squared error of prediction (MSEP) of the chain ladder estimate of the claims reserve for each individual origin period and for the total reserve over all origin periods.
- As mentioned earlier, the Mack model uses the past claims data to derive estimates of the mean and variance of the total ultimate claims arising from each origin period.
- The standard errors are the square roots of the estimates of these variances.

## Distribution Free

- The Mack model is distribution-free, in that no distributional assumptions are made, only assumptions about the first two moments.
- A full predictive distribution is not derived, although we often approximate one by fitting a lognormal distribution with the same mean and variance.
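
The following is a minimal sketch of the Mack calculations on a made-up cumulative triangle: the development factors, the variance estimates $\hat{\sigma}_k^2$ (with Mack's usual extrapolation for the final one) and the per-origin standard errors. A production implementation would also compute the MSEP of the total reserve across all origin periods.

```python
import numpy as np

# Illustrative (made-up) cumulative claims triangle: origin years in rows,
# development years in columns; np.nan marks the unknown future cells.
C = np.array([
    [1000., 1800., 2100., 2200.],
    [1100., 2000., 2300., np.nan],
    [1200., 2200., np.nan, np.nan],
    [1300., np.nan, np.nan, np.nan],
])
n = C.shape[0]

# Chain ladder development factors f_k and Mack's variance estimates sigma2_k.
f, sigma2 = np.ones(n - 1), np.zeros(n - 1)
for k in range(n - 1):
    rows = np.arange(n - k - 1)                 # origins with both columns known
    f[k] = C[rows, k + 1].sum() / C[rows, k].sum()
    if len(rows) > 1:
        dev = C[rows, k + 1] / C[rows, k] - f[k]
        sigma2[k] = (C[rows, k] * dev**2).sum() / (len(rows) - 1)
# Mack's standard extrapolation for the last sigma^2, where no estimate exists:
sigma2[-1] = min(sigma2[-2]**2 / sigma2[-3], sigma2[-3], sigma2[-2])

# Project the triangle to ultimate and accumulate Mack's MSEP per origin year.
C_hat = C.copy()
mse = np.zeros(n)
for i in range(1, n):
    term = 0.0
    for k in range(n - i - 1, n - 1):
        col_sum = C[: n - k - 1, k].sum()       # observed column total
        term += (sigma2[k] / f[k]**2) * (1.0 / C_hat[i, k] + 1.0 / col_sum)
        C_hat[i, k + 1] = C_hat[i, k] * f[k]
    mse[i] = C_hat[i, -1]**2 * term

for i in range(1, n):
    reserve = C_hat[i, -1] - C[i, n - 1 - i]
    print(f"origin {i}: reserve {reserve:8.0f}, standard error {np.sqrt(mse[i]):7.0f}")
```
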
## Over-dispersed Poisson (ODP) Model

- The ODP model is a generalisation of the Poisson model. It overcomes many of the limitations of the Poisson model, while retaining the same basic structure and the desirable feature that the reserve estimates are identical to those obtained using the chain ladder method.
- The plain Poisson model also produces the same results as the chain ladder method, but it only generates non-negative integer values and its variance is forced to equal its mean; the ODP generalisation relaxes the variance restriction.
- It assumes:
  - The run-off pattern is the same for each origin period (as for the chain ladder).
  - Incremental claim amounts are statistically independent.
  - The variance of the incremental claim amounts is proportional to the mean.
  - The expected incremental claims are positive for all development periods.

## The Merz-Wüthrich Formula

- For reserve risk estimation in Solvency II capital modelling, it is necessary to obtain an estimate of reserve uncertainty over a one-year time horizon; the Merz-Wüthrich formula is used for this purpose.
- The risk can be measured by estimating the uncertainty surrounding the claims development result (CDR), which is the difference between an estimate of the undiscounted ultimate claims cost made now and an estimate made in a year's time, taking into account the claims development and the emergence of new information during the year.
- The CDR can therefore be thought of as the profit (or loss) in the reserves over a one-year time horizon.

## The Merz-Wüthrich Formula's Approach

- It is an analytic approach that does not rely on simulation.
- It relies on the same assumptions as the Mack model, except that it considers uncertainty over a one-year period, whereas the Mack model does so over the lifetime of the liabilities; it is effectively a one-year equivalent of the Mack model.
- It is easy to implement in the same spreadsheet or programming framework as the Mack model.
- Without adjustment, the method cannot incorporate a tail factor, and it only produces an estimate of the uncertainty surrounding the CDR rather than its full distribution.

## Simulation Methods

- Most analytic methods do not derive a full distribution of outcomes; they just give the mean and variance of the distribution.
- In contrast, we can use simulation methods such as the Monte Carlo method to obtain predictive distributions of reserves.
- We do not derive the full mathematical form of the distribution, but we obtain sufficient information (such as percentile tables and frequency plots) to communicate results.

## Introduction to Bootstrapping

- Bootstrapping involves sampling (with replacement) multiple times from an observed dataset to create a number of pseudo-datasets. We can then refit the model to each new dataset and obtain a distribution of the parameters.
- Bootstrapping is a generic process that we can apply to a wide range of statistical problems, provided the model is well specified. A minimal illustration follows.
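
As an illustration of the generic idea, here is a minimal sketch that bootstraps the simplest possible statistic, the sample mean of some made-up observations; nothing here is claims-specific.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up observations; in a reserving context these would be residuals or
# data points from a fitted model.
data = np.array([12.0, 15.0, 9.0, 22.0, 14.0, 18.0, 11.0, 16.0])

boot_means = np.array([
    rng.choice(data, size=len(data), replace=True).mean()   # resample + refit
    for _ in range(10_000)
])

# The spread of the refitted statistic approximates its sampling uncertainty.
print(f"estimate {data.mean():.2f}, bootstrap s.e. {boot_means.std(ddof=1):.2f}")
print("95% interval:", np.percentile(boot_means, [2.5, 97.5]).round(2))
```
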
## Bootstrapping a Generalised Linear Model (GLM)

- Bootstrapping is not a model but a procedure applied to a model. It can be applied to GLMs, Mack's model, or other regression-type problems.
- With regression-type problems (and GLMs), the observations are not identically distributed, because the means (and possibly variances) differ from data point to data point.
- Therefore, it is common to bootstrap the residuals rather than the data points themselves, because the residuals are often assumed to be approximately independent and identically distributed.
- The steps to follow when bootstrapping a GLM are:
  1. Define and fit a GLM, obtaining parameters and fitted values for the observed data.
  2. Calculate the residuals of the fitted model.
  3. Take a sample from the residuals (this is the 'bootstrapping' bit), and invert these to obtain a set of pseudo-data.
  4. Refit the GLM using this pseudo-dataset, to obtain another set of parameters for the model and another forecast output.
  5. Repeat steps 3 and 4 many times to derive a forecast output for each pseudo-dataset. This gives a distribution of parameters and outputs.

## Bootstrapping the ODP Model

- 'Bootstrapping' is often used to refer specifically to bootstrapping the ODP model.
- Recall that a generalised linear model assumes that the data come from the exponential family of distributions. One member of the exponential family is the over-dispersed Poisson (ODP) distribution.
- 'Bootstrapping the ODP' means fitting a GLM to the incremental claims data, using an ODP distribution as the underlying assumption, then bootstrapping the residuals using the five-stage process above.

## Bootstrapping the ODP Model's Key Assumptions

- The key assumptions are the same as those of the analytical ODP model.
- In particular, the variance is proportional to, but not necessarily the same as, the mean.
- Negative incremental claims make bootstrapping the ODP model inappropriate.

## Bootstrapping the ODP Model: The Practice

- The ODP model is widely bootstrapped because it is relatively straightforward to implement in a spreadsheet.
- The process used to bootstrap reserve estimates consists of the following stages (repeated many times):
  1. Calculate the expected values and the residuals for each point in the claims triangle.
  2. Re-sample (with replacement) from the residuals to obtain a new triangle.
  3. Re-fit the chain ladder model to the new triangle to obtain a revised reserve estimate.

## Bootstrapping Steps

- In step 1, fit a model (eg chain ladder) and calculate what the past claim amounts 'should' have been if they had conformed precisely to the model, with no random errors; in other words, if each figure in the past data had exactly followed the development factors we've estimated. This process is sometimes called 'back-fitting'.
  - The differences between the actual values and the fitted values then give the residuals.
- In step 2, use the residuals and the fitted values to calculate a large number of possible alternative sets of past data, incorporating the randomness present in the residuals.
- In step 3, apply the same method (eg chain ladder) to carry out projections for each of the alternative sets of past data. This gives us a distribution of the possible reserve estimates, incorporating the randomness in the residuals.

## Bootstrapping and the ODP Model's Relationship to the Chain Ladder Method

- The ODP model is a form of GLM, and we know we can bootstrap GLMs.
- In the ODP model, the incremental triangle is assumed to follow an ODP distribution (ie the variance of the incremental claim amounts is assumed to be proportional to the mean).
- The expected values obtained happen to be exactly the same as the basic chain ladder estimates.
- So we can perform a sleight of hand and fit the chain ladder model at those steps (fitting a GLM in steps 1 and 4, but using a chain ladder instead). A worked sketch of the whole procedure follows.
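
Below is a minimal sketch of the procedure on a made-up triangle, using unadjusted Pearson residuals and capturing parameter uncertainty only (process variance, and the usual degrees-of-freedom adjustment to the residuals, are omitted for brevity).

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative (made-up) cumulative claims triangle; np.nan = future cells.
C = np.array([
    [1000., 1800., 2100., 2200.],
    [1100., 2000., 2300., np.nan],
    [1200., 2200., np.nan, np.nan],
    [1300., np.nan, np.nan, np.nan],
])
n = C.shape[0]

def dev_factors(tri):
    # Volume-weighted chain ladder development factors.
    return np.array([
        np.nansum(tri[: n - k - 1, k + 1]) / np.nansum(tri[: n - k - 1, k])
        for k in range(n - 1)
    ])

def total_reserve(tri):
    # Project each origin year to ultimate and sum the reserves.
    f, total = dev_factors(tri), 0.0
    for i in range(1, n):
        ult = tri[i, n - 1 - i] * np.prod(f[n - 1 - i:])
        total += ult - tri[i, n - 1 - i]
    return total

# Step 1: back-fit the triangle, ie the incremental amounts the model says we
# 'should' have seen.  Fitted cumulatives come from unwinding the latest
# diagonal by the development factors.
f = dev_factors(C)
fit_cum = np.full_like(C, np.nan)
for i in range(n):
    fit_cum[i, n - 1 - i] = C[i, n - 1 - i]
    for k in range(n - 2 - i, -1, -1):
        fit_cum[i, k] = fit_cum[i, k + 1] / f[k]
inc = np.diff(np.nan_to_num(C), prepend=0.0, axis=1)
fit_inc = np.diff(np.nan_to_num(fit_cum), prepend=0.0, axis=1)
mask = ~np.isnan(C)

# Step 2: Pearson residuals on the observed cells.
resid = (inc[mask] - fit_inc[mask]) / np.sqrt(fit_inc[mask])

# Steps 3-5: resample residuals, invert to pseudo-data, refit, record reserve.
reserves = []
for _ in range(2000):
    r_star = rng.choice(resid, size=mask.sum(), replace=True)
    pseudo_inc = fit_inc.copy()
    pseudo_inc[mask] = fit_inc[mask] + r_star * np.sqrt(fit_inc[mask])
    pseudo_cum = np.where(mask, pseudo_inc.cumsum(axis=1), np.nan)
    reserves.append(total_reserve(pseudo_cum))

reserves = np.array(reserves)
print(f"best estimate reserve : {total_reserve(C):8.0f}")
print(f"bootstrap s.e.        : {reserves.std(ddof=1):8.0f}")
print(f"75th / 95th percentile: {np.percentile(reserves, 75):.0f} / "
      f"{np.percentile(reserves, 95):.0f}")
```
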
## Estimating Parameter and Process Uncertainty from a Bootstrapped Model

- Bootstrapping the ODP provides the parameter variance. By also simulating an observed claims amount for each future cell from an appropriate distribution, we can estimate the process variance as well.
- The simulation of the future cells gives an estimate of the process uncertainty, which is governed by the scale parameter (the relationship between the mean and the variance in the ODP model).
- The parameter uncertainty is found by repeating step 4, since we obtain a set of refitted parameters for each pseudo-dataset, which gives us a full distribution of parameters.
- The combination of the parameter uncertainty and the process uncertainty gives us the uncertainty of the projection:
  - Prediction variance = Estimation variance + Process variance

## Actuary-in-the-box

- This is a method that begins with a best estimate reserve at the start of the year and a defined algorithm that has been used to derive those reserves and can be repeated at a future point in time.
- The key purpose of the method is to produce an estimate of the reserve uncertainty over a one-year time horizon.
- The algorithm might be a standard chain ladder procedure, or may involve other methods, such as Bornhuetter-Ferguson.
- The algorithm may include other features, such as smoothing of development factors and the estimation of tail factors.

## The Actuary-in-the-box Method's Key Requirement

- The algorithm must be repeatable without any element of subjective input.

## The Actuary-in-the-box Method's Key Steps

- Simulate the claims development for the following year.
  - The methods used here can include bootstrapping or a parametric simulation from a statistical distribution.
  - Allowance can also be made for process error.
- Reapply the algorithm to the data triangle including the additional year of simulated claims development, to produce a best estimate of the reserves at the end of the following year.
- The claims development result can then be calculated as the difference between the estimates of ultimate claims at the start and at the end of the year.
- The process is repeated for a suitable number of simulations, with a different claims development for the following year generated under each simulation.
- This produces a full empirical distribution for the claims development result, from which any required statistics (eg percentiles) can be derived.
- This can be considered for individual cohorts or across all cohorts combined.
- The procedure can be enhanced further by making an explicit adjustment for inflation.
- This approach can also be extended so that the claims development in more than one future year is simulated.
  - This enables the uncertainty in the claims development result for each year between now and ultimate to be estimated.
- This approach (also known as the 're-reserving' approach) is superior to the Merz-Wüthrich formula described above, as it can be used to derive the full distribution of the claims development result in each future year, and it allows a tail factor to be incorporated in the process. A compact sketch of the simulation loop follows.
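
Here is a compact sketch of one version of this loop, using a plain chain ladder as the repeatable algorithm on a made-up triangle. The lognormal noise with a 5% coefficient of variation used to simulate the next diagonal is purely an illustrative assumption, standing in for a bootstrap or other parametric simulation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Made-up cumulative triangle.  The reserving 'algorithm' here is a plain
# volume-weighted chain ladder: mechanical, hence repeatable with no
# subjective input, as the method requires.
C = np.array([
    [1000., 1800., 2100., 2200.],
    [1100., 2000., 2300., np.nan],
    [1200., 2200., np.nan, np.nan],
    [1300., np.nan, np.nan, np.nan],
])
n = C.shape[0]

def factors(tri):
    # Development factors from all column pairs currently known.
    f = np.ones(n - 1)
    for k in range(n - 1):
        have = ~np.isnan(tri[:, k]) & ~np.isnan(tri[:, k + 1])
        f[k] = tri[have, k + 1].sum() / tri[have, k].sum()
    return f

def total_ultimate(tri):
    # The repeatable algorithm: project every origin year to ultimate.
    f, total = factors(tri), 0.0
    for i in range(n):
        last = np.flatnonzero(~np.isnan(tri[i]))[-1]
        total += tri[i, last] * np.prod(f[last:])
    return total

ult_now, f_now = total_ultimate(C), factors(C)

cdr = []
for _ in range(5000):
    tri = C.copy()
    # Simulate one further year of development for each open origin year:
    # lognormal noise (5% CoV) around the chain ladder expectation.
    for i in range(1, n):
        k = n - 1 - i                             # latest known column
        tri[i, k + 1] = tri[i, k] * f_now[k] * rng.lognormal(-0.5 * 0.05**2, 0.05)
    # Reapply the same algorithm to the extended triangle ('re-reserving')
    # and record the claims development result for the year.
    cdr.append(ult_now - total_ultimate(tri))

cdr = np.array(cdr)
print(f"one-year CDR: mean {cdr.mean():+7.1f}, s.e. {cdr.std(ddof=1):6.1f}")
print(f"99.5th percentile adverse CDR: {np.percentile(-cdr, 99.5):.1f}")
```
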
## Recognition or Emergence Pattern Methods

- When the ultimate reserve risk has been estimated, or derived from external sources, the emergence pattern method can be applied to estimate the reserve risk over a one-year horizon.
- It is based on the observation that the ultimate risk must emerge over time between now and ultimate.
- The method relies upon being able to estimate the proportion of the ultimate reserve risk that emerges over the next year.
- Typically, for short-tail classes the ultimate risk emerges quickly, whereas for longer-tail classes the emergence is slower.

## The Emergence Pattern Method's Uses

- The method might be used when there is insufficient data to allow analytical methods (eg the Merz-Wüthrich formula) or simulation methods (eg the actuary-in-the-box method) to be used to derive the one-year reserve risk.

## Estimating the Emergence Pattern

- Approaches include:
  - Expressing the ultimate reserve risk as the difference between the reserves at the 99.5th percentile and the best estimate reserves.
  - Using the coefficient of variation of the reserves on an ultimate time-horizon basis and finding a way to adjust this so that it is on a one-year basis.
  - Applying one or more one-year reserve risk methods and combining them with an estimate of the ultimate reserve risk to derive an emergence pattern, using smoothing if required.
  - Using suitable industry benchmarks or other data.
  - Using an estimated claims payment pattern.
  - Using an estimated claims payment pattern, but adding an element of stochastic variation by assuming that the proportion paid in each year follows some chosen statistical distribution.

## Aggregation Across Multiple Lines of Business and Correlations

- In the methods discussed so far, we consider a single line of business and derive a distribution of possible outcomes. From the overall financial perspective of a company, we need an aggregate distribution covering all lines of business.
- We can use analytical methods to aggregate across distributions; however, this can be difficult. Simulation methods provide a much simpler framework within which to aggregate across lines of business.
- After simulating the run-off on a class-by-class basis, we can then sum across lines of business and origin periods by simulation.
- However, we need to consider dependencies between lines of business and origin periods.
- Otherwise, we are assuming that the run-off between lines and origin periods is independent, and (on the basis that these are positively correlated) this will underestimate the variability of the aggregate distribution.

## Reasons for Correlation/Dependence

- Lines are impacted by similar events (eg a windstorm could impact both household and commercial property accounts).
- Legal changes often affect several lines of business (eg a change to the Ogden discount rate would affect both employers' liability and motor classes).
- Inflationary trends will affect many adjacent origin periods.
- The same claims team may handle claims from several lines of business, so changes to claims handling may impact more than one line.
- Problems with data may affect more than one line of business.

## Dependencies

- Dependencies are usually modelled using a copula and correlation matrix.
- Copulas will be introduced again later in the course.

## Copula and Correlation Matrix

- A copula is a way of building a multivariate distribution such that the dependencies of the underlying variables are represented.
- Some copulas require a correlation matrix to be specified (eg the Gaussian copula and the t-copula) but others do not (eg the Gumbel copula and the Clayton copula).
- Copulas are a more flexible (and complex) way of modelling multiple dependencies than using single correlations.

## User Requirements

- The user must specify:
  - The underlying loss distributions for the classes of business or origin periods.
  - A two-way correlation matrix between all the distributions.
  - The form of the copula.

## The Form of the Copula

- The form describes how the copula links the underlying distributions. For example, a Gumbel copula (described further later in the course) gives a strong correlation between the tails and is also non-symmetric, making it suitable for many insurance applications.
- It is usual to correlate the origin periods and the lines of business separately, to simplify the process of correlation. A small sketch of copula-based aggregation follows.
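
Here is a small sketch of copula-based aggregation of two lines of business, assuming a recent scipy is available; the lognormal reserve distributions and the 0.5 correlation are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Gaussian-copula aggregation of two lines of business.
corr = np.array([[1.0, 0.5],
                 [0.5, 1.0]])
chol = np.linalg.cholesky(corr)

n_sims = 100_000
z = rng.standard_normal((n_sims, 2)) @ chol.T   # correlated standard normals
u = stats.norm.cdf(z)                           # copula: correlated uniforms

# Map the correlated uniforms through each line's own reserve distribution.
line_a = stats.lognorm.ppf(u[:, 0], s=0.25, scale=5_000)   # eg property
line_b = stats.lognorm.ppf(u[:, 1], s=0.45, scale=2_000)   # eg liability
total = line_a + line_b

# For comparison: the same marginals aggregated assuming independence.
indep = (stats.lognorm.rvs(0.25, scale=5_000, size=n_sims, random_state=rng)
         + stats.lognorm.rvs(0.45, scale=2_000, size=n_sims, random_state=rng))

# Positive dependency fattens the tail of the aggregate distribution.
print(f"99.5th pct, correlated : {np.percentile(total, 99.5):,.0f}")
print(f"99.5th pct, independent: {np.percentile(indep, 99.5):,.0f}")
```
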
## Issues Surrounding Stochastic Reserving

- Stochastic reserving methods are limited by the quality of the underlying assumptions.

## Model Form Mismatches

- There can be mismatches between the type of model and the data to be used. The user should therefore take care to ensure that the data is appropriate for the form of the model being used.
- A common limitation is negative increments in the claims data; some examples of the problems this causes in different models are discussed below.
- For lognormal models, we must ignore any negative increments (because we take the log of the incremental movements).
  - Generally, this is not a problem for paid claims triangles (unless there are significant salvage or subrogation recoveries), but the method often does not work well for incurred claims data, where there are likely to be more instances of negative increments.
- The ODP model is slightly more flexible, because individual negative increments in any development period are possible, provided the development factor across the development period as a whole is greater than one.
- The Mack model is very flexible in its model form, because it allows negative increments and development factors less than one across a whole development period. However, while the Mack model can address this particular model-form limitation, like all methods based on the chain ladder methodology it is unable to allow for calendar year effects, and users should consider the limitations of the models being selected.

## Data Adjustments

- In some circumstances, data adjustments can be made to address these problems.

## Latent Claims

- The stochastic methods described above tend not to be suitable for certain types of claims, in particular latent claims, since they are only able to reflect the variability present in the available claims data.
- A key feature of latent claims is that, by their very nature, we do not know how they are going to develop in the long run.

## Possible Alternatives for Modelling Latent Claims

- Use an exposure-based method where assumptions are made about the volatility of the number of future claims and the average cost of future claims.
- In other words, we model the distributions of the claim numbers and the average claim amounts separately, and then combine them to find the distribution of the total claim amount (sketched after this list).
- The issue of latent claims not being reflected in past data is arguably part of a more general point: analytical methods will not capture the variability of any feature not already reflected in the claims data.
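
A minimal frequency-severity sketch of this idea; the Poisson claim-count and lognormal average-cost distributions are illustrative assumptions rather than fitted values.

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed distributions for the number and the average cost of future claims,
# combined by simulation into a distribution of the total claim amount.
n_sims = 100_000
n_claims = rng.poisson(lam=8, size=n_sims)                 # future claim counts
avg_cost = rng.lognormal(mean=np.log(50_000), sigma=0.6,   # average claim size
                         size=n_sims)
total = n_claims * avg_cost

print(f"mean total cost   : {total.mean():,.0f}")
print(f"95th / 99.5th pct : {np.percentile(total, 95):,.0f} / "
      f"{np.percentile(total, 99.5):,.0f}")
```
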
## Under-estimation of Variability

- There is a consensus that many of the methods described here tend to underestimate the underlying variability of reserves.
- For example, the central assumption of the Mack method, that development patterns are unchanged across origin periods, often does not hold in practice.
- More generally, the historical data may not capture all sources of variability to which the reserves may be subject in the future (eg potential changes in the Ogden discount rate, one-off increases in claims costs arising from court judgments, or a prolonged period of above-average inflation).
- When using the methods described here, it is important to use judgement and not to accept the results of any one method without question.

## Stochastic Reserving in Practice

- An assessment of the reasonableness or validity of the results is an essential stage of the overall process of applying stochastic reserving methods, before the results are communicated to the interested parties.
- Recent trends have seen a move towards scenario-based approaches to quantifying uncertainty, as these are easier to communicate and can be tailored to show more tangible and realistic illustrations of uncertainty than can be achieved with stochastic reserving.
- Stochastic reserving remains heavily used within capital modelling, as the distributions produced are necessary for the capital model.
- The detail of any assessment will depend on the methods being used and the purpose of the exercise, and is likely to involve the application of judgement and experience.
- Where the results are being used to estimate reserves at higher percentiles (eg the 99.5th), it is particularly important to validate the reasonableness of these results, as they are generally less reliable than estimates at lower percentiles.
- Most reserving software packages that include stochastic methods will include a range of numerical and graphical analyses to assist with the validation of the results.

## Examples of Validating Stochastic Results

- Reconciliation of stochastic results with deterministic results.
- Graphical review of results.
- High-level reasonableness checks of numerical diagnostics.
- Comparison of results against benchmarks.
- Back-testing of results.
- Applying stress and scenario tests.

## Alternative Approaches

- Bayesian methods
- Other methods

## The Bayesian Approach

- The prior distribution (which captures our beliefs based on what we know of the exposure) is combined with the likelihood (which reflects the probabilities of the future claims development deduced from the past claims data) to produce a posterior distribution.
- The posterior distribution reflects the probabilities of the future claims development deduced from both the past claims data and our beliefs based on what we know of the exposure.
- **The relationship is:**
  $$\text{Posterior distribution} \propto \text{Prior distribution} \times \text{Likelihood}$$
- A Bayesian approach can also be used for stochastic reserving.
- Under the Bayesian framework, the prior distribution of the model parameters is first chosen based on judgement or experience.
- Then the posterior distribution of the parameters is calculated using Bayes' formula.
- The choice of prior distribution depends on many factors, including the way in which the model is parameterised.
- The parameter being considered may or may not have a natural interpretation.
- In addition to the choice of prior distribution, the data used for parameterisation and the choice of model will influence the resulting posterior distribution.
- By combining the prior distribution, the data and the model choice, the posterior distribution will contain more information than the underlying prior distribution.
- Using simulation-based techniques such as Markov chain Monte Carlo (MCMC), a simulated distribution of parameters can be obtained.
- This approach is an alternative to bootstrapping for obtaining the distribution of parameters (ie the parameter uncertainty).
- The process variance still needs to be incorporated, which is done at the forecasting stage by simulating from the process distribution conditional on the parameters.

## Bayesian Advantages

- It provides a complete predictive distribution of the ultimate reserve.
  - For the other methods, even if the variance can be calculated, a closed-form distribution is not available (although bootstrapping can provide the parameters to simulate from).
- It explicitly shows the impact of judgements, which are reflected in the prior distribution.
  - For other methods, these judgements are typically implicit rather than explicitly stated. A minimal illustration of the Bayesian mechanics follows.
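
Here is a minimal sketch of the Bayesian mechanics, using a conjugate gamma-Poisson pair for a claim-count parameter (all numbers illustrative). With conjugacy we can sample the posterior directly; MCMC would be used when no closed form exists.

```python
import numpy as np

rng = np.random.default_rng(6)

# The 'parameter' is the expected number of claims per period.
prior_shape, prior_rate = 10.0, 2.0        # prior belief: mean 5 claims/period
claims = np.array([7, 4, 6, 8, 5])         # observed claim counts

# Conjugacy: Gamma(a, b) prior + Poisson data -> Gamma(a + sum(x), b + n).
post_shape = prior_shape + claims.sum()
post_rate = prior_rate + len(claims)

# Parameter uncertainty: simulate from the posterior distribution.
lam = rng.gamma(post_shape, 1.0 / post_rate, size=100_000)

# Process variance is added at the forecasting stage by simulating next
# period's claim count conditional on each sampled parameter.
next_period = rng.poisson(lam)

print(f"posterior mean of lambda: {lam.mean():.2f}")
print(f"predictive 95th percentile of next period's claims: "
      f"{np.percentile(next_period, 95):.0f}")
```
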
