## Managing Catastrophe Model Uncertainty, Issues and Challenges: Part III, Using Cat Models

John Major, Director of Actuarial Research, GC Analytics®


Scenario analysis has a long history in risk management. By examining a set of hypothetical extreme events and asking “what if this were to happen?”, management can begin to get a sense of vulnerabilities in the business. But it is hard to assess how realistic a particular scenario might be. Using historical events as the basis for scenarios incorporates the fact that those events did, in fact, occur. They are realistic by definition. And their relative frequency of occurrence over time gives a sense of their probability.

A cat model can be viewed as an extension of the historical event scenario method. The history of events is smoothed and interpolated (and, to some extent, extrapolated) to create a large number of might-have-been events of comparable realism.

Exposure mapping and accumulation analysis tools are also widely used for assessing concentrations of risk. These have the advantage of dealing with concrete, unequivocal reality. But it is difficult to compare the risk implications of exposure concentrations in different geographic areas or across different building types.

A cat model can be viewed as an extension of exposure mapping as well. But it adds to that picture a model of the other factors that come together to produce losses.

Models also serve as tools for communication. By providing a benchmark for risk, they enable market participants to “take a position” relative to that risk. This is especially important in securitization.

The fact that cat models have a degree of uncertainty around them is not a defect of the models - it is an inescapable aspect of reality. Uncertainty affects all risk assessment tools. The challenge is to recognize that uncertainty and cope with it.

There are several avenues to coping with uncertainty in cat models. One is to utilize a multi-model approach. Others include making decisions that are robust with respect to uncertainty, and formally recognizing uncertainty as an element of risk - model risk - within the firm’s ERM framework.

In every case, users need to remember that the catastrophe model isn’t a magic black box that will generate a definitive answer. Cat models are one type of tool and take their place alongside several others.

**Reducing Uncertainty With a Multi-Model Approach**

We advocate, in most (if not all) cases, the use of multiple models when assessing catastrophe risk. By examining the previously discussed four sources of uncertainty one by one, we can see how a multi-model approach can help reduce uncertainty.

**Sampling error from limited data:** More data means less uncertainty, but there is a limited amount of data in the historical record. While different models are founded on this common data, they are not founded on identical data. They have different interpretations of the historical record, different interpretations of detailed scientific data (for example, wind fields or earthquake propagation), different sources of vulnerability (damageability) data and different data on site conditions. Using multiple models effectively increases the amount of data bearing on the analysis.

**Model specification error from choice of mathematical forms and assumptions:** Different vendors use different methodologies in developing their models. They have different estimation, fitting and smoothing techniques as well as different representations of scientific details. Using multiple models “diversifies” the risk of error from these choices by allowing independent errors to offset one another.

**Nonsampling error from missing influential factors and numerical error from simulation procedures:** Model vendors differ somewhat in the factors they consider in calculating cat risk. This can be seen, for example, in how they utilize the various climate cycles in developing “near term” versus “long term” frequency assumptions. Using multiple models diversifies these sources of error to some degree.

While the use of multiple models does not address the fundamental issue of uncertainty identified in Dr. Miller’s seminal paper (because it is based on the historical data common to all modelers), it does diversify many other sources of uncertainty and error identified in *Uncertainty in Catastrophe Models.*

There are several ways to utilize multiple models. The simplest approaches involve some sort of “blending.” More complex forms are “morphing” and “fusion.”

**Model blending** can involve model outputs or simulated scenarios.

Two exceedance probability (EP) curves can be averaged either by averaging dollars across common return periods, or by averaging probabilities or return periods across common loss thresholds. The averaging can be a straight 50 percent - 50 percent mix, or some other weighting if there is reason to apply it, and it can be done on an arithmetic (additive) or geometric (multiplicative) basis.
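The averaging described above can be sketched in a few lines. This is a minimal, hypothetical illustration: the function name, the weight convention (`w` on model A) and the example probabilities are all assumptions, not a vendor methodology.

```python
def blend_ep_curves(ep_a, ep_b, w=0.5, geometric=False):
    """Blend two EP curves defined on a common set of loss thresholds.

    ep_a, ep_b map loss threshold -> annual exceedance probability.
    w is the weight on model A. geometric=True uses a weighted
    geometric (multiplicative) mean instead of the arithmetic mean.
    """
    blended = {}
    for x in ep_a:
        pa, pb = ep_a[x], ep_b[x]
        if geometric:
            blended[x] = (pa ** w) * (pb ** (1.0 - w))
        else:
            blended[x] = w * pa + (1.0 - w) * pb
    return blended

# Hypothetical annual exceedance probabilities at three loss thresholds
model_a = {100: 0.040, 250: 0.010, 500: 0.002}
model_b = {100: 0.060, 250: 0.020, 500: 0.004}

fifty_fifty = blend_ep_curves(model_a, model_b)          # straight 50/50 mix
weighted = blend_ep_curves(model_a, model_b, w=0.7)      # 70/30 toward model A
```

The same pattern applies when averaging dollars across common return periods; only the axis being averaged changes.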

If one has access to the underlying simulated loss scenarios, then a composite EP curve can be developed by combining scenarios from two models. The frequencies of the scenarios need to be re-weighted so that the overall combined scenario set represents the original total frequency.
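A simple sketch of the scenario-pooling idea, under the assumption that each event carries an annual occurrence rate and that each model's rates are scaled by its blending weight so the pooled set does not double-count frequency. The function names and data shapes are illustrative only.

```python
def combine_event_sets(events_a, events_b, w=0.5):
    """Pool simulated event sets from two models.

    events_a, events_b are lists of (loss, annual_rate) pairs.
    Each model's rates are scaled by its weight so the combined
    set represents a blend of the original total frequencies
    rather than their sum.
    """
    combined = [(loss, rate * w) for loss, rate in events_a]
    combined += [(loss, rate * (1.0 - w)) for loss, rate in events_b]
    return combined

def exceedance_curve(events, thresholds):
    """Annual exceedance frequency at each loss threshold
    (occurrence rates of qualifying events simply add)."""
    return {x: sum(r for loss, r in events if loss >= x) for x in thresholds}

# Hypothetical event sets: (loss, annual rate)
events_a = [(100, 0.02), (300, 0.01)]
events_b = [(150, 0.04)]
combined = combine_event_sets(events_a, events_b)
```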

Figure 4 shows a simple example of model blending. Two EP curves were combined by taking the arithmetic mean of probabilities over a set of dollar intervals. The figure shows the uncertainty “funnels” for each individual curve, and the corresponding funnel for the combined curve. The combined funnel is a bit narrower than the others due to the diversification effects discussed previously.

The decision of how to apply weights and which weights to apply across multi-model opinions is complex. It requires, first, a deep understanding of the individual models and their relative adequacy to the risk/region being assessed, which can then inform which model results to over or underweight in that assessment. The specific methodology for optimally determining and applying the weights is still unresolved. This is an area of ongoing focus for Guy Carpenter’s research team and we will be publishing our findings as they are established.

**Model morphing** means changing shape. In model morphing, one wants to change some aspect of model A to resemble that of model B, typically because of implementation difficulties in using both models.

Morphing can be done on a frequency or severity basis. As an example, model B might have more credible estimates of landfall frequency in a particular region. Model A’s frequencies for relevant events can be multiplied by some factor to make the total frequency in the region equal to that of model B.
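The frequency-morphing step above amounts to one scaling factor. The sketch below is a hypothetical illustration of that arithmetic; the event-record fields and region labels are assumptions.

```python
def morph_frequency(events_a, target_total, region):
    """Scale model A's event rates in one region so that the region's
    total annual frequency matches model B's estimate (target_total).

    events_a: list of dicts with keys 'region', 'rate', 'loss'.
    Events outside the region are left unchanged.
    """
    current = sum(e['rate'] for e in events_a if e['region'] == region)
    factor = target_total / current
    return [
        {**e, 'rate': e['rate'] * factor} if e['region'] == region else dict(e)
        for e in events_a
    ]

# Hypothetical event set: model B says regional frequency should be 0.10
events = [
    {'region': 'FL', 'rate': 0.02, 'loss': 100},
    {'region': 'FL', 'rate': 0.03, 'loss': 200},
    {'region': 'TX', 'rate': 0.01, 'loss': 50},
]
morphed = morph_frequency(events, target_total=0.10, region='FL')
```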

Morphing severity, however, is more challenging. There, an entire severity curve - not just one number - must be reshaped.

It is incumbent on the user, at the very least, to delve into the model to determine the extent to which it adequately reflects the underwriting and claims handling practices of the firm. Suggestions to gain comfort include: post-event comparison of actual claims to modeled losses, a review of vulnerability sensitivity with claims or loss control staff, and analysis of the spatial loss gradient. It is not uncommon for certain models to perform better for certain perils, regions or classes of risk.

**Model fusion** is akin to building one’s own model. Cat models consist of multiple modules. The hazard or science module expresses the probabilistic occurrence of events. There are multiple components for different perils and for different aspects of the same peril. The vulnerability or engineering module translates events into damages to insured properties. There are multiple components for the various types of properties.

If one has access to model components, they can be used as modules in a customized framework. Typically, this is not possible because of licensing restrictions. However, a similar effect often can be achieved by recombining model outputs. For example, results for different lines of business or different property types from different models can be combined. Naturally, the complexities of coordinating multiple model outputs make this very challenging.
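One simple form of output recombination is to take each line of business from the model deemed most credible for it. This sketch is a deliberately simplified assumption: it fuses average annual losses (AALs), which add directly, whereas fusing full EP curves across lines would require dependence assumptions between models. All names are hypothetical.

```python
def fuse_by_line(results, preferences):
    """Fuse per-line results by drawing each line of business from a
    preferred model.

    results: results[model][lob] -> average annual loss (AAL).
    preferences: preferences[lob] -> name of the model to use.
    Returns fused AAL by line plus a 'total' entry.
    """
    fused = {lob: results[model][lob] for lob, model in preferences.items()}
    fused['total'] = sum(fused.values())
    return fused

# Hypothetical AALs (USD millions) from two licensed models
results = {
    'A': {'residential': 10.0, 'commercial': 5.0},
    'B': {'residential': 12.0, 'commercial': 4.0},
}
# Suppose model A is preferred for residential, model B for commercial
fused = fuse_by_line(results, {'residential': 'A', 'commercial': 'B'})
```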

It is important to remember what can and cannot be achieved by using multiple models. It cannot overcome the limitations of historical data, which remain the primary driver of uncertainty. Nor can it overcome data errors. Using multiple models can, however, “diversify” and therefore reduce the other types of errors. An additional benefit is its tendency to smooth out the disruptive impact of model changes.

**Incorporating “The Uncertainty Factor”**

Another important element in successfully coping with uncertainty is to craft decisions that are “robust” with respect to model error. Sensitivity or “what-if” testing and other techniques can be used to determine, for example, how well a particular reinsurance program or underwriting initiative would appear if the model results were different. Statistical techniques are required to determine how far it is reasonable to “bend” the model results in this assessment.
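A crude sketch of such a “what-if” test: shock the modeled event losses up and down and observe how the expected recovery of a single excess-of-loss layer responds. The layer terms, shock levels and loss sample are all hypothetical, and a real analysis would weight events by frequency rather than averaging them equally.

```python
def layer_recovery(loss, attachment, limit):
    """Recovery from an excess-of-loss layer written as limit xs attachment."""
    return min(max(loss - attachment, 0.0), limit)

def sensitivity_test(event_losses, attachment, limit, shocks=(0.8, 1.0, 1.2)):
    """Average layer recovery under multiplicative shocks to modeled
    event losses - a simple stand-in for possible model error."""
    out = {}
    for s in shocks:
        recoveries = [layer_recovery(loss * s, attachment, limit)
                      for loss in event_losses]
        out[s] = sum(recoveries) / len(recoveries)
    return out

# Hypothetical modeled event losses against a 100 xs 100 layer
result = sensitivity_test([50.0, 150.0, 300.0], attachment=100.0, limit=100.0)
```

If the program's performance is acceptable across the full shock range, the decision is robust to that degree of model error.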

A third key element is to recognize uncertainty formally within the company’s ERM process. At the very least, this means managing the expectations of the consumers of model output: management, external stakeholders or rating agencies. It also means managing their understanding of the model results - whether derived from a single or multi-model approach. The firm’s understanding of its absolute level of risk transcends catastrophe risk. It is the proper subject of ERM, and encompasses financial, operational and strategic risks as well as catastrophe risks. Many of these other types of risks cannot be quantified even to within cat modeling’s order-of-magnitude uncertainty. Indeed, some are hardly subject to quantification at all.

In particular, the ERM framework should be cognizant of modeling uncertainty and the prospects for model change, namely model risk.

External stakeholders also will absorb cat model uncertainties and changes better if they see them as embedded in an ERM framework. The more comprehensive and structured the ERM framework - and the more it is actually used in the firm’s operations - the more model changes will be seen as routine “information updates” to the firm’s understanding of its risk.