John Major, Director of Actuarial Research, GC Analytics®
Uncertainty is ever present in the insurance business, and despite relentless enhancements in data gathering and processing power, it is still a large factor in risk modeling and assessment. This realization, driven home by model changes and recent unexpected natural catastrophes, can be disconcerting – even frightening – to industry participants. But companies that understand the vagaries of model uncertainty and take a disciplined, holistic approach to managing the catastrophe modeling process are well positioned to adapt and outperform the competition.
In this report, presented this week on GC Capital Ideas, we examine effective modeling management as it relates to property catastrophe models used by primary writers of property insurance. We advocate adopting a multiple catastrophe-model approach to better estimate risk and control uncertainty. A broader discussion follows, suggesting how the industry should incorporate model uncertainty in its consideration of catastrophe risk.
Since the introduction of the first commercially available catastrophe (cat) models in the late 1980s, models have evolved, driven by improved science and the knowledge acquired from more recent catastrophes. Today, there are several major commercial vendors of modeling services, and virtually every insurer or reinsurer uses some model – its own, a vendor’s, or, in many cases, more than one.
Despite considerable refinement of the models over the decades, uncertainty remains – and it is a significantly bigger factor than many users may recognize. In 1999, Guy Carpenter & Company published estimates of the amount of uncertainty in U.S. hurricane risk models. The conclusion: a two-standard-error interval (a plausible range, spanning one standard error on either side of the estimate, that has roughly a 68 percent chance of including the true, but unknown, value) for a national writer’s 100-year or higher probable maximum loss (PML) runs from 50 percent to 230 percent of the PML estimate produced by the model.
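To see what a band that wide means in practice, consider a purely hypothetical national writer whose model reports a USD 100 million 100-year PML. The figure and the helper below are illustrative only – they apply the published 50 percent to 230 percent band, not anything from the 1999 study itself:

```python
# Illustrative sketch: applying the published uncertainty band
# (50% to 230% of the modeled PML) to a hypothetical PML estimate.
def uncertainty_band(pml_estimate, lower=0.50, upper=2.30):
    """Return the (low, high) ends of the uncertainty band around a PML."""
    return pml_estimate * lower, pml_estimate * upper

low, high = uncertainty_band(100.0)  # USD millions
# The "true" 100-year loss could plausibly lie anywhere from ~$50M to ~$230M.
```

The point of the exercise: a single modeled number of USD 100 million conceals a plausible range more than four times as wide as its lower end.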
The “uncertainty band” around a typical PML curve paints a more realistic – and much less precise – picture of catastrophe model output.
Advances within the modeling industry since 1999 have indeed reduced the width of the uncertainty band, but modeling smaller geographic areas introduces additional uncertainty of its own. Even today, confidence intervals can be estimated only crudely.
While models have considerable uncertainty associated with them, they are still valuable tools, taking their place with scenario analysis and exposure accumulation studies. In fact, they can be viewed as extensions of both of these types of analyses.
Coping successfully with cat model uncertainty involves a number of approaches. In many cases, multiple models can be engaged to help narrow the uncertainty band. Multi-model techniques include “blending” (averaging) the model outputs, “morphing” one model’s output to reflect the characteristics of another’s, or “fusing” model components (or at least outputs) into what is in effect a new model. In each case, the selection of appropriate weighting parameters and methodologies is critically important and needs to be informed by the strengths and shortcomings of each model.
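As a minimal sketch of the simplest of these techniques – blending – the snippet below takes a credibility-weighted average of two models’ PML estimates at common return periods. The model outputs, weights, and helper name are hypothetical; in practice, as noted above, the weights must be informed by each model’s strengths and shortcomings rather than chosen arbitrarily:

```python
def blend_pml(curves, weights):
    """Credibility-weighted average of PML estimates, keyed by return period."""
    return {rp: sum(w * curve[rp] for curve, w in zip(curves, weights))
            for rp in curves[0]}

# Hypothetical 100- and 250-year PMLs (USD millions) from two vendor models.
model_a = {100: 250.0, 250: 400.0}
model_b = {100: 310.0, 250: 520.0}

# Weighting model A at 60% and model B at 40% (illustrative weights).
blended = blend_pml([model_a, model_b], weights=[0.6, 0.4])
```

Morphing and fusing are more involved – they operate on event sets and model components rather than summary curves – but the same principle applies: the combination rule itself embodies a judgment about each model’s credibility.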
In addition to helping to reduce some (but not all) sources of uncertainty, a multiple model approach can also help smooth out the impact of individual model changes – which seem to have an increasingly acute effect on the industry.
More broadly, we encourage companies to embed awareness of model uncertainty into their overall enterprise risk management (ERM) process, and make catastrophe-risk-oriented decisions with a conscious eye towards the possibility of model error.
The issues of model uncertainty and change pose many difficult challenges for the industry. The “black box” should no longer be left to make the decisions. Rather, it should be considered a tool to help inform decisions made by (human) professionals. This is an intuitive and straightforward prescription, but making it happen will require the consideration and engagement of virtually every group in the industry.
- Modeling firms need to step up and lead the discussion about uncertainty despite the apparent competitive disadvantage of transparency.
- Primary writers need to be smarter consumers of models and model output, curtailing the blind application of “portfolio optimization” in favor of a broader ERM-based multi-model approach. They also need to rethink their attitudes about nontraditional risk transfer products.
- Reinsurers, already sophisticated model users, should not take advantage of information asymmetry, but rather explore which new products might make sense.
- Rating agencies and solvency regulators likewise need to investigate the models to determine when a model is being used appropriately. They need to understand that “the map is not the territory” – model output is relative information, not absolute gospel – and that firms need time to absorb and act upon this information when model changes occur.
- Boards of directors, investors and stock analysts need to understand cat risk in the same terms as other financial risks – that is, as an estimate carrying significant uncertainty. Insureds and the public need to understand that no one really knows the right answer.
- Brokers, finally, need to stay out in front to facilitate education, communication and fair dealing.