November 28th, 2012

Taking Control of Quantifying Your Natural Catastrophe Risk: Part I

Posted at 1:00 AM ET

Elizabeth Cleary, Managing Director, Valerie Kloepfer, Managing Director, Imelda Powers, Ph.D., Global Chief Cat Modeler, Sherry Thomas, Head of Catastrophe Management - Americas and James Waller, Ph.D., Research Meteorologist

Amid the fast pace and frequent shifts of the market, a single business conversation can stand out. With the passage of time, one sees that it was a precursor to what would become a consistently held view - a sort of drumbeat of the times.

This brings to mind a recent conversation with a catastrophe model user. In early 2012 he reported that he did not want brokers to build models when a commercial option exists. His rationale was that the research and development investment is too steep, and a broker is unlikely to staff such an effort well enough to outperform the commercial model vendors. He also said - a point with which some could disagree - that he believes there is sufficient competition among the few vendors in the space, particularly after 2011. What he seeks is help in understanding whether the existing models "are any good": what specifically is good, justifiable and reasonable, and in which areas they could use improvement.

In the era of "owning your view of risk," it is no surprise that firms want more information about the commercial models to help them form their own opinions; otherwise, when shown multiple answers, however detailed, how could they possibly make an informed decision? While a firm can blend model answers to reduce model bias and model change uncertainty, the missing link is a well-founded understanding of how much emphasis to place on one model versus another, together with the documentation and reasoning needed to communicate it.
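Blending model answers is, at its simplest, a weighted average of each model's results. A minimal sketch in Python follows; the model names, weights and 100-year loss figures are hypothetical, purely for illustration, and choosing the weights is precisely the "missing link" described above:

```python
# Hypothetical blend of three models' 100-year return-period losses.
# Weights reflect a firm's view of each model's credibility and must sum to 1.
WEIGHTS = {"ModelA": 0.5, "ModelB": 0.3, "ModelC": 0.2}
LOSSES_100YR = {"ModelA": 120.0, "ModelB": 150.0, "ModelC": 90.0}  # USD millions

# Blended estimate: credibility-weighted average across models.
blended = sum(WEIGHTS[m] * LOSSES_100YR[m] for m in WEIGHTS)
print(blended)  # 123.0
```

The same weighting would typically be applied point-by-point along the full exceedance-probability curve, not just at a single return period.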

It is Guy Carpenter's responsibility as a reinsurance broker to lead the industry in probing all components of each model through direct questioning of modeling vendors, model testing and independent third-party validation of model components. While much of Guy Carpenter's model validation work in 2011 centered on specific observations related to the U.S. hurricane hazard - namely landfall frequency and storm surge - that work only scratched the surface. Of the three building blocks in a catastrophe model - hazard, vulnerability and the financial module - most model investigation work has historically focused on frequency and vulnerability, but not so anymore.

While the full range of model validation work from last year is too extensive to document in a single paper, several analyses provide helpful examples to a wide range of insurers. Some are detailed below, framed in terms of the client questions that prompted them.

Due to licensing restrictions, we cannot name the individual model vendors in a broadly distributed paper such as this. Herein, we refer to three existing commercially available models as Alpha, Beta and Gamma. We can share model-specific findings with individual (re)insurers in more detail through direct conversations, which also allows us to discuss how those findings apply to each client's portfolio.

Why Are Hurricane Deductible Impacts So Different Among ALPHA, BETA and GAMMA?

It is generally expected that wind/hurricane deductibles will be more meaningful (as a proportion of the total loss) for less severe events and lower loss return periods, and less noticeable for more severe events or higher loss return periods, where deductibles are expected to be exhausted and thus represent a smaller proportion of the total loss. This is a complex problem, with different implications for coastal properties versus inland properties. A few of our observations of wind/hurricane deductible behavior in ALPHA, BETA and GAMMA are summarized below.
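The basic intuition - that a deductible's share of the loss shrinks as event severity grows - can be sketched in a few lines of Python. The flat deductible and loss amounts are hypothetical (real hurricane deductibles are often a percentage of insured value, which complicates the picture):

```python
# Illustrative sketch, not any vendor's financial model: for a flat
# deductible, the fraction of the ground-up loss absorbed by the
# deductible shrinks as event severity grows, so the deductible
# "credit" matters most at low loss return periods.

def deductible_impact(ground_up_loss, deductible):
    """Fraction of the ground-up loss eliminated by a flat deductible."""
    if ground_up_loss <= 0:
        return 0.0
    return min(ground_up_loss, deductible) / ground_up_loss

DEDUCTIBLE = 10_000  # hypothetical flat wind deductible
for loss in (5_000, 20_000, 100_000, 1_000_000):
    print(loss, round(deductible_impact(loss, DEDUCTIBLE), 3))
# 5000 -> 1.0, 20000 -> 0.5, 100000 -> 0.1, 1000000 -> 0.01
```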

  • ALPHA and BETA often produce losses further inland than does GAMMA. The larger footprints of ALPHA and BETA produce losses in additional affected areas, but those tend to be lower losses and therefore have greater deductible impact.
  • Compared to ALPHA and GAMMA, BETA has more low intensity events in certain regions. These events generate lower losses, yielding an increased deductible credit in BETA compared to ALPHA and GAMMA.
  • BETA has lower loss uncertainty in some regions, thus deductible impacts are greater in those regions. This partially explains the similarity of the shape of the deductible credit curves to that computed deterministically, as shown in the graph below.

Figure 1


  • GAMMA often has steeper damage curves and larger coefficients of variation (indicated by the spread around the mean damage ratio). Hence, GAMMA tends to have the lowest deductible impact. Some of the first changes GAMMA is making in its Next Generation Platform relate to the financial engines; the simulation-based platform has the potential to yield a higher deductible impact.
  • The sample graph in Figure 1 shows that the ALPHA mean deductible impact does not always decrease as the mean damage increases. This can be explained by the fact that at very low mean damage ratios, a location may have a large probability of no loss and a small probability of total loss. The former generates no deductible impact, and the latter generates very little. This effect diminishes as the mean damage ratio increases; as with the other models, the larger damage ratios then dominate and the deductible impact decreases. The other two models have continuous damage functions with no positive weight on zero or total loss.
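The mixed-distribution argument in the last bullet can be made concrete with a short sketch; all parameter values are hypothetical and not drawn from any vendor model:

```python
# If damage is either zero or total, a flat deductible removes almost
# nothing: zero-loss outcomes generate no deductible impact, and total
# losses dwarf the deductible.

def expected_impact_two_point(p_total, tiv, deductible):
    """Mean fraction of expected loss removed by a flat deductible when
    the damage distribution puts weight only on 0 and total loss."""
    expected_loss = p_total * tiv
    if expected_loss == 0:
        return 0.0
    expected_retained = p_total * min(tiv, deductible)
    return expected_retained / expected_loss

TIV, DEDUCTIBLE = 500_000, 10_000

# A 1% mean damage ratio reached as a 1% chance of total loss: the
# deductible removes only 2% of the expected loss.
print(expected_impact_two_point(0.01, TIV, DEDUCTIBLE))  # 0.02

# The same 1% mean damage ratio as a certain small loss of 5,000 is
# absorbed entirely by the 10,000 deductible.
certain_loss = 0.01 * TIV
print(min(certain_loss, DEDUCTIBLE) / certain_loss)  # 1.0
```

This is why a model with point masses at zero and total loss can show a deductible impact that rises, rather than falls, as the mean damage ratio moves off zero.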

Click here to read Part II of “Taking Control of Quantifying Your Natural Catastrophe Risk” >>
