March 10th, 2009

Known Unknowns

Posted at 1:00 AM ET

Ryan Ogaard, Global Head of Instrat®

The “black swan” is trumpeting! Last year, we saw first-hand the effects of random, unforeseen, and massive events. Catastrophe models — the tools we use to forecast disaster and protect capital — were shown to be quite fallible, leaving balance sheets exposed to more risk than carriers realized. Yet maybe we’ve been a bit hasty in meting out blame. Catastrophe models have made great strides since they were first introduced, and our industry continues to use them for a reason. What has emerged is an essential tension between the unknown and our efforts to counteract it.

Nassim Nicholas Taleb is the father of the black swan concept. Introduced in a book bearing the phenomenon’s name, black swans represent the types of events that models simply cannot anticipate. This idea has received extra attention from insurers and reinsurers over the past few months. Many feel that models have let them down, particularly with Hurricane Ike. The storm produced insured losses for many companies that are multiples of catastrophe model estimates. Whether the event was large enough to meet Taleb’s black swan standard is up to him. But as we approach USD20 billion [as of this writing] in insured losses for Ike, the reality for our industry couldn’t be clearer.

Hurricane Ike’s unexpected severity was not an isolated incident. Rather, it is the latest example of a growing gap between modeled forecasts and actual losses. Over the past few years, several storms have exceeded estimates by wide margins, causing consternation among risk bearers. But, what other choices are available? For now, carriers can choose to use models or abandon them. Unsurprisingly, most will continue with the former. Though flawed, these tools do create value — mostly by forcing insurers and reinsurers to look at risk systematically.

In the past, insured losses were estimated by drawing concentric circles on a map. Each circle represented (very roughly) a band of wind speeds within which exposure concentration could be estimated. Multiply that exposure by some “destruction factor,” and a loss estimate emerges. This was a simple, yet not ridiculous, approach to the problem. Unfortunately, most of the parameters needed to make the method reasonably credible — such as circle size, wind speed by location, and expected damage for a given class of business — were lacking. The industry needed catastrophe models to address these shortcomings.
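For illustration only, here is a minimal sketch of that circle-and-factor arithmetic. The band widths, exposure totals, and destruction factors below are hypothetical placeholders chosen to show the mechanics, not calibrated industry parameters.

```python
# Illustrative sketch of the old concentric-circle loss estimate.
# All figures are hypothetical and exist only to demonstrate the arithmetic.

# Insured exposure aggregated within each concentric band around landfall,
# in USD (inner band = highest wind speeds).
exposure_by_band = {
    "0-25 miles": 8_000_000_000,
    "25-50 miles": 15_000_000_000,
    "50-100 miles": 30_000_000_000,
}

# Assumed "destruction factor" per band: the fraction of exposed value
# expected to be destroyed at that band's wind speeds.
destruction_factor_by_band = {
    "0-25 miles": 0.15,
    "25-50 miles": 0.05,
    "50-100 miles": 0.01,
}

# Loss estimate: sum of exposure times destruction factor across all bands.
estimated_loss = sum(
    exposure_by_band[band] * destruction_factor_by_band[band]
    for band in exposure_by_band
)

print(f"Estimated insured loss: USD {estimated_loss:,.0f}")
```

The weakness the paragraph describes is visible in the sketch: every number on the right-hand side of the calculation had to be guessed, and the estimate is only as good as those guesses.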

In addition to improving the measurement of risk, there is another reason why (re)insurers continue to use catastrophe models: everyone does. Largely due to rating agency pressure, the models have become an industry standard, and so the market is influenced heavily by their results. This leads buyers and sellers of risk to reference them as common points of understanding. Consequently, they offer a de facto data standard among risk traders. The contents, geographic resolution, and format of exposure data are now common throughout the industry, improving the overall awareness of risk and the transparency of risk trades tremendously. Better information has helped change underwriting guidelines, set exposure limits, and push prices closer to the true cost of risk. Even if the models are “wrong,” they have decreased the likelihood of a black swan.

Ironically, for all the utility of models, we never really believed them anyway … which is why we will continue to use them. Professionals who are familiar with the models have long understood that they are not built to estimate the losses of a single event. Catastrophe models are meant to look at long-term probabilities of loss to a portfolio. An individual storm, earthquake, or tornado, for example, has nuances that a simplified mathematical proxy will not anticipate. Can we find a black swan in this shortcoming? That is a possibility that model users must keep at the front of their minds.

Hurricane Katrina might be called a black swan, as it triggered a series of knock-on effects — the failure of the levees in New Orleans, long-term evacuation, and legal challenges, for example — that pushed losses far beyond standard estimates for a hurricane of Katrina’s magnitude. Hurricane Ike, too, began as an ordinary hurricane. But, as it traveled inland, it merged with another weather system and became something more than a hurricane, with a longer life, a longer path, and a wider swath of destruction. Models only address scenarios within a given scope of activity. It might seem painfully obvious to state that a hurricane model’s scope is hurricanes, but we forget this when we expect hurricane models to deal with broken levees and litigation.

Catastrophe models will continue to play an important role in risk analysis, but their reputation has not emerged unscathed from recent performance. Unforeseen chains of events are not the only sources of model error — the models’ inner workings do have room for improvement. Even with theoretically flawless technology, though, black swans swim by from time to time, and by definition, they will evade model detection. With this in mind, catastrophe models have already served the industry well — they have helped us learn about our lack of knowledge. Perhaps even Mr. Taleb would agree: that’s real progress.

Originally published in Reinsurance magazine
