John Tedeschi, ACAS, MAAA, Managing Director and Chief of Catastrophe Modeling
Insurer and reinsurer reliance on catastrophe models has become part of the fabric of risk management. Though they provide guidance rather than specific courses of action, these tools help carriers quantify risk and deploy their capital as effectively as possible. But every catastrophe model has specific strengths and weaknesses, which is why risk-bearers tend to use several of them to evaluate exposures. The final decision on whether to cover a particular risk is shaped by loss history, company objectives, and risk manager judgment. As a result, models are crucial to (re)insurer success … as long as they are used properly.
The losses from catastrophic events can have a dramatic impact on property risk treaties, and their effects ripple through the entire chain of primary pricing (and coverage) for both property risk and catastrophe treaties. Understanding the interplay among the issues that can drive insured losses — and how the catastrophe modeling firms address them — can materially affect the structure of a reinsurance program.
Determining projected financial implications tends to be the simplest aspect of a catastrophe model, but it can reveal some of the weaknesses in the hazard and vulnerability analyses. To make the financial component more powerful, modeler expertise on a region or peril needs to be supplemented with an intricate understanding of the insurance and reinsurance structures involved, which is usually provided by the reinsurance broker. Thus, hazard, vulnerability, and financial factors converge — via model analysis, broker insights, and a cedent’s knowledge of the risks it covers. When this happens, coverage is optimized to help risk-bearers maximize their possible returns on capital deployed.
It took many months for the reinsurance industry to digest the effects of Hurricane Ike, and its impact continues. One of the largest natural catastrophes in history in terms of insured losses, the Gulf of Mexico storm was in part overshadowed by the simultaneous financial catastrophe that struck business centers around the world. The implications of both emerged at the same time, as risk managers sought to keep pace with damage to both the asset and liability sides of the balance sheet. So, how did the major catastrophe models miss the possibility of an Ike-strength hurricane?
In the hunt for answers, many reinsurers began to question whether their exposure data was of high enough quality. Several potential flaws were identified, from modeling firm data to insurer-supplied data to insufficient imagination in identifying key risks and exploring scenarios. There was no single cause of the vast change in loss estimates in the half-year following Hurricane Ike, and the (re)insurance industry remains focused on applying remedies to all areas of risk and capital management. The use of models, however, has attracted particular attention, as there is plenty of room for improving how they are applied to risk identification and evaluation — and ultimately to reinsurance decision-making.
Among the gaps in model usage was the absence of customized approaches to risk-bearers’ portfolios. Typically, each modeling firm uses a standard approach — a baseline, effectively — that constitutes an excellent starting point. What was missing last year, though, was a deeper look at each cedent’s situation in order to gauge the reliability and applicability of exposure and claims data; measurement against company risk appetites, tolerances, and profiles; and the development of solutions that accurately reflect a carrier’s strategy as a whole.
Instead of focusing solely on one model, for example, reinsurers may need to broaden their scope to include different data sources, varied models, and a wider evaluation of the factors that could contribute to outsized insured losses. Models are tools to be considered within a larger field of risk and capital management capabilities, and over-reliance can cause unexpected losses, the reverberations of which could reach all the way to market capitalization.
Historically, reinsurers have relied on past events to plan for the future: loss history is an important source of risk management information. Unfortunately, there tends to be limited experience in certain domains. Emerging threats, developing markets, and climate change, for example, require forecasting without the benefit of past results. Catastrophe models create information where none would exist otherwise to enable analysis and effective risk management decision-making. These same techniques also enhance the management of established risks.
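The core idea of creating information where no loss history exists can be sketched as a small Monte Carlo simulation: event counts follow a Poisson process and per-event losses a heavy-tailed severity distribution. All parameters below are hypothetical, chosen purely for illustration; a real catastrophe model builds these distributions from hazard science and exposure data.

```python
import random

random.seed(2009)

def simulate_year(annual_rate=0.8, mu=2.0, sigma=1.0):
    """Simulate one year of catastrophe losses (USD millions).

    Event arrivals follow a Poisson process (exponential inter-arrival
    times); each event's loss is lognormal. Parameters are hypothetical.
    """
    t, total = 0.0, 0.0
    while True:
        t += random.expovariate(annual_rate)  # time to next event
        if t > 1.0:                           # past year-end: stop
            return total
        total += random.lognormvariate(mu, sigma)

# Generate a synthetic "loss history" of 10,000 years
years = sorted(simulate_year() for _ in range(10_000))
aal = sum(years) / len(years)                 # average annual loss
loss_100yr = years[int(0.99 * len(years))]    # 1-in-100-year annual loss
print(f"Simulated AAL: {aal:.1f}M; 100-year loss: {loss_100yr:.1f}M")
```

Even this toy version shows why simulation is valuable: the tail estimate (the 1-in-100-year loss) far exceeds anything a short observed history would reveal.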
The assumption and transfer of risk is not isolated. Catastrophe models help carriers bridge the gap between events and financial performance by relating risk to premium. A successful portfolio generates sufficient premium revenue to offset losses and expenses while still delivering returns to meet company revenue, earnings, and ROE targets. Catastrophe models only reach part-way across the risk/return chasm, though. The rest comes from supplemental tools and expertise: internal and proprietary models, reinsurance brokers, and risk management specialists, which together ascertain the impact of rate levels and disaster scenarios on company financials. Guy Carpenter’s i-aXs® platform, for example, integrates with third-party catastrophe models to provide satellite imagery, real-time data feeds, and risk assessment information to add timely insights to underwriting decisions.
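The risk/return arithmetic described above can be sketched in a few lines. The figures are hypothetical: the modeled average annual loss (AAL) would come from a catastrophe model, while premium, expense, and capital figures are company data.

```python
# Hypothetical portfolio figures, USD millions
premium = 100.0           # annual premium revenue
modeled_aal = 55.0        # catastrophe-model average annual loss
expenses = 30.0           # acquisition and operating expenses
allocated_capital = 60.0  # capital supporting the portfolio

# Expected underwriting result and return on the capital deployed
underwriting_profit = premium - modeled_aal - expenses
roe = underwriting_profit / allocated_capital

print(f"Expected underwriting profit: {underwriting_profit:.1f}M")
print(f"Return on allocated capital: {roe:.1%}")
```

The point of the sketch is the dependency chain: the model supplies only `modeled_aal`, so any error there flows directly into the ROE a carrier believes it is earning.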
With the uncertainty that pervades catastrophe risk management, no single tool is enough. Instead, reinsurers should gather as much relevant information as possible, using all the resources at their disposal. Diversifying model usage across the three major vendors mitigates the risk that useful insights will be missed, helping carriers to maximize their returns and increase shareholder value. Additionally, the modeling effort should reach beyond the catastrophe modeling firms. Tools and techniques from other sources should be included in the process — such as scenario-based investigation including claims department input, reinsurance broker-conducted analyses, and proprietary catastrophe modeling solutions.
By linking a portfolio’s likelihood of success to the results of only one catastrophe model, a (re)insurer assumes a considerable amount of risk – unnecessarily. There are three major catastrophe model vendors for a reason: each brings a set of strengths to the market. Further, the additional tools and insights available within cedent organizations or from their reinsurance brokers can yield even more relevant information to facilitate effective risk transfer. Casting a wide net can result in a considerable performance upside.
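One simple way to avoid tying a portfolio to a single model is a credibility-weighted blend of the vendors' loss estimates. The figures and weights below are hypothetical, purely for illustration; in practice, weights would reflect each model's demonstrated skill for the peril and region in question.

```python
# Hypothetical 1-in-100-year loss estimates (USD millions) from three
# catastrophe models -- illustrative numbers, not actual vendor output
model_estimates = {"model_a": 420.0, "model_b": 510.0, "model_c": 465.0}

# Illustrative credibility weights (must sum to 1.0)
weights = {"model_a": 0.40, "model_b": 0.35, "model_c": 0.25}

blended = sum(model_estimates[m] * weights[m] for m in model_estimates)
spread = max(model_estimates.values()) - min(model_estimates.values())

print(f"Blended 100-year loss: {blended:.2f}M")
print(f"Inter-model spread: {spread:.1f}M")
```

The inter-model spread is itself useful information: a wide spread signals model disagreement and warrants deeper investigation before structuring cover around any one number.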
Creating New Ideas
Catastrophe models may be invaluable, but they are not all-encompassing. The vendors tend to focus on what can be known — and quantified — which leaves some parts of (re)insurer portfolios unaddressed. Depending on the nature of these gaps, carrier exposures can remain quite high, diluting substantially the effects of any risk management planning and execution. To alleviate this concern, reinsurance brokers have extended catastrophe model capabilities with a variety of tools to help clients limit their exposures further. Guy Carpenter has developed several model enhancements – such as GC LiveCatTM and GC ForeCatTM, available through the i-aXs platform – to facilitate risk identification and capital management beyond the capabilities of the major catastrophe modeling firms.
A portion of most reinsurers’ portfolios goes unmodeled, of necessity. Some exposures are not addressed by catastrophe models, including automobiles, marine risks, and fine art. The data and tools simply do not exist for these risks, limiting carriers’ abilities to plan and take informed action. Further, some models do not include all the risks associated with a particular event. If regulators change policy terms and conditions after an event, for example, insured losses may be higher than expected.
Other expenses that reinsurers could incur may also be overlooked by the major catastrophe models. Loss adjustment expenses (e.g., the cost to settle claims) and assessments from organizations that carriers are obligated to support (such as fair plans and wind pools) can erode margins beyond expectations, ultimately impairing capital positions. This is also the case for differences in claims practices.
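The omissions described above are often handled by applying loadings on top of the modeled loss. The loading factors below are hypothetical, shown only to illustrate how quickly such items compound; actual loadings would be calibrated to a carrier's book and jurisdiction.

```python
# Modeled gross loss from a catastrophe model, USD millions
modeled_loss = 200.0

# Hypothetical loadings for items the vendor models may omit
lae_load = 0.07          # loss adjustment expenses
assessment_load = 0.03   # wind pool / FAIR plan assessments
non_modeled_load = 0.05  # lines the model does not cover

adjusted_loss = modeled_loss * (1 + lae_load + assessment_load + non_modeled_load)
print(f"Loaded loss estimate: {adjusted_loss:.1f}M")
```

Here a modeled $200M loss becomes $230M once the loadings are applied — a 15 percent gap that, if ignored, erodes margins exactly as the text describes.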
Of course, reinsurance broker relationships in general are designed either to minimize the effects of these costs or account for them in risk management efforts. But, additional modeling tools have been developed to integrate these issues into the entire process of evaluating alternatives, from the earliest stages of exploration to the execution of specific structures. Guy Carpenter’s GC LiveCat and GC ForeCat tools (developed in conjunction with WSI Corporation), for example, open a range of choices to carriers that would not exist using the vendors’ models alone.
GC ForeCat offers a detailed monthly preseason windstorm forecast from December through April, to help property-catastrophe insurers gauge the threats to their portfolios in advance of the Gulf of Mexico hurricane season. Once the season begins, GC LiveCat tracks live hurricanes (with uncertainty estimates), enabling real-time risk management from the time a storm forms through its dissipation. Using technology from WSI, GC LiveCat extends the track and intensity forecast time horizon to 10 days – double the five days offered by the National Hurricane Center. As a result, insurers can take immediate action on their portfolios to minimize losses and protect their capital.
Further, refined analytics around catastrophe events can hone a (re)insurer’s risk management practices, ultimately leading to more cost-effective cover and improved financial performance. Being able to mitigate losses by laying off some catastrophe risk via livecat cover, for example, has an effect on margins, earnings, and market capitalization (where the effects of earnings and margins are magnified). These benefits do not come from the use of third-party catastrophe models alone. Rather, they are secured by extending existing features with tools developed by reinsurance brokers who are closely engaged with the overall risk-transfer market.
Catastrophe models provide a starting point for generating the full set of analytics necessary to optimize the deployment of capital, attain clearly stated financial targets, and realize company growth. Reinsurers thus need to look beyond the core catastrophe models to understand the full nature of the risks they cover, as well as the possible effects on their balance sheets. “Extended” catastrophe models, frankly, extend the financial upside.
Major Changes to Models
The most significant model changes for the remainder of 2009 are expected from AIR and RMS to their earthquake models across various regions, with the focus clearly on the United States, Canada, and Latin America.
The primary catalyst behind the U.S. and Canada earthquake model updates is the publication of the 2008 National Seismic Hazard Mapping Project (NSHMP) results by the U.S. Geological Survey. This major study is updated every five to six years and used as a benchmark for earthquake hazards in the United States. The greatest contributor to the changes in results is the new science behind the 2008 NSHMP, particularly the Next Generation Attenuation ground motion relationships used for shallow earthquakes in the western half of the continent. These new findings are the largest drivers of changes to results in California and the western United States, including locally significant decreases in modeled losses.