December 20th, 2011

Industry Good Practice for Catastrophe Modeling & Solvency II – A Perfect Opportunity for Review: Part II, Operational Principles and Technical Considerations

Posted at 1:00 AM ET

Section 2 - Operational Principles

The message that data quality is vital to catastrophe modeling comes through loud and clear in the opening of Section 2. The old adage “garbage in, garbage out” is much abused, but in catastrophe modeling it has certainly earned its place. Companies should be cognizant of the impact that data manipulation has on the results produced by models, and sensitivity testing should therefore become much more prominent. Data should be tested for accuracy, completeness and appropriateness, and assessed across a wide range of spatial, temporal and thematic qualities. Missing and incorrect data should be accounted for through appropriate “grossing up” techniques, which should be documented in a formal data policy. Much of this will be second nature to firms that have used catastrophe models for any length of time, but we believe there will be a few “root and branch” reviews of systems and data capture processes at companies that have recognized data issues.
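To make the grossing-up and sensitivity-testing ideas concrete, the sketch below fills in missing sums insured using the average of the known values and then varies that assumption to see how far the result moves. This is a minimal illustration in Python, not any vendor’s method; the records, fill factors and figures are all hypothetical.

```python
# A minimal sketch, not any vendor's API: grossing up an exposure set with
# missing sums insured, then sensitivity-testing the assumption. All names
# and figures here are hypothetical.
records = [
    {"location": "A", "sum_insured": 1_200_000},
    {"location": "B", "sum_insured": None},       # missing data
    {"location": "C", "sum_insured": 800_000},
    {"location": "D", "sum_insured": None},       # missing data
]

known = [r["sum_insured"] for r in records if r["sum_insured"] is not None]
avg = sum(known) / len(known)

def grossed_up_total(fill_factor: float) -> float:
    """Fill missing sums insured with a multiple of the known average."""
    return sum(r["sum_insured"] if r["sum_insured"] is not None
               else avg * fill_factor
               for r in records)

# Sensitivity test: how much does the grossed-up total move with the assumption?
for factor in (0.8, 1.0, 1.2):
    print(f"fill factor {factor:.1f}: total insured value {grossed_up_total(factor):,.0f}")
```

The point of the loop is the documented data policy: if the total swings materially across plausible fill factors, the grossing-up assumption itself is a sensitivity worth recording.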

Arguably, the most helpful material in the report is the guidance concerning the validation and selection of models for use in capital modeling. It is not best practice to cherry-pick models based on a commercially desirable result, and choices need to be justifiable and clearly documented. When selecting a particular model for use in estimating exposure, a company should satisfy itself of the adequacy of the model not just for the peril and territory concerned but also for its portfolio as a whole. Catastrophe models are, at least in part, calibrated on industry data and may not reflect the specific risk characteristics of that company. Gaining an understanding of these situations is therefore crucial. The ability of a company to do this will, of course, depend on the support and transparency of the model developer - and again, we refer back to the issue of vendor model documentation.

The need to deal with issues around model change is timely, given the recent and sometimes heated debate in Europe over the new RMS v11 European Windstorm model. Companies should be aware of model revisions in advance and prepare for them as best they can, based on available information. There is nothing to suggest that a new model version should be adopted automatically. If the change is material enough to impact a company’s risk profile, the validation process for the new model should be completed in the normal course of business. Should the new version fail that validation, there is nothing to stop a (re)insurer from continuing to use the previous version indefinitely - as long as it can be justified. Vendor model companies and reinsurance brokers may therefore have to support superseded versions for much longer than would previously have been the case.

Companies running models need to understand the various options and settings available to them in terms of the effect each has on model output. Any recommendations made by vendor model companies should be considered, but they do not necessarily need to be followed if there is good evidence to support a divergence of opinion.

Companies face significant challenges in validating vendor models because the models are proprietary. Where possible, they should satisfy themselves of the validation work undertaken by the vendor companies and be aware of any independent validation. The report encourages vendor companies to provide information to licensees to facilitate this process but acknowledges that they are not obligated to do so. Access for non-licensees is likely to be even more limited. These frustrations aside, there are a number of techniques companies can use to support the validation process (the first is sketched in code after this list), including:

  • Comparing results against industry data, or against the company’s own experience, at low return periods.
  • Assessing basic hazard maps and performance against historical events.
  • Comparing the results from different models and sense-checking year-on-year movements.
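As a minimal illustration of the first technique, the Python sketch below lines up a model’s losses at low return periods against the empirical return periods implied by a company’s own annual loss history. The loss figures and model output are invented for illustration; the plotting-position logic is the only point being made.

```python
# A minimal sketch of the first technique above: comparing modeled losses
# against a company's own experience at low return periods. The loss
# figures are invented for illustration.
n_years = 25
observed_annual_losses = sorted(
    [42, 7, 15, 3, 88, 11, 25, 5, 60, 9, 18, 2, 33, 6, 14,
     4, 70, 12, 22, 8, 48, 10, 16, 1, 28],  # in USD millions
    reverse=True,
)

# Hypothetical model output at the same return periods (USD millions).
modeled_loss_at_rp = {5: 55.0, 10: 75.0, 25: 95.0}

for rp, modeled in sorted(modeled_loss_at_rp.items()):
    # Empirical return period of the k-th largest annual loss is ~ (n + 1) / k.
    k = round((n_years + 1) / rp)
    empirical = observed_annual_losses[k - 1]
    print(f"{rp:>3}-yr RP: model {modeled:6.1f}m vs experience {empirical:6.1f}m "
          f"(ratio {modeled / empirical:.2f})")
```

Large or systematic ratios at the return periods where a company actually has data are exactly the kind of evidence the report expects a validation file to contain.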

Companies should always try to identify any model biases or un-modeled perils and then adopt a suitable approach to reflect the presence of these phenomena.
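One common such approach is an explicit loading. The sketch below applies a flat percentage load to a modeled exceedance probability (EP) curve for a peril the model excludes; the 10 percent figure is purely illustrative and, as the report stresses, any such adjustment would need to be justified and documented.

```python
# A minimal sketch of one "suitable approach": applying an explicit load to
# modeled losses for a known un-modeled peril. The 10% load is purely
# illustrative; in practice it would need justification and documentation.
UNMODELED_PERIL_LOAD = 0.10  # e.g. for a secondary peril the model excludes

modeled_ep_curve = {100: 250.0, 200: 310.0, 500: 400.0}  # return period -> loss (USD m)
loaded_ep_curve = {rp: loss * (1 + UNMODELED_PERIL_LOAD)
                   for rp, loss in modeled_ep_curve.items()}
print(loaded_ep_curve)
```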

Section 3 - Technical Considerations

Companies have been pondering the merits of single-model versus multi-model approaches to assessing catastrophe risk. Is it more sensible to gain a thorough understanding of just one model, perhaps employing some tailoring adjustments, or to run all the available models and form a bespoke view of risk, possibly based on a blended approach? The first approach leaves a company open to model revisions having a potentially dramatic effect on its required capital position; the second, while providing a more stable view, risks a disconnect between the internal view of catastrophe risk and that used by reinsurers for pricing catastrophe protection. There are many variations on these positions, and the report provides some practical guidance on choosing an appropriate course of action, as well as some examples of typical approaches.
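As a simple illustration of the blended option, the sketch below takes a credibility-weighted average of two models’ EP curves at common return periods. The models, weights and losses are all hypothetical; the report’s point is that whatever weights are chosen must be justifiable and documented, not that any particular weighting is right.

```python
# A minimal sketch of a blended multi-model view: a credibility-weighted
# average of two models' EP curves at common return periods. Model names,
# weights and losses are all hypothetical.
model_a = {100: 240.0, 200: 305.0, 500: 390.0}  # return period -> loss (USD m)
model_b = {100: 280.0, 200: 330.0, 500: 450.0}
weights = {"a": 0.6, "b": 0.4}  # the choice of weights must be justified

blended = {rp: weights["a"] * model_a[rp] + weights["b"] * model_b[rp]
           for rp in model_a}
for rp, loss in sorted(blended.items()):
    print(f"{rp}-yr blended loss: {loss:,.1f}m")
```

Blending at fixed return periods, as here, is only one of several approaches; blending at the event or year-loss-table level is another, and the same documentation requirements apply to either.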

The subject of uncertainty within catastrophe models deserves a dedicated paper but is covered in summary in the final section of the report. The key message here is that focus should not be devoted to a single point on an EP curve, as this encourages optimization of portfolios around what may be a weakness in the model. Much of this section is devoted to drawing the reader’s attention to the different types and sources of uncertainty that may be present within catastrophe models. A company should show awareness of these uncertainties and any potential biases and ensure clear communication of the implications to decision makers in the organization. There is limited advice here on how to practically manage modeling uncertainty and really embed this mindset within an enterprise risk management (ERM) framework. There is often a temptation to try to ‘model’ model uncertainty, but this risks compounding the very problem it is meant to solve. Judicious use of multiple models, visualizing estimated uncertainty through error bars and using different metrics, such as tail value at risk (TVaR) and Lloyd’s realistic disaster scenarios (RDS), can all be informative exercises.
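As a final illustration, the sketch below computes both the 1-in-200 loss (a single point on the EP curve) and the corresponding TVaR (the average loss at and beyond that point) from a set of simulated annual losses, showing how little extra work it takes to look past one point into the tail. The lognormal simulation is a stand-in for real model output.

```python
# A minimal sketch of looking beyond a single EP point: computing both the
# 1-in-200 VaR and the TVaR (average loss beyond that point) from a
# simulated set of annual losses. The simulation is a stand-in for a
# real model's year loss table.
import random

random.seed(1)
annual_losses = sorted(random.lognormvariate(2.0, 1.5) for _ in range(10_000))

def var(losses, return_period):
    """Loss at the given return period (a single point on the EP curve)."""
    return losses[int(len(losses) * (1 - 1 / return_period))]

def tvar(losses, return_period):
    """Average of all losses at or beyond the return-period point."""
    tail = losses[int(len(losses) * (1 - 1 / return_period)):]
    return sum(tail) / len(tail)

print(f"1-in-200 VaR:  {var(annual_losses, 200):8.1f}")
print(f"1-in-200 TVaR: {tvar(annual_losses, 200):8.1f}")
```

A material gap between the two numbers is itself useful information for decision makers: it says the shape of the tail, not just one point on it, is driving the risk.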
