June 6th, 2016

Reserving and Capital Setting: Sizing the Problem, Part II: Quantifying Emerging Risks; Models

Posted at 1:00 AM ET

Once the risks have been identified and ranked, the next step is to quantify their likely impact on the financial results of the firm. The first and most obvious question is what quantification techniques are available for each risk on the list. This will depend on the availability of relevant data and commercially produced models.

Where claims or external market data are available, the timespan of the datasets is likely to be limited. In this case, standard actuarial reserving techniques can exacerbate the problem, as there may be no tail with which to complete a chain ladder-type exercise. Looking carefully for any calendar-year trends in the frequency or severity of claims will pay dividends, as will talking to claims professionals about the likely uncertainty surrounding individual case estimates and obtaining their views on duration to settlement.
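
To make that data limitation concrete, here is a minimal sketch of a chain ladder run on a short-history dataset. The three-origin-year triangle and the judgmental 1.05 tail factor are illustrative assumptions, not real figures: the point is that the observed link factors stop well short of ultimate, so the tail has to be supplied by judgment or an external benchmark rather than by the data.

```
# Hypothetical cumulative claims by origin year (rows) and
# development year (columns). For an emerging risk only a few
# origin years exist, so the triangle is short in both directions.
triangle = [
    [100.0, 160.0, 184.0],  # oldest origin year: three development points
    [110.0, 170.0],         # one fewer point
    [120.0],                # latest origin year: first point only
]

# Volume-weighted age-to-age (link) factors from the observed data.
factors = []
for dev in range(len(triangle[0]) - 1):
    num = sum(row[dev + 1] for row in triangle if len(row) > dev + 1)
    den = sum(row[dev] for row in triangle if len(row) > dev + 1)
    factors.append(num / den)

print("Observed link factors:", [round(f, 3) for f in factors])

# Beyond the last observed development year the data are silent:
# an assumed, judgmental tail factor is needed to reach ultimate.
tail = 1.05
ultimate = triangle[-1][-1]
for f in factors[len(triangle[-1]) - 1:]:
    ultimate *= f
ultimate *= tail
print("Projected ultimate for latest origin year:", round(ultimate, 1))
```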

For some risks, models may be commercially available. So what do these models typically offer? Well, in short, they might provide the sort of information expected from a catastrophe-type model, such as loss amounts with associated return periods and, in some cases, a view on the correlation between lines of business. This helps with best estimate reserving, by using the Average Annual Loss (AAL) as an initial loss pick, and with capital modeling, by using a statistical benchmark such as the 1-in-200 net of reinsurance Value-at-Risk (VaR) amount. What they do not provide in most cases is an estimate of the likely emergence pattern of that risk. This matters for casualty lines, and so it is currently a large missing piece of what is a complex jigsaw puzzle.
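
A minimal sketch of how this model output feeds both uses is below: the mean of the simulated annual losses gives the AAL for an initial loss pick, and the 99.5th percentile gives the 1-in-200 VaR for a capital benchmark. The lognormal stand-in for a vendor model's year loss table and the simulation count are illustrative assumptions, not taken from any real model.

```
import random
import statistics

random.seed(42)
SIMULATIONS = 100_000

# Stand-in for a vendor model's year loss table: one net annual
# loss per simulated year, drawn from a lognormal for illustration.
annual_losses = [random.lognormvariate(2.0, 1.2) for _ in range(SIMULATIONS)]

# Average Annual Loss: the starting point for a best estimate loss pick.
aal = statistics.fmean(annual_losses)

# 1-in-200 VaR: the 99.5th percentile of the annual loss distribution.
var_1_in_200 = sorted(annual_losses)[int(0.995 * SIMULATIONS) - 1]

print(f"AAL (initial loss pick): {aal:.2f}")
print(f"1-in-200 VaR (capital benchmark): {var_1_in_200:.2f}")
```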

While these models are helpful, care, as ever, needs to be taken before using them just “off the shelf.” They are all fairly new in their construction, and so one must ask:

  • What is the data source?
  • How has the data source been used in model construction?
  • What are the key model assumptions?
  • Is expert judgment used in the model methodology or parameterization?
  • How are dependencies introduced and parameterized?
  • Is my data good enough to produce reliable results?
  • How sensitive is model output to the data input and changes in the core assumptions?

Fortunately, these should be familiar questions, as they need to be addressed whenever an external model is used. However, as these models are in their relative infancy, much more care is needed in this validation exercise. It is also likely that new data, and therefore new model versions, will follow quickly in the footsteps of the first models as information emerges. This means results can be volatile as models evolve, so a robust model change policy will be required.
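
As one minimal illustration of the sensitivity question in the list above, the sketch below reruns a stand-in model with a core assumption shocked up and down and compares the resulting 1-in-200 VaR. A real validation exercise would repeat this for every key assumption and data input; the lognormal stand-in model and the 10 percent volatility shocks here are illustrative assumptions only.

```
import random


def model_var(sigma: float, n: int = 100_000, seed: int = 1) -> float:
    """Stand-in model: 99.5th percentile of simulated annual losses."""
    rng = random.Random(seed)
    losses = sorted(rng.lognormvariate(2.0, sigma) for _ in range(n))
    return losses[int(0.995 * n) - 1]


base = model_var(sigma=1.2)
for shock in (-0.10, 0.10):
    stressed = model_var(sigma=1.2 * (1 + shock))
    print(f"sigma {shock:+.0%}: VaR moves {stressed / base - 1:+.1%}")
```

Reusing the same seed across runs isolates the effect of the assumption change from simulation noise, so the reported movement in VaR reflects the shock alone.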

Link to Part I>>

Link to Part III>>



