March 3rd, 2009

Risk Modeling Part II: Data

Posted at 1:00 AM ET

Ryan Ogaard, Global Head of Instrat®

Often, the situations or events that define unacceptable risk have never occurred, or have occurred in far different environments than exist today. While models attempt to put all relevant information about a particular risk into a single picture, it is well known that they will never fully mimic human behavior — but that does not lessen their value. Models can be very informative if they are put into the proper context and used to produce knowledge rather than definitive answers. To understand models, decision makers must understand the information that created and feeds them — the data.

There are two primary uses for data in the context of risk modeling. First, it is a historical record of events that is used to understand risk patterns of the future. How many defaults have happened in the past? How many insurance claims? What sort of entity had a higher likelihood of loss? Second, data represents the current exposure base. How many loans are outstanding? How many insurance policies have been written? What are the primary qualities of these risks? Data from the past points to probability; current data, to immediate loss potential.
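The two uses fit together in even the simplest calculation: historical records yield a frequency, and the current exposure base scales that frequency into an expected loss count. The sketch below uses entirely made-up numbers purely to illustrate the arithmetic; it is not a modeling recommendation.

```python
# Illustrative only: hypothetical claim counts and policy counts.
historical_claims = [3, 1, 4, 2, 5]   # claims observed per year, past five years
policies_then = 1000                  # average in-force policies over that period
policies_now = 1500                   # current exposure base

# Historical data points to probability: claims per policy-year.
freq_per_policy = sum(historical_claims) / (len(historical_claims) * policies_then)

# Current data points to immediate loss potential: expected claims today.
expected_claims_now = freq_per_policy * policies_now
print(expected_claims_now)  # 4.5
```

Even this toy example shows why both data sets matter: a pristine historical record is of little use if the current exposure base is misstated, and vice versa.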

These related uses of data, as a basis for model building and as a current risk profile, are critical to effective model-based decision making. The essential starting point, however, is the current risk profile, and it is often overlooked. Before applying advanced mathematics and sophisticated simulation techniques, some relatively simple data-mining exercises can paint a useful picture of the risk landscape. That landscape might be defined by exposure to a class of risk, location, type of business, or changes in the profile over time, in varied economic conditions, or in relation to demographic shifts; the list is endless and varies for each company. Such analysis gives decision makers a context in which to judge more complex modeling results and can point to further avenues of investigation. A fundamental analysis of exposure data is common sense, but it has not been common practice.
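A "relatively simple data-mining exercise" of the kind described above can be as plain as aggregating exposure by a couple of dimensions. The sketch below, using the pandas library and a hypothetical policy-level table (the column names and figures are invented for illustration), shows one such view of a risk landscape:

```python
import pandas as pd

# Hypothetical policy-level exposure records; columns and values are illustrative.
exposures = pd.DataFrame({
    "region": ["Gulf Coast", "Gulf Coast", "Northeast", "Northeast", "Midwest"],
    "line_of_business": ["property", "property", "property", "marine", "property"],
    "total_insured_value": [120.0, 80.0, 200.0, 50.0, 90.0],  # USD millions
})

# A simple risk-landscape view: total insured value by region and line of business,
# largest concentrations first.
profile = (
    exposures
    .groupby(["region", "line_of_business"])["total_insured_value"]
    .sum()
    .reset_index()
    .sort_values("total_insured_value", ascending=False)
)
print(profile)
```

A few lines like these, run over real exposure data, immediately surface concentrations (here, coastal property) that give context to any more sophisticated modeling that follows.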

Recent events, from collateralized debt obligation (CDO) losses to hurricane losses, have highlighted the power and importance of understanding exposure data. As a result, a new focus is emerging on this aspect of risk management. Practices that seem simple and obvious, however, can be difficult to implement. One frequent stumbling block is the overwhelming amount of data that comprises a risk profile. Compiling all this information into a coherent format, analyzing terabyte-size databases, and creating usable output from analysis can consume entire departments and require highly specialized skill sets. Fortunately, the evolution of data-handling technology has been robust, driven by online innovations and the need to do massive, ultra-fast lookups, processing, and reporting.

One example of emerging risk/data technology is Guy Carpenter’s i-aXs® platform, which melds business information software with a GIS system and supercomputers to create a specialized risk-assessment environment capable of quickly analyzing and reporting on a massive risk-profile database. Such a platform can leverage the skills of analysts, make laborious inquiries into risk exposure more practical, and make information more accessible to decision makers.

The sheer volume of data is not the only challenge. Once data is parsed and metrics are developed, flaws and gaps in the risk profile often become glaringly apparent. Indeed, a thorough risk assessment seeks to discover such discrepancies. Fortunately, a wealth of new information sources has come into existence, and databases identifying nearly every natural and man-made object in the world are for sale from specialist firms. This information can be used to augment a data set and draw a more complete picture of risk. Specialty catastrophe-modeling firms that are widely used in the P&C space, such as Risk Management Solutions, have recently developed an entirely new practice focused on data-quality assessment and enhancement. Once again, the skill and technology to harness third-party databases can be a roadblock, but platforms such as i-aXs are built to integrate data from various sources, yield a more complete and accurate data set, and create a more robust foundation for further risk assessment.
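The augmentation step described above, filling gaps in an internal risk profile from an external database, often amounts to a keyed join followed by a gap-fill. The sketch below is a minimal illustration using pandas; the identifiers, column names, and the third-party table are all hypothetical, and real data-quality work involves far more validation than this:

```python
import pandas as pd

# Internal exposure records with a gap: construction type missing for one risk.
exposures = pd.DataFrame({
    "property_id": ["A1", "A2", "A3"],
    "construction": ["masonry", None, "frame"],
})

# Hypothetical third-party property database keyed on the same identifier.
third_party = pd.DataFrame({
    "property_id": ["A1", "A2", "A3"],
    "construction": ["masonry", "steel", "frame"],
})

# Join on the shared key, then fill internal gaps from the external source,
# preferring the internal value where one exists.
merged = exposures.merge(third_party, on="property_id", suffixes=("", "_ext"))
merged["construction"] = merged["construction"].fillna(merged["construction_ext"])
merged = merged.drop(columns="construction_ext")
print(merged)
```

The design choice worth noting is the precedence rule: internal values are kept and external ones used only to fill gaps, since blindly overwriting a curated risk profile with purchased data can introduce as many discrepancies as it resolves.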

Originally published in MMC Viewpoint

