Claims and the Cat Model

Recent events like the Japan earthquake have provided a wealth of information for catastrophe modeling as its importance to claims grows. We take a deeper dive with EQECAT's Bill Keogh.

June 02, 2011
Generally, when people think of catastrophe modeling, they think of the risk side. But catastrophe modeling can be used both for planning and for real-time help immediately preceding an event and after it occurs. By compiling data on insured property size, construction, and location, along with the potential catastrophic events that could affect it, and then running the many millions of data combinations through a computerized model, modelers can produce a set of scenarios for possible losses. Those analytic results, if based on good data and processed through an effective model, allow an insurer to adequately prepare its claims response and alternatives in advance of the event.
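
EQECAT's actual models are proprietary and far more sophisticated than this, but the basic exposure-times-event-set idea can be pictured in a few lines. In the Python sketch below, the property fields, damage ratios, and dollar values are invented placeholders for illustration, not real model inputs or output.

```python
# Toy illustration of running an insured portfolio against a set of event
# scenarios to get a range of possible losses. All numbers are hypothetical;
# real catastrophe models use far richer exposure data and engineering-based
# vulnerability curves.
from dataclasses import dataclass

@dataclass
class Property:
    location: str          # e.g., a ZIP code
    construction: str      # e.g., "wood_frame", "steel_frame"
    insured_value: float   # total insured value in dollars

@dataclass
class EventScenario:
    name: str
    damage_ratios: dict    # hypothetical mean damage ratio by (location, construction)

def scenario_loss(portfolio, scenario):
    """Estimate the insured loss a single event scenario would produce."""
    total = 0.0
    for prop in portfolio:
        ratio = scenario.damage_ratios.get((prop.location, prop.construction), 0.0)
        total += prop.insured_value * ratio
    return total

portfolio = [
    Property("29401", "wood_frame", 350_000),
    Property("29401", "steel_frame", 2_000_000),
    Property("29403", "wood_frame", 275_000),
]

scenarios = [
    EventScenario("landfall_north", {("29401", "wood_frame"): 0.30,
                                     ("29401", "steel_frame"): 0.05}),
    EventScenario("landfall_south", {("29403", "wood_frame"): 0.45,
                                     ("29401", "wood_frame"): 0.10}),
]

# One loss estimate per scenario gives a claims department a range of
# outcomes to plan around, rather than a single guess.
for s in scenarios:
    print(f"{s.name}: estimated loss ${scenario_loss(portfolio, s):,.0f}")
```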

This is particularly applicable in storm events. Bill Keogh, president of catastrophe modeling firm EQECAT, gives an example: "Let's say you have a big storm that's out at sea. What we do is start telling our clients, at this stage, in this event which is still far offshore, here are a hundred events that could be potentially what this looks like when it makes landfall. That way they can start planning."

The process of staging resources can begin well before a predicted event if enough is known in advance. As the event progresses, scenarios are refined, and instead of a hundred events, projections can be whittled down to five or six. "Each of those events that we have in our model will be close to what the actual event looks like, and each has this rich amount of data behind it—which is all of the properties that might be affected by the storm and how damaged they might be. It allows the carrier to prioritize claims resources," says Keogh.
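
The narrowing Keogh describes can be thought of as a filter over a pre-simulated event catalogue: as the forecast track tightens, only the events still consistent with it remain on the shortlist. The sketch below is a rough illustration under that assumption; the event names, coordinates, loss figures, and distance threshold are all hypothetical.

```python
# Rough sketch: keep only pre-simulated events whose landfall point lies
# near the current forecast. As the forecast sharpens, a smaller radius
# leaves fewer candidate events.
import math

def distance_km(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance (haversine formula)."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Pre-simulated events: (name, landfall_lat, landfall_lon, estimated_loss)
event_set = [
    ("event_017", 29.9, -90.1, 4.2e9),
    ("event_042", 30.3, -89.6, 2.8e9),
    ("event_105", 27.8, -82.6, 6.1e9),
]

def plausible_events(event_set, forecast_lat, forecast_lon, radius_km):
    """Keep events whose simulated landfall lies near the latest forecast."""
    return [e for e in event_set
            if distance_km(e[1], e[2], forecast_lat, forecast_lon) <= radius_km]

# Early on, a wide radius might match a hundred events; closer to landfall,
# the same filter with a tighter radius leaves only a handful.
shortlist = plausible_events(event_set, 30.0, -90.0, radius_km=100)
for name, lat, lon, loss in shortlist:
    print(f"{name}: projected loss ${loss:,.0f}")
```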

A good model allows an insurer to deploy the right types of adjusters to a scene, to pre-negotiate with suppliers and backups that might be needed, to set up rollover calls and CSRs in remote locations, and so on. A carrier might still find itself short-staffed when it comes to commercial vs. homeowner adjusters or building vs. auto specialists, but in general, post-event glitches should be minimized. Additionally, a model can easily account for spikes in demand for supplies, restoration services, contractors, and the like, according to Keogh. That kind of analysis could be a ready benchmark for adjusters trying to quickly process an unusually high number of otherwise standard claims.

Tailoring the Fit
The traditional way of integrating catastrophe modeling into a claims operation has been by focusing on historical data, but there would seem to be a paucity of data specific to regional events. A tsunami in Sendai isn't the same as a tsunami in Miami, and an earthquake in Chile isn't the same as an earthquake in Southern California. So how can claims operations get some regional specificity for events that might happen but for which there is little or no historical data, such as a major rupture along the New Madrid fault or a 9.0 quake along the San Andreas fault that ripples into multiple other disasters, including nuclear contamination, fires and mudslides?

The answer lies in at least comparing apples to apples and oranges to oranges. There are lessons that can be shared between regions that have similarities; it's partly a matter of clarifying exactly what those likenesses are. For instance, the seismic activity in Chile and its fault system is very similar to what you see in California, says Keogh. "We're always analyzing earthquakes worldwide, whether they're insured events or not, because—to the extent that you have similar building stock, similar building codes, soil types, ground motion, etc.—you can infer from one region to another. We do that because, otherwise, you're just relying on engineering principles."

Claims information from anywhere is valuable in building the scenarios. It doesn't mean the buildings in Chile are exactly the same as in California, but where there is common ground, it informs the models and allows for some very important generalization across regions. There will always be some uncertainty, but that can be captured and expressed in the model. As Keogh puts it, "We don't eliminate the uncertainty. We allow people to understand that so, when claims adjusters are out in the field and looking at model results, there will always be some disparity between what the model says and what you actually see on the ground. That's not a mistake; it's just a reflection of the uncertainty of the risk that we're analyzing."
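
One way to picture "capturing and expressing" uncertainty, rather than hiding it, is to carry a distribution of damage outcomes instead of a single number and report a range alongside the mean. The sketch below does that with a hypothetical triangular distribution; it illustrates the concept only and is not how EQECAT's model actually represents uncertainty.

```python
# Instead of one damage number, carry a distribution and report a range,
# so a gap between the mean estimate and what an adjuster sees on the
# ground is expected rather than treated as an error. Parameters here are
# hypothetical placeholders, not calibrated vulnerability data.
import random

random.seed(7)

def simulate_damage_ratio(mean, spread, trials=10_000):
    """Sample hypothetical damage ratios from a triangular distribution."""
    low = max(0.0, mean - spread)
    high = min(1.0, mean + spread)
    return [random.triangular(low, high, mean) for _ in range(trials)]

samples = sorted(simulate_damage_ratio(mean=0.25, spread=0.20))
p10 = samples[int(0.10 * len(samples))]
p90 = samples[int(0.90 * len(samples))]

print(f"mean damage ratio: {sum(samples) / len(samples):.2f}")
print(f"80% of simulated outcomes fall between {p10:.2f} and {p90:.2f}")
```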

In some cases, like the March 11 disaster in Japan, there is a confluence of events that exceeds the parameters of scientific modeling. The consensus in the scientific community was that an earthquake there wouldn't top magnitude 8.4. Once an event like that happens, not only modelers but the entire scientific community begins looking at the data and trying to incorporate the lode of information into assessments and models for the future. In this case, the data are still coming in. Aftershocks from this quake will probably extend out about 12 months. In terms of models, the aftershocks are folded in as part of the overall simulation, but they are considered separate events, says Keogh.

One key to getting good results from a model is supplying good carrier data. "It would be a very good idea for claims departments to review how the insured properties are being described before they get modeled and to make sure that the construction and occupancy descriptions are accurate. One of the surprises that happens after an event is that companies learn that they might have been seriously miscoding their insured properties," Keogh warns. Those kinds of mistakes include things like listing a metal-frame building as steel-frame—the latter holds up quite well under certain stresses that are too much for the former.
"If you are thinking that the claims department is going to use the model right after an event to understand where the major losses are, and the inputs are wrong...well, you could end up with some very bad signals," says Keogh. He gives the example of floating casinos along the Mississippi River in the Katrina aftermath. In some cases, those were coded as hotels—as though they were buildings—in the model. Pretty obviously, they had greater vulnerability in real life than they had in the model.

Future Designs
Right now, catastrophe modeling is somewhat generic; models are based on overarching incidents that have broad applicability to regions, risks, structures or other general concerns and factors. In the future, however, there is the potential for informing cat models for an individual insurance company based on its claims experience, according to Keogh. "It might be possible in the future to actually leverage the carrier's claims and exposure data to inform how the catastrophe model performs so it would be much more about having a model that reflects their unique business. In that case, the claims side of the business would play a very important part," he says.

He thinks the 20-year-old modeling industry is still in its early days. It has evolved remarkably over that time and will continue to develop, with increasing involvement on the claims side. With that, carriers and adjusters can expect to see more professional opportunities for claims personnel in the modeling niche. That may mean that insurers should start looking at training opportunities or cooperative ventures with cat modelers, or that carriers should start wooing some experts from the modeling side of the industry to build a stable of expertise in-house.

Bill Keogh has served as president of EQECAT, Inc. since November 2010 and brings nearly 30 years of experience in the insurance and reinsurance industry with a focus on the property and casualty market. He was a founding board member of the International Society of Catastrophe Managers and is a frequent guest speaker at industry events and university programs.

EQECAT, Inc. provides state-of-the-art products and services to the global property and casualty insurance, reinsurance and financial markets. EQECAT is a technical leader and innovator of catastrophe risk management models that quantify exposure to a range of natural and man-made catastrophic risks.
About the Author
Bevrlee J. Lips

Bevrlee J. Lips was managing editor of Claims Management magazine (now CLM Magazine) from January 2012 until March 2017.  blips@claimsadvisor.com
