Mining Big Data to Connect the Dots

How claims managers can effectively dig up and refine new, potentially precious data resources.

July 16, 2013

Big data is all the rage in the business world these days, and insurance—which lives and breathes numbers—has tentatively hopped on the bandwagon. But what exactly is “big” about the new sources of data in an insurance context, and how is it different from all of the other information that claims managers have traditionally tapped into during their investigations?

Let’s take a moment to separate the hype from the reality. Is there really a mountain of “gold” information in big data mines? And if so, how might claims managers most effectively dig up and refine these new, potentially precious data resources?

First of all, there is a lot more data out there for insurers to slice and dice—both structured (that is, in very specific and easily accessible formats) and unstructured (meaning raw data that must be sifted through and refined to have any real value).

Managing data volume and velocity can be problematic, like trying to quench your thirst by drinking out of a fire hose. Indeed, some carriers are having trouble correlating all the internal data they already generate, so it might be wise for insurers to make sure they have their own processing houses in order before flooding the pipeline with additional material.

But big data management should be achievable for insurers. After all, carriers have a lot more capacity at their disposal. Memory is cheaper. Processing is faster. Analytical programs are increasingly sophisticated. And outside support is available for those who don’t want to reinvent the wheel or build a new data infrastructure from scratch.

But remember that the advantage of big data is not simply having more information. Indeed, gathering data should not be an end in itself—insurers also need to be able to harness these new data streams to spur actionable outcomes, like building a dam to generate electricity that powers cities.

That could be easier said than done with big data, since carriers may have to pan through a lot of raw materials to find the few gold nuggets that could significantly impact a particular claims investigation.

Insurers have been making their living off structured data for a long time. Mining unstructured sources (which often generate big data), however, might prove to be more difficult. Consider that text from various sources can be analyzed using programs that comb through an adjuster’s notes or email exchanges. There are also ways to leverage audio recordings (such as claims conversations with call centers), as well as a multitude of images from loss sites to sift through (including not only those taken on a claimant’s smartphone, but perhaps those captured by the growing number of surveillance cameras in public spaces as well).
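To make this concrete, here is a minimal sketch (in Python) of the kind of keyword screening such programs might run over an adjuster’s notes. The trigger phrases, weights, and sample note are hypothetical illustrations, not an actual vendor product:

```python
import re

# Hypothetical trigger phrases and weights (illustrative only; real
# text-analytics engines use far richer language models).
TRIGGER_TERMS = {
    "no police report": 3,
    "cash settlement": 2,
    "prior claim": 2,
    "inconsistent": 1,
}

def score_note(note: str) -> int:
    """Return a crude suspicion score by counting weighted trigger phrases."""
    text = note.lower()
    return sum(weight * len(re.findall(re.escape(term), text))
               for term, weight in TRIGGER_TERMS.items())

note = ("No police report was filed. Claimant requested a cash "
        "settlement; statement is inconsistent with the witness account.")
print(score_note(note))  # -> 6 with the weights above
```

A note scoring above some threshold would simply be routed to a human investigator; the point is triage, not a verdict.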

There is also much more granular data being generated (for example, tracking driver behavior and perhaps other types of insurable activities in the not-so-distant future via telematic devices). And, of course, there is a multitude of social media to explore (through which claimants and witnesses might share information that either confirms—or contradicts—their claim statements).
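As a simple illustration of what “granular” means here, the sketch below counts harsh-braking events from per-second speed samples. The threshold and trip data are assumptions for demonstration; real telematics scoring is far more sophisticated:

```python
# Count sudden decelerations from speed readings taken once per second.
def harsh_braking_events(speeds_kmh, threshold_kmh_per_s=12.0):
    """Count drops in speed steeper than the threshold between samples."""
    return sum(1 for prev, curr in zip(speeds_kmh, speeds_kmh[1:])
               if prev - curr > threshold_kmh_per_s)

trip = [62, 60, 45, 30, 29, 28, 50, 52, 35, 34]  # one reading per second
print(harsh_braking_events(trip))  # -> 3 sudden decelerations
```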

This is all well and good. Insurers can use all the help they can get to spot and properly investigate a questionable claim. But at the same time, big data usage raises its own set of challenges.

For one, insurers should ask themselves if they are merely becoming data-heavy (meaning they just have a lot of new information but may not be quite sure what to make of it all) or, preferably, if they are growing data-rich (meaning they have new information from which they can generate deeper correlations that raise red flags about potential fraud).

In their pursuit of big data, however, insurers should be careful about reaching a point of diminishing returns if they data-mine so widely and deeply that they end up drowning in information but starving for actionable insights.

Cost is another issue to consider—in terms of money, personnel, and time. If insurers are not cautious, they might end up spending more to acquire and refine big data than they save in claims expenditures.

Ease of doing business is yet another factor to contemplate. Even with data that appears to be easily accessible—by installing telematics devices in cars, for example—the question is how insurers can efficiently process and productively employ all of the information such new sources generate while still producing a positive return on their investment.

Meanwhile, do insurers have the talent in place to know where to look for big data as well as how to refine and leverage it once such sources are tapped? “Data scientist” may be the next big profession in our increasingly data-intensive industry, and we may also see the emergence of a chief data officer to coordinate the requisite mining and refinement efforts.

Last but not least, in the hot pursuit of big data, is enough attention being paid to regulatory and privacy-related concerns? With more people living their personal and professional lives online, a ton of useful data is out there. Yet accessing and leveraging it without violating any rules or unduly alarming privacy-sensitive customers is a significant challenge for insurers to consider.

This is likely to be an ongoing balancing act—weighing the dollar return on big data versus the potential reputational costs in terms of negative public reaction. Sixty-four years after George Orwell introduced the concept of Big Brother into our cultural consciousness with the publication of his classic novel 1984, people are still wary of government—or insurers—looking over their shoulders or prying into their private lives.

Privacy is a relative term these days with the proliferation of social media, but that doesn’t mean some people won’t resent having this fleeting luxury invaded—even if there might be an economic justification or payoff in terms of less fraud and a potential price discount.

These are difficult questions, but in the end, the impact big data could have on facilitating more efficient and effective claims management operations makes such information worth fighting for, in my humble opinion.

For one, big data can provide a deeper dimension to a claims investigation. By correlating a wider range of factors, insurers can produce more dynamic claims profiles. In theory, big data should make it easier to expose fraud by helping carriers connect the dots—that is, by exposing connectivity beyond just one suspicious claim.

For example, analysis of big data sources could help insurers confirm whether the various parties involved in a claim—policyholder, injured party, service provider, etc.—are related in any way (not just by blood but by past associations in similar incidents reported elsewhere). Big data can help carriers take a closer look at relationships among witnesses, salvage/repair/renovation workers, legal representatives, medical care providers, and diagnostic facilities for evidence of a conspiracy.

This is more commonly known as link analysis—in effect, who’s who and who knows whom. You can cross-check multiple, disparate data sources to conduct non-obvious relationship analysis across several degrees of separation. When claims are analyzed in such a manner, fraud patterns might emerge that otherwise would be undetectable.
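Here is a minimal sketch of that idea; the claims and parties below are invented for illustration, and real link analysis must first resolve identities across many disparate sources before building such a graph:

```python
from collections import defaultdict
from itertools import combinations

# Invented claims, each mapping roles to the parties involved.
claims = {
    "CLM-001": {"policyholder": "A. Smith", "body shop": "Ace Auto",
                "witness": "J. Doe"},
    "CLM-002": {"policyholder": "B. Jones", "body shop": "Ace Auto",
                "attorney": "K. Lee"},
    "CLM-003": {"policyholder": "C. Brown", "witness": "J. Doe",
                "attorney": "K. Lee"},
}

# Index each party by the claims they appear in (who's who, who knows whom).
claims_by_party = defaultdict(set)
for claim_id, parties in claims.items():
    for role, name in parties.items():
        claims_by_party[name].add(claim_id)

# Flag claim pairs connected through a shared party -- links invisible to
# an adjuster looking at any single file in isolation.
for name, linked in claims_by_party.items():
    if len(linked) > 1:
        for first, second in combinations(sorted(linked), 2):
            print(f"{first} <-> {second} share party: {name}")
```

Run on the sample data, this surfaces the same body shop, witness, and attorney recurring across three superficially unrelated claims—exactly the kind of pattern no single adjuster would see.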

Having big data provide such a wide-angle view of a claim is important because organized fraud can thrive by capitalizing on the isolation of adjusters (whether working for the same carrier or for different insurers). In some cases, individual claims that appear perfectly legitimate on their own may begin to look suspicious if patterns emerge among the same players involved in unrelated incidents.

In this way, big data can be tapped to develop a more holistic view of a claim. This could be very useful because investigating claims is often akin to observing an iceberg—there’s more going on beneath the surface than what’s readily visible to the adjuster on the ground. Analysis of big data can pick up on complex connections, reveal previously unrecognized trends, and expose repeat players supporting a series of fraudulent claims across carriers and even state lines.

Before big data became such a chic topic of conversation in the industry, insurers already had been growing increasingly reliant on advanced analytics to fuel predictive models that raise red flags about potential fraud. With big data sources expanding the scope of available information exponentially, it’s a bit like putting predictive models on steroids.
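For a flavor of what such a model looks like in miniature, here is a sketch using scikit-learn’s logistic regression. The features, training data, and labels are invented for illustration; production models draw on far richer inputs:

```python
# A toy predictive fraud flag. Features per claim (all hypothetical):
# [days taken to report, count of prior claims, shared-party network links]
from sklearn.linear_model import LogisticRegression

X = [[1, 0, 0], [2, 1, 0], [30, 3, 2], [45, 2, 3],
     [3, 0, 1], [60, 4, 4], [5, 1, 0], [20, 2, 2]]
y = [0, 0, 1, 1, 0, 1, 0, 1]  # 1 = claim was confirmed fraudulent

model = LogisticRegression().fit(X, y)

# Score a new claim: reported late, some prior history, two network links.
new_claim = [[25, 2, 2]]
print(f"fraud probability: {model.predict_proba(new_claim)[0][1]:.2f}")
```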

That said, “big” doesn’t automatically translate into “better” when it comes to claims data. Indeed, as with any computerized product, there’s the threat of garbage in, garbage out. However, the availability of big data, along with advanced programs and predictive models to amplify it, can make for more analytically driven investigations and claims decisions.

Bottom line, big data is what insurance claims managers make of it. The emergence of big data might have appeared to be a luxury to some at the start, but it is quickly becoming something of a necessity. Ideally, it could turn out to be an embarrassment of riches for those with the wherewithal to dig it up, polish it, and innovate ways to capitalize on it.

About the Author
Sam Friedman

Sam Friedman is insurance research leader with Deloitte’s Center for Financial Services in New York. He has been a Fellow with CLM since 2011 and can be reached at samfriedman@deloitte.com.
