Learning to Love Your Data

A five-step guide to implementing predictive analytics for claims management.

April 18, 2014

As Confucius said around 500 B.C., “Study the past if you would like to define the future.” Claims organizations of all sizes have access to large amounts of data stored within their networks. Predictive analytics techniques can be used to convert this data into knowledge and help the claims organization learn from its past experience, recognize patterns, monetize scenarios, and gain efficiencies throughout all claims processes. This can give the organization a dramatic competitive advantage and distinguish its service throughout the industry.

Many claims organizations use data, statistics, and analysis to detect fraud and identify subrogation opportunities. The question before small to midsize carriers and third-party administrators is, “Why do we want to introduce predictive analytics into the management of the claims operation?”

Predictive analytics can:

  • Quickly identify the 20 percent of claims that will result in 80 percent of paid losses (see the sketch after this list).
  • Determine the appropriate adjuster for the claim who can efficiently and effectively move the file to resolution.
  • Assign appropriate resources at the earliest possible moment so you can minimize your losses while meeting your contractual and statutory requirements.
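To make the first point concrete, the Pareto check itself is only a few lines of code once claims data is in a usable form. Below is a minimal sketch using Python’s pandas library; the column names and loss figures are hypothetical stand-ins for your own claims extract.

```python
# A minimal 80/20 (Pareto) check: rank claims by paid loss and find the
# claims that account for the first 80 percent of total paid losses.
# The claim IDs and amounts below are hypothetical.
import pandas as pd

claims = pd.DataFrame({
    "claim_id": ["C001", "C002", "C003", "C004", "C005"],
    "paid_loss": [125000.0, 4200.0, 980.0, 61000.0, 2300.0],
})

ranked = claims.sort_values("paid_loss", ascending=False).reset_index(drop=True)
ranked["cum_share"] = ranked["paid_loss"].cumsum() / ranked["paid_loss"].sum()

# Claims whose cumulative share stays within the first 80 percent.
critical_few = ranked[ranked["cum_share"] <= 0.80]
print(critical_few)
```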

If you accept that predictive analytics does have value and can provide a competitive advantage, the question is, “How do I implement it?” To achieve its true value, a predictive analytics implementation should be approached as a managed process. There are several project management approaches that can be used to develop a successful implementation model. Regardless of the approach, implementation is usually done in five steps.

Step 1: Build the Project Team

The key is to identify an experienced core team initially and let them guide the future strategy. The core team usually consists of a senior project manager and executive management sponsors. The senior project manager will be responsible for selecting, developing, and managing the team that implements the final process. Sponsors will be instrumental in providing cross-functional resources and helping legitimize the process and the changes that occur. 

Step 2: Define Business Objectives

The purpose here is to articulate clearly the business problem, goal, potential resources, project scope, and high-level project timeline. The project team must evaluate the current situation and associated opportunities and threats. Areas that can be improved practically are either intuitive or obvious from past problems. Other methods of defining the project scope may include interviews with subject-matter experts (claims adjusters, supervisors, and managers) or business leaders.

This step also involves the creation of baselines. The baselines will be the measuring sticks to judge improvement. Methods such as gap analysis can be used to compare current baselines against new ones developed as a result of this process.
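As a simple illustration, the sketch below computes two candidate baseline metrics, average cost per claim and median cycle time, from a handful of invented closed claims; the metric choices and figures are assumptions, not prescriptions.

```python
# A minimal baselining sketch: store the figures that a later gap
# analysis will compare against. All data here is hypothetical.
import pandas as pd

closed = pd.DataFrame({
    "total_cost": [4200.0, 980.0, 61000.0, 2300.0, 15500.0],
    "cycle_days": [45, 12, 310, 30, 95],  # days from report to closure
})

baseline = {
    "avg_cost_per_claim": closed["total_cost"].mean(),
    "median_cycle_days": closed["cycle_days"].median(),
}
print(baseline)
```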

Step 3: Data Normalization

To measure, we need to know what information is available and what is measurable. For the purposes of this article, we will limit the discussion to structured data, but as your team grows in sophistication, consider unstructured data as well.

The project team will need someone comfortable with extracting data from the system in a form that is usable for statistical modeling. This is the process of normalization. At this point, errors in the data will become obvious—names will not be consistent, commas will be inserted in the wrong places, and so on. To make the data usable, you will need to normalize it, which will be a subproject within this project. However, after the initial subproject, data normalization should be viewed as an ongoing process. New data enters the organization every day, and the accuracy of the model depends upon new, normalized data entering it on a continuous basis.
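The sketch below illustrates the kind of cleanup involved, again using Python’s pandas library; the inconsistent name formats and misplaced commas mirror the errors described above, and the column names are hypothetical.

```python
# A minimal normalization sketch: trim and lowercase inconsistent
# adjuster names and strip misplaced commas so amounts parse as numbers.
# A real project would also map names to a canonical form via a lookup table.
import pandas as pd

raw = pd.DataFrame({
    "adjuster": ["Smith, J.", "J. Smith", "smith, j. "],
    "paid": ["1,250.00", "980", "12,400.50"],
})

raw["adjuster"] = raw["adjuster"].str.strip().str.lower()
raw["paid"] = raw["paid"].str.replace(",", "", regex=False).astype(float)
print(raw)
```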

There are multiple tools that may be used for the purpose of data normalization. Microsoft Excel is a great entry-level tool for smaller carriers and TPAs. Someone experienced in Microsoft Excel can perform much of the sorting, grouping, charting, and analysis required for this step. Pivot tables are very useful in the initial normalization and analysis.
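For teams that outgrow spreadsheets, the same pivot-table summary takes only a few lines in pandas; the offices, claim types, and amounts below are hypothetical.

```python
# The grouping a spreadsheet pivot table performs, sketched in pandas:
# total paid loss by office and claim type. All data is hypothetical.
import pandas as pd

claims = pd.DataFrame({
    "office": ["East", "East", "West", "West"],
    "claim_type": ["auto", "property", "auto", "property"],
    "paid_loss": [4200.0, 980.0, 61000.0, 2300.0],
})

summary = claims.pivot_table(
    index="office", columns="claim_type", values="paid_loss", aggfunc="sum"
)
print(summary)
```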

Step 4: Build a Predictive Model

Once you have normalized your data, you will need someone who understands statistical analysis. There are many ways to build a predictive model. One of the simplest models is based on multivariate analysis (MVA).

MVA draws on multivariate statistics, the branch of statistics concerned with observing and analyzing more than one outcome variable at a time. In design and analysis, the technique is used to perform trade studies across multiple dimensions while taking into account the effects of all variables on the responses of interest.

Simply put, this analysis can help you understand cause and effect. The analyst team defines a dependent variable, or “Y,” such as “total cost of claim.” Next, they define factors (“X”) that might have an effect on the dependent variable. Once the data is loaded into the model, multivariate analysis can examine the effect of each factor on the dependent variable. Typical statistical software will report the number of observations in the data, the median and mean value of each variable, and a P-value for each factor (conventionally, P < 0.05 indicates statistical significance). It will also flag outliers (measured in standard deviations from the median), and the output provides insight into what drives your dependent variable.
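A minimal sketch of this Y-versus-X analysis appears below, using an ordinary least squares fit from Python’s statsmodels library; the factors, figures, and model form are hypothetical illustrations, not a prescribed approach.

```python
# Fit "total cost of claim" (Y) against two hypothetical factors (X):
# age and gender. The summary reports a coefficient and a P-value for
# each factor; P < 0.05 is the conventional significance threshold.
import pandas as pd
import statsmodels.formula.api as smf

claims = pd.DataFrame({
    "total_cost": [4200.0, 980.0, 61000.0, 2300.0, 15500.0, 7800.0],
    "age":        [34, 51, 47, 29, 60, 42],
    "gender":     ["F", "M", "M", "F", "F", "M"],
})

model = smf.ols("total_cost ~ age + C(gender)", data=claims).fit()
print(model.summary())
```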

For example, assume that our dependent variable is “total cost of claim” and our factors are claims criteria (gender, age, area, employer, etc.). Multivariate analysis can help identify claims groups that differ from the population of all claims in a statistically significant way. From this analysis, you can isolate the “critical few” claims with characteristics that make them more likely to have an adverse result. You can identify claims with potential high severity during the intake process and marshal appropriate resources, and you can identify claims with little or no potential financial impact, which can then be fast-tracked. Hypothetically, you might conclude that rear-end auto accidents involving a brown car manufactured in 1995 with mileage greater than 150,000 miles are disproportionately likely to result in C7 fractures. If you see a new claim with similar characteristics, you know to assign it to one of your more experienced adjusters.
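Once fitted, such a model can score a claim at intake. The sketch below applies a hypothetical triage rule; the severity threshold is an assumption you would calibrate from your own loss history.

```python
# A minimal triage sketch: refit the hypothetical model from the
# previous example, then route a new claim by its predicted cost.
import pandas as pd
import statsmodels.formula.api as smf

history = pd.DataFrame({
    "total_cost": [4200.0, 980.0, 61000.0, 2300.0, 15500.0, 7800.0],
    "age":        [34, 51, 47, 29, 60, 42],
    "gender":     ["F", "M", "M", "F", "F", "M"],
})
model = smf.ols("total_cost ~ age + C(gender)", data=history).fit()

new_claim = pd.DataFrame({"age": [45], "gender": ["M"]})
predicted = float(model.predict(new_claim).iloc[0])

THRESHOLD = 25000.0  # hypothetical triage cut-off
if predicted > THRESHOLD:
    print(f"Predicted ${predicted:,.0f}: route to a senior adjuster")
else:
    print(f"Predicted ${predicted:,.0f}: candidate for fast-tracking")
```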

Multivariate analysis also can be used to determine the characteristics of those claims most likely to result in certain types of incurred expenses. These include claims that result in litigation, medical case management costs, or expert witness consultant expenses. The model also is useful in identifying recovery opportunities. Further, you can identify characteristics of claims referred to the subrogation unit and special investigation unit.

This seems like an arduous process and, at times, it is. However, it will give you insights that you never would have expected, such as links between gender and inadequate reserving, between injury and managed-care success, and between adjuster effectiveness and particular plaintiff firms.

Step 5: Develop a Managed Process

Model development is the beginning, not the end. Once you have analyzed the data and trends, a claims file benchmarking review is necessary to validate the conclusions drawn from the data analysis. This is a recurring process, and a system has to be developed wherein this process can be repeated on a regular basis. Most cutting-edge organizations integrate this process within their claims management environment so that there is real-time feedback from the predictive model. 

The process does not end with the review. Once you have confirmed your conclusions, create and implement an action plan that identifies, tests, and implements solutions to the problems you have uncovered. The diverse project team identifies creative solutions that eliminate key root causes in order to fix and prevent process problems. Use brainstorming techniques, but try to focus on obvious solutions. Esoteric solutions are difficult to sell to staff and senior management and may create doubt regarding the effectiveness of the entire project.

Development of a robust monitoring system is critical to sustain the project’s gains. You have presented your observations, analysis, and proposed solutions to management and achieved buy-in for the process changes. The solutions are implemented, and process controls must be monitored to ensure continued and sustained success.
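One simple form of such monitoring is a control-limit check: compare each new month’s metric against limits derived from the baseline period. The sketch below assumes hypothetical monthly averages and the common rule of mean plus or minus three standard deviations.

```python
# A minimal process-control sketch: flag a month whose average claim
# cost falls outside baseline control limits. All figures are hypothetical.
import statistics

baseline_monthly_avg = [5100.0, 4800.0, 5350.0, 4950.0, 5200.0]
mean = statistics.mean(baseline_monthly_avg)
sd = statistics.stdev(baseline_monthly_avg)
lower, upper = mean - 3 * sd, mean + 3 * sd

current = 6400.0
if lower <= current <= upper:
    print(f"{current:,.0f} is within control limits: gains sustained")
else:
    print(f"{current:,.0f} is outside ({lower:,.0f}, {upper:,.0f}): investigate")
```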

Extra Credit: Claims File Benchmarking Review

A claims file benchmarking review is an important step in predictive analytics. The review should cover a statistically representative sample of claims, and it should consist of open and closed claims from the last 24 months, stratified by both office and adjuster. Of the closed claims, you might want to look only at the files of claims handlers who are still with your company.
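Drawing that sample can be scripted. The sketch below stratifies a hypothetical claims list by office and adjuster and samples from each stratum; the sample size of two files per stratum is an assumption, not a standard.

```python
# A minimal stratified-sampling sketch for the file review, using pandas.
# The claims, offices, and adjusters are hypothetical.
import pandas as pd

claims = pd.DataFrame({
    "claim_id": range(1, 13),
    "office":   ["East"] * 6 + ["West"] * 6,
    "adjuster": ["Ann", "Ann", "Bob", "Bob", "Ann", "Bob"] * 2,
})

# Two claims per office/adjuster pair, drawn reproducibly.
sample = claims.groupby(["office", "adjuster"]).sample(n=2, random_state=1)
print(sample)
```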

To benchmark your claims behaviors, make sure all audit questions are objective (i.e., they can be answered yes, no, or not applicable). They should measure a broad range of adjuster behaviors, such as investigation, evaluation, negotiation, and reserving. The questions should also probe relationships that are not obvious; this can be as simple as the link between witnessed accidents and injury type. This is where you create questions to test observations gained through your multivariate analysis.
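Objective yes/no/not-applicable questions also lend themselves to simple scoring. The sketch below records hypothetical answers for one file and computes a compliance rate over the applicable questions only.

```python
# A minimal audit-scoring sketch; the questions and answers are invented.
answers = {
    "Insured contacted within 24 hours?": "yes",
    "Liability decision documented?": "no",
    "Reserve analysis completed?": "yes",
    "Subrogation potential screened?": "n/a",
}

applicable = [a for a in answers.values() if a != "n/a"]
compliance = applicable.count("yes") / len(applicable)
print(f"Compliance rate: {compliance:.0%}")  # 67% here: 2 of 3 applicable
```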

Finally, make sure your operational definitions are clear. Your benchmarking should address adjuster behaviors, which are the concrete actions an adjuster takes on a file. They can include: contact with the insured, contact with the claimant or claimant’s attorney, liability decision documentation, damages documentation, reserve analysis, and adjuster control of the file.

The views and opinions expressed herein are those of the authors and do not necessarily reflect the views of Armed Forces Insurance, LbGlobalLaw, Westport Insurance Corporation, Swiss Re, or their employees.
 

SIDEBAR

Structured vs. Unstructured Data

Structured data is easy to see. It is on your computer screen in the form of input fields and the canned reports that hit your desk on the first of the month. View your structured data at the transaction level: each reserve change, payment, payee, and date is a structured data element. Structured data also may include every premium transaction, including new policy premium, endorsement premium, cancellation refunds, and premium for facultative placements. Unstructured data is what you will find in claim notes and emails; it is hard to pull out and even harder to analyze.
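As a toy contrast, the snippet below shows why the two differ in practice: the structured transaction parses directly into typed fields, while the claim note would need text mining first. Both records are invented.

```python
# Structured: typed fields, immediately queryable.
structured_transaction = {
    "claim_id": "C001",
    "type": "reserve_change",
    "amount": 5000.00,
    "date": "2014-03-01",
}

# Unstructured: free text that must be parsed before it can feed a model.
unstructured_note = "Spoke w/ clmt atty; MRI scheduled next wk, demands policy limits."

print(structured_transaction["amount"])   # trivial to extract
print(len(unstructured_note.split()))     # word count, but no meaning yet
```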

About The Authors
C. Michael Mattix

C. Michael Mattix, JD, CPCU, is general counsel and chief claims officer for Armed Forces Insurance. He has been a CLM Fellow since 2009 and can be reached at mattixopks@gmail.com.

Nilanjan "Nicky" Mukerji

Nilanjan “Nicky” Mukerji is chief information officer of LbGlobalLaw. He has been a CLM Fellow since 2009 and can be reached at nicky.mukerji@lbgloballaw.com.

Steven R. Henning

Steven R. Henning, MBA, CPCU, ARe, is vice president of Westport Insurance Corporation, a member of the Swiss Re Group. He has been a CLM Fellow since 2009 and can be reached at steven_henning@swissre.com.
