The Use of AI and the Future of Claims

Are we ready for artificial intelligence to become our business partner?

March 03, 2018

This article is part of CLM's publication Professional Times magazine, a production of CLM's Management & Professional Liability Community.

Artificial intelligence (AI), also called cognitive computing, is the use of machines to think and perform tasks like humans. Smart machines can process massive amounts of data, identify patterns, test hypotheses, and find solutions, and they can do all of that in record time.

There are two types of artificial intelligence: hard and soft. Hard AI is focused on having machines think like humans, while soft AI is programmed to do jobs that traditionally could be completed only by humans. The main difference is that soft AI is more narrowly focused on a specific task, whereas hard AI involves computer software and systems that do more than the tasks for which they have been programmed in advance—they actually learn as they go, improving performance through feedback.

The quantity of data that businesses often need to analyze is massive, and reviewing it manually requires an extensive amount of time, so speed and time savings are an issue. Accuracy is another: if a reviewer is looking for buzzwords or a particular number, for example, a computer can identify those more accurately. By shifting such analysis to computers, AI is changing the way people think and do business.

Soft AI is used in many businesses. Lawyers use it for research. It assists industries that need to analyze information in real time, such as the news media. Companies use it to determine consumer patterns by analyzing consumers' habits, ages, genders, satisfaction, and more. AI is used to build models that analyze behavior and find signs of fraud (for example, in credit card use patterns). AI can analyze stocks and markets and help determine probabilities of success. It is used in our cars, smartphones, search engines, homes (Alexa and robo-sweepers), and surveillance. Video games also use AI to drive in-game interaction.

Insurer Use of AI

Putting all of this in the context of insurance, many aspects of claims already use AI, but how far will this go? Uses may include reviewing past claims, determining premiums, reporting, evaluating coverage, hiring a lawyer, determining settlement value, and, ultimately, predicting how judges will rule on matters.

We know that AI is constantly being developed for underwriting functions. Actuaries use information from past claims in the industry nationwide to develop the claims history of a specific insured. Through such analysis, insurers determine rates, trends, risks, and pricing. As a result, certain industries—like surveyors and geotechnical engineers—become harder to place. Certain areas become uninsurable for a period of time—like Florida and Texas after storms, and California after earthquakes.
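To illustrate the flavor of that analysis, below is a minimal frequency-severity sketch in Python. The loss figures and exposure counts are invented, and real actuarial ratemaking layers in far more: trend, loss development, expenses, and credibility weighting.

    # A toy frequency-severity view of ratemaking. All numbers are
    # invented; real actuarial pricing also accounts for trend, loss
    # development, expenses, and credibility weighting.

    claims = [12_000, 48_500, 7_250, 130_000]  # hypothetical past losses ($)
    exposures = 250                            # policy-years observed

    frequency = len(claims) / exposures        # claims per policy-year
    severity = sum(claims) / len(claims)       # average cost per claim
    pure_premium = frequency * severity        # expected loss per policy-year

    print(f"frequency={frequency:.3f}, severity=${severity:,.0f}, "
          f"pure premium=${pure_premium:,.0f}")

The pure premium is simply total past losses divided by exposures; everything an underwriter adds on top of it is judgment about how the past will, or will not, repeat.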

It is also apparent that carriers are constantly looking for ways to save money, and AI offers solutions. Examples include using third-party administrators that are compensated based on the length of time a file is open; fixed fees for different stages of claims handling; and early assessments of settlement value based on the cost of defense. Underwriters already use algorithms for their analyses, so what prevents claims professionals from doing the same? While it can be argued that the human element must be taken into account when evaluating a claim, other considerations can easily be reduced to a number: conservative jurisdictions may get a lower value, while plaintiffs' attorneys with a reputation for tenacity or for taking cases to trial will get a higher one. A settlement figure can then be spit out based on the probability of a defense verdict and the cost of defense.
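To make the idea concrete, the sketch below shows what such a computation might look like. Every factor name, weight, and dollar figure here is a hypothetical assumption for illustration, not any carrier's actual model.

    # A purely hypothetical sketch of algorithmic settlement valuation as
    # described above. Factor names, weights, and figures are assumptions.

    def settlement_value(base_exposure: float,
                         jurisdiction_factor: float,
                         attorney_factor: float,
                         p_defense_verdict: float,
                         defense_cost: float) -> float:
        """Expected-cost view of a claim: exposure adjusted for venue and
        opposing counsel, weighted by the probability of losing, plus the
        cost of mounting a defense."""
        adjusted = base_exposure * jurisdiction_factor * attorney_factor
        expected_indemnity = adjusted * (1.0 - p_defense_verdict)
        return expected_indemnity + defense_cost

    # Example: conservative venue (0.8), tenacious plaintiffs' counsel (1.3),
    # 60 percent chance of a defense verdict, $75,000 projected defense spend.
    print(f"${settlement_value(500_000, 0.8, 1.3, 0.60, 75_000):,.0f}")

Note what such a model leaves out: everything the next paragraph is about.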

Clearly, our technology is ready to provide an evaluation of claims, but are we ready to use AI as our judge? Hard AI, while replicating human learning and decision making, does not have the human factor locked down. A computer cannot be sympathetic or mad or wish to punish a party. Nor can a computer come up with a creative solution that did not exist as an option. Often, we see that a case goes to trial with each party advocating their best theory, only to find out that the trier of fact actually looked at something completely different. Can an AI judge do that? What about our constitutional right to a jury of our peers? How can an AI machine simulate our peers with their diversity and history? Will there be a simulator for women, Hispanics, African Americans, and others? What about the courts of appeal? How far can we go? Even if computers get it right 90 percent of the time, does this coincide with our morals?

AI as Counsel

A carrier owes a duty to its insured to provide a defense. Counsel must be competent, but counsel need not be the best in the practice. Will an AI attorney be the best? Can it prevent mistakes typically associated with counsel, such as blowing the statute of limitations? Can AI write a compelling brief that is emotional or that appeals to public policy?

While the suggestion that AI will serve as counsel seems far-reaching, carriers already use AI to review billing statements based on certain buzzwords. As we know, the reviewing software is not aware of the case facts, the complexity, or the need to review a document for three hours instead of three minutes. Neither can the software ascertain that certain discovery must be handled by an attorney rather than a paralegal. Is this process ethical, or does it allow a third party to improperly dictate how an attorney should handle a case?
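In its simplest form, that review software amounts to keyword flagging. The sketch below is a minimal illustration of the technique; the flagged terms and billing entries are invented.

    # A minimal sketch of buzzword-based legal-bill review, as described
    # above. Flagged terms and entries are invented for illustration.

    FLAGGED_TERMS = {"review", "conference", "intra-office"}

    def flag_entries(entries):
        """Return entries whose narrative contains a flagged term. Note what
        this cannot see: case facts, complexity, or whether a task genuinely
        required three hours of attorney time rather than three minutes."""
        return [(text, hours) for text, hours in entries
                if any(term in text.lower() for term in FLAGGED_TERMS)]

    bills = [
        ("Review medical records re: causation", 3.0),
        ("Draft motion for summary judgment", 5.5),
        ("Intra-office conference re: discovery strategy", 0.4),
    ]
    for text, hours in flag_entries(bills):
        print(f"FLAG: {text} ({hours} hrs)")

A matcher this blunt flags the 3.0-hour record review and the 0.4-hour conference alike, regardless of whether either was reasonable, which is precisely the ethical concern raised above.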

The more we rely on AI, the less control we have. When we lose a trial, we can appeal. When we do not like an attorney, we can replace him. Yet, if an insured loses a case, how can the insured prove that, but for his robo-lawyer, he would have successfully defended the lawsuit? Hopefully, robo-lawyers remain very much in the distant future; however, lawyers today already use AI to assist their practices and will increasingly do so. An attorney uses AI to do research, so how can a client prove that his lawyer missed something in computer-assisted research that a reasonable lawyer would not have missed?

When we start allowing computers to predict human outcomes, one problem that arises is how to regulate them. Current technology legislation is concerned mostly with data privacy and autonomous vehicles, but that will need to change: AI is being used more widely, and it ventures into areas that we may find unacceptable from an ethical or privacy standpoint. We will need to create regulations that can adapt, and to determine, as a society, how much disclosure must be made about these systems while still protecting the integrity of the process. Yet we must accept that, even if laws are drafted and adopted, they will change as our use of AI evolves.

Surely, AI makes our lives easier, but the costs include cyberattacks, privacy breaches, and moral missteps. We accept those faults for convenience, but how long will we tolerate them? We cannot stop this development, so we must plan for it better. At the very least, we can, and should, self-regulate, industry by industry, on what is acceptable in the development and use of AI.

About the Author
Rinat B. Klier Erlich

Rinat B. Klier Erlich is a partner at Zelms Erlich & Mack. rerlich@zelmserlich.com
