The Expert:

Joel Raedeke, Senior Vice President of US Technology, Crawford
QUESTION: There has been a lot of AI hype, promises, and bold predictions. Where have you seen AI deliver best for claims departments so far, and where has it fallen short up to now?
A: There’s no question that AI has brought real momentum to the claims world, but the most meaningful progress can be seen in cases where it augments adjusters rather than replaces them. The claims process is, at its heart, a human one. It’s built on judgment, empathy, and strategy, and our goal is to use technology to strengthen those traits.
We’ve seen AI deliver its best results when it provides timely, contextual insight that helps the adjuster make smarter decisions. Predictive modeling, for instance, can flag claims that are statistically more likely to become litigated and surface the “why” behind that prediction. That allows our teams to intervene early, build trust with the claimant, and prevent escalation. Similarly, large language models now help us detect psychosocial risk factors—things like depression or anxiety cues buried deep in medical documentation—and bring them to the surface so the adjuster and nurse can engage more holistically with injured workers. These are powerful examples of how AI extends the adjuster’s line of sight and improves outcomes without taking ownership away.
Another high-value use case has been information synthesis. Adjusters handle dozens or even hundreds of claims at a time, jumping between files constantly. AI can remind them of their own notes or prior observations, with prompts like, “You mentioned potential drug-seeking behavior here,” which helps them reconnect to their own strategy. This is augmentation in the purest sense: It supports recall and context, but the thinking still belongs to the human.
Where AI falls short is when it tempts us toward speed at the expense of ownership. Clients consistently tell us their favorite adjusters are strategists who know where a claim is going and why. If we automate too aggressively, we risk turning those strategists into button pushers. I could, for example, have a model draft a very convincing claim action plan, but then it’s no longer my strategy. If the adjuster is detached from the reasoning that gives a plan meaning, that loss of connection is a serious risk.
To guard against this, we take a layered approach. While model accuracy is still maturing, we often route insights through a supervisor or analytics lead rather than pushing them directly to adjusters. Those central reviewers digest the data, decide what’s credible, and then bring it to the field in the language of normal human conversation, not “The machine said so.” That preserves trust and ensures AI is seen as a teammate, not a taskmaster.
There are low-complexity lines of business and claim types where straight-through automation makes sense, but only when we’re at 100% reliability. For most claims, it is critical to have a person with deep expertise take ownership of claim decisions: Keep the human firmly in the driver’s seat. Just as you wouldn’t let a self-driving car take over at 95% accuracy, we don’t let a model make decisions that still require nuance.
AI has given us extraordinary new tools for prediction and pattern recognition, but in claims, the real differentiator remains human judgment and accountability. Our job is to ensure those stay front and center.