Throughout 2025, we will see a huge shift in the magnitude of cyberattacks, and we expect continued increase in the coming years. The most typical attacks—business email compromises (BECs), wire fraud, and ransomware attacks (which typically lead to data breaches)—will be even more amplified, leading to bad press, scrutiny, and much bigger paydays for the threat actors.
To make matters worse, these risks are evolving. Threat actors have learned that organizations are no longer interested in paying, or compelled to pay, the ransom when a ransomware attack occurs. There is even more government scrutiny around ransom payments, making it more challenging for organizations to pay when attacked. And because organizations are much savvier at properly securing and backing up data for business continuity, at times, ransom payment is not warranted because organizations can recover their data from backups. With this context in mind, we will see a shift in threat actor tactics in several ways.
First, we expect that threat actors will not only encrypt data in ransomware attacks, but also exfiltrate (steal) data and threaten to publish it if the ransom is not paid. This is called the “double extortion method.” Because more organizations are backing up their data properly, threat actors may skip the encryption process altogether and just exfiltrate the data and threaten to publish if the ransom is not paid. This is more efficient for threat actors since they do not have to provide ongoing support in the decryption process, and it is more effective as organizations might have no choice but to pay, depending on the nature of the stolen data.
Second, threat actors will likely begin to demonstrate more patience before striking, looking to make the biggest impact. In the past, threat actors attacked single organizations in hopes of a quick hit and then moved on if their efforts were not fruitful. This can be laborious and make paydays uncertain. Now, threat actors are more likely to target larger supply chains and vendors. With these more targeted attacks on supply chains, we will see threat actors access vulnerable networks (more often via access brokers) and remain dormant and undetected for an extended time— maybe months—until the time is right to attack. They will sit back, watch email traffic, monitor the network, and figure out who the organization’s key players are, looking for the customers and key stakeholders before striking. They will watch finances and become more strategic and credible when making ransom demands.
Third, there will be continued exploitation of zero-day vulnerabilities. A zero-day vulnerability is a security flaw in technology that a threat actor can exploit before the vendor is aware. As we have seen more recently, zeroday exploitations are a proven way for threat actors to scale attacks to extort more money, especially when so many organizations rely on SaaS products and other outsourced technology.
The market can also expect to see a continuation of supply chain and vendor attacks, which will disrupt businesses in emerging ways. For example, with the increasing connectivity among organizations and consolidation of technology management solutions, vendors are a lucrative target. Threat actors can turn their attention to single points of failure with targeted attacks that impact the vendor and many more downstream customers (and those customers’ customers). Those threat actors targeting the supplier or vendor can also leverage the attack by having the downstream customers put pressure on the victim vendor to pay the ransom because their business is relying on that victim returning to normal operations. This is an evolving tactic that resulted in a huge payout in 2024 when the industry saw the Change Healthcare, CDK Global, and PowerSchool ransomware attacks that affected so many downstream customers.
With that single-point-of-failure strategy, other ancillary threat actors not involved in the direct supply chain/ vendor attack will leverage the event to pose as vendor support (via phishing emails or impersonation in phone calls), gaining unauthorized access to attack and further exploit the situation and monetize quickly.
Threat Actors and the Rise of AI and Machine Learning
Threat actors are beginning to leverage AI in several ways. For a while now, they’ve been using Generative AI (GenAI) to create more convincing social engineering attacks. They are generating more believable phishing emails that read in a tone/style as a trusted colleague. The emails are also translated into various languages across the enterprise and to scale. Phishing emails used to be easier to spot due to grammatical errors and low sophistication. Now AI is used to create the emails, removing the easy-to-spot flags and enabling more seamless social engineering.
Once threat actors are in a network, they use AI to review an organization’s data more quickly to make more credible threats. Instead of manually reviewing internal documents to learn about the organization and its financials, which takes time, threat actors are using AI to help expedite the review. This more sophisticated review could, perhaps, even find the cyber insurance policy or P&Ls and make a credible demand, one that it knows the organization can afford to pay from a financial perspective and can't afford to not pay from a data or operations perspective.
Additionally, threat actors are using AI to automate finding and exploiting vulnerabilities before they are patched. Using AI to write malicious code for ransomware attacks, threat actors make the ransomware industry more accessible to less-technical threat actors.
Legitimate AI agents themselves are also becoming targets of compromise. AI agents are chatbots that customers interact with on certain websites (automated customer service). As more organizations implement AI agents, we expect threat actors to target them. There has been an increase in injection attacks against these agents, where threat actors use the agents to get victims to disclose sensitive information, reset passwords, and disclose passwords, even tricking users into transferring money.
What Does This Mean for Insurance?
Insurers face many challenges around the increased use of AI, beyond just cyber insurance exposure. When their AI tech is compromised, organizations face professional liability and other errors and omissions exposures. For example, with the AI agent attacks, an AI agent was commandeered by a threat actor who quoted the customer a price for a product, and the customer wired the threat actor the money. The customer never got the goods, leading to a thirdparty claim against the company. There will be other challenges with the misuse of AI in real estate, tourism, and other miscellaneous professional services.
For media errors and omissions, we predict a huge copyright exposure with AI because their models rely on large amounts of data sourced from third parties. So far, there seems to be a lack of transparency in the source of data and how the information is stored, presenting potential copyright issues. Defamation and discrimination exposure are also increased because AI can generate inaccurate info and biased outputs that can impact companies, employees, and customers.
For directors and officers insurance, there’s a new concept called “AI Washing”—companies exaggerate the use of AI in their products and services to boost market appeal and inflate valuation. This can mislead investors and lead to regulatory investigations and derivative lawsuits.
On the medical malpractice side, the challenges and risks are also increasing. The number of FDA-approved AIenabled medical devices has increased in recent years, resulting in AI being present in more doctors’ offices (e.g., to synthesize conversation to medical notes). If AI leads to a poor clinical outcome, this could pose medical malpractice risk.
For cyber and tech errors and omissions insurers specifically, the challenges are great. There are privacy concerns, as AI’s ability to process and analyze large volumes of data can undermine efforts at anonymization. AI can potentially identify individuals, even if personal information is not directly included, by correlating and synthesizing information from multiple data points across a dataset. For tech errors and omissions and AI agent misuse, a customer can sue the vendor that built and integrated the product into that customer’s system.
AI and Preventing Cyberattacks
AI and automation can be used by companies, insurers, and threat actors. If AI can help threat actors write and inject malicious code, it can similarly help organizations identify vulnerabilities and write code to patch bugs more quickly. This means no more waiting for patch schedules or manual review, which mitigates potential exposure.
Similarly, AI in data breaches can provide significant cost mitigation because organizations utilizing AI can detect and contain data breaches much faster compared to those not using AI technologies.
Finally, an increased focus on AI to underwrite risks, regardless of exposure, can help scale the volume of insurance submissions. For cyber insurance submissions, underwriters can get through applications faster and focus on the pieces that matter, such as critical risks that need addressing before a carrier is willing to underwrite the risk.
Insurers are adapting their cyber insurance products and policies to address the growing sophistication of threat actor tactics and techniques.
Generally speaking, the traditional approach to insurance has been reactive. If a person or company purchases insurance, the policy will react to an event and ideally will make the policyholder whole again. Over the last several years, cyber insurance has evolved from a reactive financial safety net to a proactive partner and enabler of cyber resilience. Cyber insurance carriers are forced to adapt because the threat landscape is not static. Now, at a minimum, most cyber insurance products have value-added services and continuous monitoring to prevent cyberattacks, and, if needed, they engage highly skilled cyber claims experts with technical expertise to mitigate further exposure.
The threat landscape is constantly changing, and the only way to stay ahead of the game is through proactive insurance that includes services such as ongoing monitoring and threat detection. Cyber insurance carriers are adapting on the underwriting side by requiring specific security measures, such as multifactor authentication (MFA) on endpoints, endpoint detection and response (EDR), patch management, dual authentication for wire transfers, and other pre-policy risk assessments. Some require potential policyholders to list out their tech stack, MSP, or other technology vendors the organization relies upon for aggregation purposes.
Many cyber carriers are integrating endpoint detection tools as part of their offering and building out in-house computer forensic teams to help respond when there is an incident. They use the forensic information when investigating in the underwriting feedback loop and to understand the current threat landscape and threat actors’ techniques and procedures.
About the Author:
Kirsten Mickelson is the cyber claims practice leader at Gallagher Bassett Specialty. kirsten_mickelson@gbtpa.com