Artificial intelligence is ubiquitous. The top application downloads in 2025 were AI search app for mobile devices. Young professionals entering the workforce are now considered “AI natives.” You cannot attend an industry conference without hearing about how AI is revolutionizing your industry. AI products are rapidly being developed and deployed to increase productivity.
With the deployment of all these new AI products, in all these new ways, organizations are increasing their attack surface for threat actors to exploit. It also introduces supply-chain vulnerabilities with the reliance on third-party AI tools and model integrations. And, as no surprise, the threat actors seem to always be a few steps ahead of the game.
Cyber claims professionals have a front-row seat when it comes to how threat actors are weaponizing AI for personal and financial gain. There is an increased use in AI by these threat actors, and the industry expects that in the near future, the majority of reported cyber incidents will have an AI component to them.
AI-driven Phishing Attacks Are on the Rise
Those same phishing emails that were poorly written, with spelling and grammatical errors and obviously fake reply-to addresses and embedded images, are now being refined and recreated as highly personalized messages with realistic images, thanks to AI. They are also showing up in SMS messages, phone communications, and social media outreach. The goal is the same: to access sensitive information, get into a system, receive funds, or prompt a user to install a malicious file that might deploy ransomware. These same threat actors can scale phishing attacks, even translating to multiple languages, to increase chances of success.
AI-generated video, image, or audio, known as deep fakes, are baiting targets through manipulated video or voice recordings. These sometimes feature a vendor or client asking the recipient to wire funds to a new account or allow system access. In fact, early last year a threat actor used AI to create a deepfake video conference complete with a fake CFO and other colleagues. The real finance colleague was invited to the video call. It was so realistic that it convinced the employee to wire $25 million to a fraudulent account.
Ransomware attacks are leveraging AI to automate ransomware deployment. This includes using AI to research targets, identify system vulnerabilities, write malicious code, and detect access points. AI is also being used to harvest credentials to get into systems and remain undetected while doing reconnaissance. This weakens the barrier for entry.
Once the threat actor is in the system, they can steal hundreds or thousands of documents and quickly sift through data that would have previously taken days, if not weeks, to go through. With AI, data can be quickly digested, which helps increase credible threats made in real time.
When negotiating with threat actors on the ransom payment, AI-powered chatbots are now being used. These allow threat actors to mimic the tone and appearance of legitimate customer service portals, introducing a chilling level of professionalism to what is essentially extortion and dealing with criminals.
Prompt injections in generative AI systems are being used to attack organizations. Threat actors trick AI systems in their prompts to reveal confidential information such as customer data, trade secrets, internal policies, or API keys. Malicious prompts can manipulate AI systems with access to integrated tools to perform actions such as sending emails, initiating transactions or revealing internal code.
AI Cyber Crime and Cyber Liability Insurance
Cyber insurance policies will typically cover the majority of AI-related cyber claims, depending on the circumstances of the incident and specific language of the policy. Unauthorized access to the victim’s system and compromised data is also typically covered. In the cases where a third party hosts an organization’s data, the holding organization is responsible for the data and must notify data subjects in the event their information is compromised. This is also typically covered by a cyber insurance policy. It may also help the victim tender the costs to the vendor whose system was compromised. Similarly, a tech E&O policy may cover third party claims for failure to implement services correctly in the event of the vendor’s AI product integration in a client system that was compromised.
Phishing, wire fraud, and ransomware are typically covered under cyber insurance policies. Deep fakes threats require an added layer of analysis when selecting a policy because coverage will depend on how regulated information was obtained in the event of an attack.
Increased concern around weaponizing AI for cyber crimes has led to carriers adding AI-related endorsements to their policies that clarify or provide coverage for emerging risks related to AI. Endorsements providing coverage for non-compliance with new laws and regulations governing AI, such as the EU AI Act and the US enacting AI laws are indicative of where this type of coverage is going to be more common.
Mitigating AI-Related Risks
Previously recommended controls like multifactor authentication (MFA), dual authentication for wire transfers, encryption, date and network segmentation, and viable backups are now table-stakes cybersecurity requirements. However, with new cyber threats encompassing AI, organizations should layer their defenses.
Changes in the AI environment require all employees to stay vigilant to provide the best protection. Be aware of urgent messages asking for money or credentials and sudden changes in banking details. The most common social engineering claim is impersonating a vendor and sending emails with new banking details to clients when invoices are due. It is rare to change bank information in both personal and professional instances. It is especially rare for an organization to change banks/accounts when a large payment is due.
Maintain awareness of these communications and follow protocols through employee training so your organization can create a culture of cybersecurity awareness. Empower employees to speak up when something seems off. Employees should feel comfortable speaking to a senior leader if they have a question or concern about the legitimacy of an email or funding request. An organization’s employees are its biggest asset and in the best position within the organization to prevent cyberattacks.
Leaders should understand how and where AI is used throughout their organization and conduct a risk assessment of all AI. Is there a public-facing chatbot? Are there third-party integrations with other products? And if so, where are these products hosted? Where else in the organization is AI relied upon? What is the organization’s policy about shadow AI? And what sort of information is going into any Large Language Model (LLM) and who “owns” that data? What if there is unauthorized access into that LLM? Is confidential company information being copied and pasted into GenAI tools?
For AI products that are provided or hosted by a third party, organizations should review those contracts to understand all limitations of liability, insurance requirements and other provisions to safeguard data, including ensuring an organization’s vendors carry sufficient limits in cyber insurance.
Implementing and enforcing MFA, is critical and not just on email accounts. It should be used for any endpoint and enforced on all admin accounts and on VPNs.
When an attack happens through a VPN that doesn’t have an MFA, the threat actor can fully infiltrate the network, as we’ve seen in recent claims occurring more often. Since threat actors are now more successful in bypassing MFAs, the gold standard has changed to using a biometric factor for authentication.
To further protect themselves, organizations should have a tested incident response plan (IRP) that gets updated and tested routinely. This ensures the organization is in the best position to mitigate further risk and exposure if/when an AI related cyber-attack happens.
Organizations can best protect themselves by understanding where and how AI is used throughout their organization and vendors so programs can be monitored and patched quickly. They should also keep current with evolving regulatory framework as states develop their AI laws. Lastly, it’s critical to regularly review changes in your company’s use of technology and AI to ensure all policies will provide adequate coverage in the event of an attack.