ChatGPT’s 2022 arrival was compelling because of the endless possibilities it seemed to introduce. This new tool was just as capable with the technical feat of coding as with the artistic and unstructured process of building out a new game.
Of course, endless possibility and risk assessment are not the optimal combination: When it comes to managing a book of business and, most importantly, staying in business, finite possibilities are preferred.
As we close in on two years since ChatGPT was first launched, some use cases for AI are becoming clearer, opening the door for more extensive analysis from insurance and
risk management professionals. This article will focus on the impacts AI could have on the advertising industry—for both big players and small—as well as the impact of chatbots and how that risk aligns with the Commercial General Liability (CGL) policy: specifically Coverage B – Personal And Advertising Injury Liability.
So You Think You Can Advertise?
Early indications are that advertising, from Madison Ave. to Main Street, will be significantly disrupted by generative AI (GenAI):
- WPP, the largest advertising agency in the world, announced a partnership with chipmaker Nvidia to build out AI content engines that could turbocharge the $700 billion advertising industry.
- On a more localized level, Meta appears to be targeting small businesses with the launch of a GenAI product that will equip any user with the tools to produce advertisements. Companies that previously lacked the budget or appetite to advertise will soon be able to change that. This product will reportedly launch by the end of 2024.
While WPP’s announcement will have broad implications, the risk impact may manifest more significantly with small businesses now able to put on their “Don Draper” hats. Advertising professionals at medium-to-large
firms have industry expertise, legal departments, and insurance policies tailored to their risks. For small businesses, this will not be the case.
A reduced barrier to entry for small businesses to advertise certainly has its benefits, but the ease of use could create additional exposures for them and their insurers. A local gym could generate ad copy that accidentally infringes on another company’s copyright. An irritated coffee shop owner could disparage a competitor through a harsh meme after receiving word that he is being undercut on price. In addition to jolting your social media feed with more ads, these tools could change the small business risk landscape and jumpstart claims activity that has not been typical of a CGL policy.
Chatbot Use Poised to Pick Up
The increasing use of AI-powered chatbots represents the convergence of business economics and technological advancement. Even before ChatGPT and other AI tools were released, chatbots were commonly recognized as a product that can reduce customer service costs, among other benefits.
AI not only improves chatbots, but also democratizes access to them. Small-to-medium-sized businesses that could not previously afford to deploy and maintain a chatbot now have the resources to interact with customers through one, and it appears they very much intend to take advantage: 71% of the 4,500 employees Forrester Research recently surveyed indicated that their employers were implementing, or at least preparing to implement, conversational AI and chatbots.
This potential surge in use significantly increases the pool of companies that may be exposed in ways they, and their insurers, had not previously considered.
Libel, Slander, and Disparagement
The issue of hallucination with AI-powered chatbots has been oft-discussed, and one of the early cautionary tales came in Australia, when ChatGPT allegedly produced a response about a sitting mayor that falsely linked him to a foreign bribery scandal. The mayor filed an action against OpenAI (the owner of ChatGPT) for defamation shortly after, but has since dropped the suit, in part because of the challenges in proving defamation.
The high standard to prove defamation does not necessarily provide relief to a GL insurer of a company whose chatbot is alleged to defame an individual or another company. With the duty to defend being broader than the duty to indemnify, one misguided response from an insured chatbot could precipitate a costly defense of a claim.
Another example indicates that chatbots can be provoked into potentially defamatory action. In 2023, an exchange that a frustrated customer had with a delivery service’s chatbot went viral as, after some goading from the customer, the bot began issuing profanities and even referred to the service as the “worst delivery firm in the world.”
Ultimately, this technology is still quite novel, and it is unclear if there are safeguards fully in place to protect against nefarious use from a prompter who prods the chatbot into potentially disparaging individuals or other companies.
Privacy Risks
Breach of privacy claims have mushroomed over the last decade as reams of personal information have become available. This trend, coupled with strongly written statutes that can lead to stiff penalties, such as GDPR, CCPA, and BIPA, has brought privacy risk to the forefront and led to a number of claims being brought under the CGL.
Privacy risks have been top of mind as chatbots powered by AI models have been deployed. Here are just a few privacy considerations that are emerging:
Breach of privacy through evaluation of training data. Transparency in how AI models have been trained, and what type of data is used, has been a focal point in proposed AI legislation in the U.S., such as the AI Foundation Model Transparency Act. If enacted, this regulation could necessitate the auditing of commercially used AI models, creating a clear view into certain types of personally identifiable information (PII) potentially being illicitly used in the training of the model. Relatedly, this could also exacerbate the risk of infringement claims being brought if the training set is determined to infringe on another entity’s information.
Hacking risk. While cyber risk continues to evolve, one constant is that black-hat hackers will continue to target the most lucrative opportunities. Given the wealth of information they possess, AI chatbots may very well be the next frontier for nefarious cyber activity (ChatGPT itself leaked personal data, as well as conversations, in a recent hack).
A company that relies on third-party technology for its chatbot could be exposed to a major hack, as that third party may become an attractive target for cyber criminals given all the different companies, and all the data, it services.
Conflict with existing regulation. Regulation is a primary driver of privacy risk. In addition to eventual legislation that directly addresses AI, we also need to contemplate how existing regulations could be enforced. For example, the Delete Act was recently passed in California; it enables California residents to ask that all data brokers delete their personal data and forbids brokers from selling or sharing it. Some believe that data used to train a model cannot be deleted because the model cannot be “untrained.” Could that dynamic open up a torrent of claims scenarios for AI that is used in California?
Could a Chatbot Be an Advertisement?
If chatbot use becomes more prevalent, we may see coverage issues that raise untested questions. One that may need to be top of mind: Is a chatbot considered to be an advertisement, as defined in ISO’s General Liability policy? This is an important question, as ISO’s “Personal and Advertising Injury” definition includes:
“Infringing upon another’s copyright, trade dress or slogan in your ‘advertisement.’”
Here is the definition of “Advertisement” in the CGL:
“‘Advertisement’ means a notice that is broadcast or published to the general public or specific market segments about your goods, products or services for the purpose of attracting customers or supporters. For the purposes of this definition:
“a. Notices that are published include material placed on the Internet or on similar electronic means of communication; and
“b. Regarding web sites, only that part of a web site that is about your goods, products or services for the purposes of attracting customers or supporters is considered an advertisement.”
Could chatbots be viewed by some courts as the part of a website that attracts customers or supporters? Given the uncertainty, it is important to consider the possibility that a chatbot qualifies as an advertisement, which could open the door to an increase in infringement claims.
The risk posed by any emerging technology is only as significant as its use cases. While it is easy to get overwhelmed by discussion of AI, it is important for risk management professionals to focus on how clients and insureds are using the technology and build out risk and insurance programs accordingly.