Odds are, you’ve caught yourself doing it: you see something so outrageous online that it couldn’t possibly be true (could it?), and your gut reaction is to brush it off as a product of artificial intelligence (AI). Heightened skepticism has become the norm for many in recent years as AI’s capabilities continue to evolve. Shrewd litigants have begun attempting to weaponize this communal skepticism by calling into question the authenticity of evidence offered in courtroom proceedings, claiming an opponent’s legitimate exhibit was actually generated or altered by AI. In this sense, AI’s mere existence has become a tool to attack the credibility of evidence—regardless of whether it was actually used to prepare said evidence—simply by virtue of public doubt as to what is “real” or “reliable” today.
A Deepfake Case
Taking it a step further, cases are beginning to emerge in which litigants employ AI to actually create evidence that supports their position. The most alarming example to date occurred in a September 2025 California case, Mendones v. Cushman & Wakefield, Inc., in which pro se litigants repurposed an authentic video of a witness to generate a new, "deepfake" version, which they then presented to the court as legitimate evidence. Fortunately, the judge in the case was able to discern the fabrication: the "witness" in the video spoke in a monotone, repeated the same facial expressions as if stuck on a loop, and appeared in footage of generally poor audiovisual quality.
Once the exhibit was debunked, the judge dismissed the case. However, the media coverage and legal commentary that followed suggested this may be only the beginning of reliability issues in evidentiary determinations: AI continues to evolve at a rapid pace, and a deepfake video that is easily spotted today may be much harder to distinguish from its legitimate counterpart tomorrow.
In reality, our court systems do not yet have dedicated frameworks for assessing the admissibility of (potentially) AI-generated evidence; instead, courts must rely on the general rules of authenticity that have long applied to traditional documentary evidence. However, as evidence that has historically been reliable becomes easier to imitate with AI, the existing evidentiary rules will become increasingly insufficient to safeguard litigants and courts from falsified evidence.
Recent Action
In May 2025, the United States Judicial Conference’s Advisory Committee on Evidence Rules discussed revisions to address AI-related problems from two angles: (1) changing authentication rules to combat the introduction of “deepfakes,” and (2) handling machine-generated evidence in the absence of an expert witness proffered to explain the machine’s functioning. The initial sentiment regarding deepfakes was that a change was not necessary because the federal courts were not yet seeing much deepfake evidence. However, if such AI-generated evidence continues to find its way into courtrooms, federal and state evidentiary rules may soon be forced to adapt accordingly.
Authenticity issues aside, AI can still serve practical functions in courtroom advocacy: it may augment the impact of demonstrative evidence by distilling complex issues into a format a jury can more easily comprehend, without inviting challenges to the exhibit’s legitimacy.
This article originally appeared on Freeman Mathis & Gary, LLP. www.fmglaw.com
About the Authors:
Stacy Breaud is a partner at Freeman Mathis & Gary, LLP. stacy.breaud@fmglaw.com
Alex Norton is an associate at Freeman Mathis & Gary, LLP. alex.norton@fmglaw.com