February 19, 2026

Deepfakes and the Shifting Burden of Scrutiny for Litigators

The increasing sophistication of AI has made it easy to manipulate evidence, and taking steps to triage evidence with procedural diligence and technological support has become the litigator's burden, writes Phil Beckett, the managing director and practice leader of disputes and investigations for the U.K. at Alvarez & Marsal.

The emergence of generative AI tools, including large language models (LLMs), that can create deepfakes and other falsified material poses a serious threat to the integrity of evidence in both civil and criminal trials. What once seemed a distant concern has become a pressing reality. Two questions now dominate the discussion: are existing evidentiary rules fit for purpose in the face of this technological shift, and how can litigators manage the growing responsibility already being placed upon them?

The Challenge

In a recent case in the U.S., plaintiffs sought summary judgment and, in support of their request, provided an assortment of videos, photographs, and screenshots of digital communications. However, after examining the evidence, the court found that several of the exhibits had been fabricated or manipulated using AI technology, leading to the dismissal of the case. While the plaintiffs represented themselves, the growing presence of AI-generated evidence in court should concern every litigator.

AI-generated deepfakes and other manipulations of evidence present a novel challenge for litigators: identifying and managing falsified evidence when reviewing thousands of documents. Evidence often comes in the form of PDFs, scanned images, or other non-native formats uploaded into document review platforms, making it harder to apply automated detection methods. 

For lawyers, prevention is therefore largely procedural: obtaining documents in their native form, in a forensically sound manner, wherever possible; securing the chain of custody; and ensuring that all parties adhere to evidential standards. Yet the real challenge is navigating the vast middle ground of litigation: the review of thousands of files, many of which may hide subtle falsifications.

The Litigator’s Burden 

While the burden of proof falls on the party relying on a piece of evidence, the litigator’s burden is increasing as faking becomes exponentially easier. Gut instinct is not enough in an era of hyper-realistic fakes. Litigators must triage evidence with procedural diligence and technological support, and escalate concerns to experts early on. 

The key issue is the need to challenge: spotting suspicious documents in the first place, then documenting and justifying those suspicions before demanding corroborative evidence, which may require expert analysis or access to native files. With growing expectations that litigators proactively verify the provenance of documents, they will need to engage forensic experts promptly for assistance with metadata analysis and the detection of patterns and stylistic anomalies.

When to Be Suspicious 

Look for documents that contradict established facts, contain details inconsistent with the broader narrative, appear to lack context—such as emails or messages with no preceding thread—or display inconsistencies in style or tone. Once a document raises concerns, there are immediate steps to take before escalating to full forensic analysis. Prioritise emails and documents that lack context, as these are often the most likely to be falsified. Check for inconsistencies in dates, sender or recipient details, or unexpected attachments.

The use of secure internal generative AI tools can help triage large volumes of material—an LLM can be trained on a set of verified emails and then flag anomalies within suspect ones, providing an efficient first line of defence.
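
As a loose illustration of that triage idea, the sketch below substitutes a simple statistical outlier detector for the LLM-based approach the author describes: character n-gram TF-IDF features and a scikit-learn Isolation Forest fitted on verified emails, then scored against suspect ones. The function name and parameters are hypothetical.

```python
# Minimal sketch of automated email triage: a lightweight statistical
# stand-in for the LLM-based approach described above. Illustrative only.
from sklearn.ensemble import IsolationForest
from sklearn.feature_extraction.text import TfidfVectorizer

def rank_suspect_emails(verified_emails, suspect_emails):
    """Rank suspect emails by stylistic departure from a verified set."""
    vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5))
    model = IsolationForest(random_state=0)
    model.fit(vectorizer.fit_transform(verified_emails))
    scores = model.decision_function(vectorizer.transform(suspect_emails))
    # Lower scores mean a stronger stylistic departure from the verified set.
    return sorted(zip(suspect_emails, scores), key=lambda pair: pair[1])
```

Documents at the top of such a ranking are candidates for the corroboration and expert escalation steps described above, not findings of fabrication in themselves.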

Once native files are available, forensic experts can perform a deeper investigation to assess manipulation. This often begins with metadata analysis: reviewing timestamps, device information, and document origin.
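
As a simple illustration of that metadata step, the sketch below uses the pypdf library to pull the fields a reviewer might check first. The function name is hypothetical, and fields vary by document and are easily stripped, so an absence or oddity is a flag for escalation rather than proof of tampering.

```python
# Minimal sketch: extract the PDF metadata fields a reviewer checks first.
from pypdf import PdfReader

def pdf_metadata_report(path):
    meta = PdfReader(path).metadata or {}
    return {
        "creator": meta.get("/Creator"),      # authoring application
        "producer": meta.get("/Producer"),    # software that wrote the file
        "created": meta.get("/CreationDate"),
        "modified": meta.get("/ModDate"),     # earlier than created? escalate
    }
```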

In PDFs or images, forensic tools can reveal double compression artefacts that suggest a pasted element, or unusual font embedding that is inconsistent with the alleged source. For video and audio, waveform analysis or pixel-level irregularities can highlight synthetic content. Deepfake tools often leave distinctive statistical “fingerprints” that can be surfaced through advanced forensic analysis.
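
One widely used first-pass technique of this kind is error level analysis (ELA), which re-compresses an image and inspects where the compression error differs, since pasted regions often respond differently. The sketch below, using the Pillow imaging library, is a visual aid only, not proof of manipulation; the function name and quality setting are illustrative.

```python
# Minimal sketch: error level analysis (ELA) to surface regions whose JPEG
# compression behaviour differs from the rest of the image.
import io
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90):
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)  # re-compress once
    buf.seek(0)
    diff = ImageChops.difference(original, Image.open(buf))
    # Amplify the per-pixel error so edited regions stand out visually.
    max_diff = max(hi for _, hi in diff.getextrema()) or 1
    scale = 255 // max_diff
    return diff.point(lambda px: min(255, px * scale))

# error_level_analysis("exhibit.jpg").save("exhibit_ela.png")
```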

Pattern recognition across datasets can also be telling: for example, where emails display slightly different phrasing, headers, or routing paths compared to genuine correspondence.
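
As a rough illustration of routing-path comparison, the sketch below parses the Received headers of raw email messages with Python's standard email module and flags paths that appear only rarely across the corpus. The fingerprinting is deliberately coarse, and the function names and threshold are hypothetical.

```python
# Minimal sketch: compare Received routing paths across a corpus of raw
# emails and flag messages whose path deviates from genuine correspondence.
import email
from email import policy
from collections import Counter

def routing_path(raw_bytes):
    msg = email.message_from_bytes(raw_bytes, policy=policy.default)
    fingerprint = []
    for hop in msg.get_all("Received") or []:
        # Keep only the "by <host>" token of each hop as a coarse fingerprint.
        parts = hop.split(" by ", 1)
        if len(parts) == 2 and parts[1].split():
            fingerprint.append(parts[1].split()[0])
    return tuple(fingerprint)

def flag_unusual_paths(raw_messages, min_count=2):
    paths = [routing_path(m) for m in raw_messages]
    counts = Counter(paths)
    # Paths seen fewer than min_count times across the corpus are outliers.
    return [i for i, p in enumerate(paths) if counts[p] < min_count]
```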

It is often necessary to investigate the device or system suspected of creating the fake. Traces of GenAI usage include cached files, browser history showing prompts to AI tools, and application logs recording the generation of synthetic content. Equally, investigators should consider whether offline AI models have been downloaded, or whether scripts exist on the machine that interact with AI APIs. These artefacts can provide strong circumstantial evidence that the user had the capability and intent to generate synthetic documents.
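
By way of illustration only, the sketch below queries a copied Chrome History database, a standard SQLite file containing a urls table, for visits to well-known AI generation services. The domain list is illustrative and far from exhaustive, and a real investigation would work from a forensic image rather than a live profile.

```python
# Minimal sketch: search a copied Chrome "History" SQLite database for
# visits to AI generation tools. Domain list is illustrative only.
import sqlite3

AI_DOMAINS = ("chatgpt.com", "chat.openai.com", "gemini.google.com",
              "claude.ai", "midjourney.com", "elevenlabs.io")

def ai_tool_visits(history_db_path):
    conn = sqlite3.connect(history_db_path)
    try:
        rows = conn.execute(
            "SELECT url, title, last_visit_time FROM urls").fetchall()
    finally:
        conn.close()
    return [row for row in rows if any(d in row[0] for d in AI_DOMAINS)]
```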

Impact on Existing Evidentiary Rules

Existing evidentiary rules, such as the best evidence rule, chain-of-custody requirements, and authentication standards, already provide the necessary legal frameworks. Emerging professional guidance from the U.K.’s Law Society, the Bar Council, and the American Bar Association emphasises lawyers’ duty to verify and corroborate evidence, and to document the steps taken to satisfy their professional obligations of competence and honesty before the court.

Taken together, these developments are shifting the burden of proving authenticity upstream. Lawyers can no longer rely on the appearance of a document or on the traditional chain of custody. Instead, they must combine triage methods, AI-assisted detection, and forensic verification, and maintain clear documentation of any suspicions. This makes the middle stage of litigation the most challenging and critical step: reviewing thousands of documents, identifying which are impactful, and deciding which require further expert scrutiny.

In the recent U.S. case, the fakes were unsophisticated and easily detected, but deepfakes will only become more convincing. Litigators who wait until trial to challenge suspicious documents will be on the back foot. Proactive triage, AI-assisted review, and early engagement with forensic experts are essential, ensuring courts can continue to rely on the authenticity of evidence in an age of hyper-realistic fakes.

Phil Beckett is the managing director and practice leader of disputes and investigations U.K. at Alvarez & Marsal.

Reprinted with permission from the 11 February 2026 edition of Law.com International © 2026 ALM Global Properties, LLC. All rights reserved. Further duplication without permission is prohibited, contact 877-256-2472 or asset-and-logo-licensing@alm.com.
