The Federal Rules of Evidence have evolved over the past 20 years to address changes in technology. The focus has shifted from physical, paper-based evidence to digital evidence and electronically stored information. However, the broad language of the rules does not adequately address the rise of AI-generated content — particularly as such content becomes increasingly difficult for humans to detect.

The Judicial Conference Advisory Committee on Evidence Rules has proposed changes to Rule 901, which governs the requirements for authenticating evidence before it can be admitted in federal court. Proposed Rule 901(c) would specifically address the problem of AI-generated deepfakes.

The Committee also proposed a new Rule 707 that would apply the standards of Rule 702, which governs the admission of expert witness testimony, to AI-generated evidence. It would apply when AI is used to generate output to support or refute claims, establish liability, or calculate damages.

These changes are currently going through the rulemaking process and, if approved, would take effect on December 1, 2027. In the meantime, counsel should carefully consider the existing rules when dealing with AI-generated evidence and be prepared to meet new evidentiary standards.

What Makes Evidence Admissible?

Federal Rules of Evidence 401, 402, and 403 provide a framework for determining whether evidence is admissible. Generally, any evidence that is relevant to the case is admissible unless the Constitution, a federal statute, or other rules provide otherwise. Evidence is relevant if it tends to make a fact of consequence more or less probable. This is a low bar. The fact does not have to be in dispute, but it must have some bearing on the determination of the action.

Under Rule 403, the court may exclude relevant evidence if its probative value is substantially outweighed by a danger of unfair prejudice or confusing or misleading the jury. Additionally, the court may exclude evidence that would waste time, create undue delays, or needlessly present cumulative evidence. This rule acts as what is sometimes called a legal relevance filter, ensuring that evidence does not hamper the fairness or efficiency of the trial even if it is logically relevant.

Rule 901 acts as a hurdle that must be cleared before documents or other tangible evidence can be evaluated for relevance under Rules 401, 402, and 403. It requires that evidence be authenticated to establish that it is genuine and what the proponent claims it to be. The proponent bears the burden to produce evidence to support a finding of authenticity. Rule 901(b) lists 10 nonexclusive ways a proponent can meet this requirement.

The Challenge of AI-Generated Deepfakes

The question before the Committee is whether these rules adequately address the admissibility of deepfakes. Deepfakes are media — images, audio, or video — that have been manipulated or generated by AI to make someone appear to say or do something they never did. Deep learning technology makes it possible to replace faces, edit expressions, and synthesize voices with a high degree of realism. Deepfakes might easily meet the bar of relevance, but authenticating them poses new challenges.

In a 2025 California case, a judge dismissed a lawsuit after discovering that the plaintiffs submitted AI-generated video testimony. The judge noticed glitches such as unnatural eye blinking and looping video clips.

Parties are also using the “deepfake defense” to claim authentic evidence is fake. In 2023, Tesla’s lawyers argued that a 2016 video of Elon Musk making safety claims about Autopilot could be a deepfake. The judge rejected this argument, calling it “deeply troubling” because it would allow public figures to avoid responsibility for any public statement.

The Committee is considering a two-step burden-shifting process for evidence suspected of being altered or fabricated by AI. The opponent must provide evidence sufficient to support a finding that the item is a deepfake. If that challenge is met, the proponent must then prove the evidence is “more likely than not” authentic, a higher standard than the traditional “sufficient to support a finding.”

Applying Expert Witness Standards to AI

The Federal Rules of Evidence include special requirements for the admissibility of technical or scientific testimony. Such testimony must not only be relevant but also reliable and based on solid methods rather than junk science or speculation. The court acts as a gatekeeper to prevent juries from being misled by unqualified witnesses or unsound, overly persuasive opinions.

Rule 702 governs the admissibility of expert witness testimony, requiring that experts be qualified by knowledge, skill, experience, training, or education. The proponent must prove by a preponderance of the evidence that the witness possesses specialized knowledge that will help the trier of fact. The testimony must be based on sufficient facts or data and result from reliable principles and methods that the expert applied to the facts of the case.

Proposed Rule 707 would apply these standards to AI. When AI-generated evidence is offered without an expert, it must still satisfy the reliability requirements of Rule 702. The proponent must show that the output is based on sufficient facts or data, the product of reliable principles and methods, and the result of reliably applying those principles and methods to the facts.

The rule’s primary purpose is to prevent parties from bypassing the rigorous reliability standards required for expert testimony by offering complex machine outputs without a human expert witness. It also reinforces the judge’s role as a gatekeeper to ensure that a jury does not accept opaque, potentially biased AI outputs.

Status of the Rulemaking Process

The Committee has drafted but not yet formally approved an amendment to Rule 901 to address deepfakes and AI-altered media. As of late 2025, the Committee indicated that existing rules can technically handle deepfakes, but it wants a formal proposal ready in case of a sudden surge in problematic cases. It also acknowledged that the long timeline of the rulemaking process could justify approval of proposed Rule 901(c) in spring 2026.

Proposed Rule 707 was open for public comment through February 16, 2026. A final vote by the Committee is scheduled for May 7, 2026. If the Committee votes to approve the change, it must still be endorsed by the Judicial Conference, approved by the Supreme Court, and submitted to Congress. Under this schedule, the rule would take effect on December 1, 2027, at the earliest.

While these changes go through the rulemaking process, attorneys should be aware of the challenges regarding AI-generated evidence. AI technology is evolving rapidly and is often based on proprietary methods that provide no explanation for how they arrive at a specific output. This lack of transparency can conceal bias or faulty reasoning. Many courts are subjecting AI to increased scrutiny, and attorneys should be prepared to validate outputs.

If Rule 707 is adopted, its impact will be immediate. In the short term, courts will likely take varying approaches to applying expert witness standards to AI, and new issues will inevitably emerge that the rule does not address. Attorneys should follow these developments and carefully weigh the value of AI-generated evidence against the standards for reliability.

Prepare for the Legal Ramifications of AI With Purdue Global Law School

Evidence is a complex subject that requires a rigorous legal education. For those interested in pursuing a Juris Doctor (JD), Purdue Global Law School offers an online JD program whose graduates are academically eligible to sit for the California or Connecticut bar or, with an approved petition, the Indiana bar.

For those not interested in becoming a practicing attorney, Purdue Global Law School also offers an online Executive Juris Doctor (EJD) program that includes courses in specific areas of the law that help students better understand legal issues related to their business or profession. Single course offerings on various legal topics are also available.

Request more information today to find the path that works for you.

About the Author

Purdue Global Law School

Established in 1998, Purdue Global Law School (formerly Concord Law School) is Purdue University's fully online law school for working adults.