Duke Law Prof. Proposes 'Reliability' to Solve Patent Law’s AI Dilemma
AI-assisted disclosures challenge patent policy
A recent academic paper by Duke Law School Professor Arti K. Rai argues that the rise of artificial intelligence in research and development presents a pivotal moment for the patent system. The paper, “The Reliability Response to Patent Law’s AI Challenges” (June 2025), suggests that instead of creating bespoke rules for AI, the solution lies in reinforcing a long-standing but often downplayed principle: scientific reliability.
The core problem addressed is the potential for AI to generate a deluge of speculative ideas, threatening to clog the patent office and the public domain with low-quality information that could undermine incentives for genuine, rigorous innovation.
The paper posits that by strengthening reliability requirements across patent doctrine, the system can adapt to AI, preserve the essential role of human inventors, and continue to foster meaningful technological progress.
As Prof. Rai states, “Pervasive AI use adds newfound importance to longstanding debates over patent timing and reliability” (p. 1).
The Problem: The Specter of Speculative AI-Generated Inventions
The paper frames the central challenge AI poses not as one of machines replacing humans, but as one of quantity overwhelming quality: “As a general purpose technology, AI use has the capacity to rapidly and cheaply generate inchoate ideas across all industries” (p. 4). This capability creates two significant risks for the intellectual property landscape.
First, it could encourage “counterproductive racing to the patent office,” with entities filing patents on speculative, AI-generated concepts that have not been properly vetted or developed (p. 4).
Second, it could lead to these same speculative ideas being dumped into the public domain, creating a minefield of prior art that blocks patents for those who do the hard work of turning an idea into a viable invention.
Prof. Rai’s perspective on this challenge is clearly articulated:
Absent vigilance on the question, counterproductive racing to the patent office is likely to increase. Even if inchoate ideas are simply put into the public domain, the lax standards for what constitutes prior art may defeat the possibility of patent incentives for careful researchers. The net result will be diminished innovation (p. 4).
This concern is credible because it is grounded not in futuristic speculation about AI, but in the pragmatic realities of patent prosecution and litigation.
For inventors and patent attorneys, a landscape cluttered with unreliable prior art makes clearance searches more difficult and patentability assessments less certain. It raises the risk that a significant investment in R&D could be thwarted by a cheaply generated, purely theoretical disclosure.
Proposed Solution: Weaving Reliability Through Patent Doctrine
The paper argues against a radical overhaul of patent law and instead advocates for fortifying existing doctrines around the principle of reliability. This approach would raise the bar for both obtaining a patent and for using a disclosure to invalidate one.
1. Fortifying the Utility Requirement
The paper points out that the patent system’s existing “utility” doctrine is a low bar that is ill-equipped to handle the scale of AI-generated ideas. Currently, the doctrine “generally fails to require that patent applicants demonstrate more than a highly cursory level of credibility for their claims about their invention” (p. 4).
To address this, the proposed solution is to elevate the standard. Patent applicants would need to “make the case, whether through analytical reasoning, rigorous testing of the end product, disclosure of sound discovery methods, or a combination of the above that the ordinary scientist or technologist in the field would find their invention reliable” (p. 25). In practice, this means an inventor could no longer simply hypothesize a use; they would need to provide concrete evidence of the invention’s scientific soundness.
This push for reliability is consistent with other recent trends in patent law, particularly regarding the enablement doctrine. As the paper highlights, the Supreme Court’s decision in Amgen v. Sanofi used the enablement requirement—which demands that a patent teach how to make and use the full scope of the invention—to invalidate broad claims that amounted to little more than “research assignments” (p. 13).
2. Strengthening Prior Art Standards
The problem of speculative ideas also applies to what can be used to defeat a patent. The paper argues for symmetrical standards for patents and prior art.
Just as the scientific standards for patents should be raised, so should the standards for what constitutes prior art. … One straightforward change would require that a disclosure should not count as prior art unless the disclosure identifies a specific, scientifically plausible use for the scientific or technical item disclosed (p. 27).
This change would prevent AI-generated “laundry lists” of chemical compounds or other theoretical concepts from automatically invalidating a later patent filed by a researcher who actually made and tested one of those compounds. For a document to serve as prior art, it would need to meet a baseline of scientific reliability itself.
3. Reforming Inventorship and Obviousness with a “Safe Harbor”
Concerns that using AI could jeopardize a patent by negating human inventorship or making an invention seem “obvious” have led many applicants to avoid disclosing its use. The paper proposes a solution that turns this dynamic on its head. It suggests that disclosing high-quality, human-guided AI use could actually protect a patent.
The author proposes that applicants who disclose their rigorous use of AI could be “provided safe harbors against inventorship and non-obviousness challenges” (p. 5).
This framework reframes the human contribution away from the initial “flash of genius” and toward the skill involved in constructing prompts, designing and training specialized AI models, and conducting experiments to validate the AI’s output.
This provides a path for inventors to be transparent about their methods in exchange for greater certainty about their patent’s validity.
Examples: The Case of AI in Drug Discovery
The paper uses the highly patent-sensitive field of drug discovery and development (DDD) to provide an empirical illustration of the current problems. Through an original analysis of patents from “AI-native” drug discovery firms, the research uncovers a striking lack of transparency. This omission is not an accident but likely a strategic choice driven by fear that disclosing AI use could jeopardize the patent.
As I demonstrate using an original dataset of drug patents likely derived through the use of AI, these patents say virtually nothing about how AI was used (p. 5).
The paper’s data reveals the extent of this issue. A review of 135 patents on biological outputs (like molecules) from these firms found that “only 4 mentioned AI use in any aspect of the disclosure” (p. 23).
This is happening because “patent applicants may be concerned that disclosure could jeopardize inventorship and non-obviousness” (p. 5). However, this strategy creates a long-term risk. A savvy defendant in an infringement lawsuit is “likely to raise issues of improper inventorship and obviousness” by using the patent owner’s own public marketing materials about its sophisticated AI platforms against it (p. 24).
The current state of affairs encourages secrecy, which hinders scientific evaluation and leaves patents vulnerable to future challenges.
Closing Thoughts
The research presented in “The Reliability Response to Patent Law’s AI Challenges” offers a pragmatic and insightful path forward. It argues that the patent system can accommodate profound technological change not by creating complex, technology-specific rules, but by returning to its first principles.
By elevating the importance of scientific reliability, the system can filter out speculative noise, reward careful and rigorous research, and preserve a central role for human ingenuity in the age of AI.
The paper makes a compelling case that proper use and disclosure of AI can and should bolster the reliability of patents. The human input required for such high-quality use “could provide a powerful defense against the challenges to human inventorship and non-obviousness that AI use raises” (p. 33).
As AI tools become more integrated into the innovation lifecycle, ensuring the patent system incentivizes verifiable progress over mere speculation will be essential to its continued relevance and integrity.
Full Citation: Rai, Arti K. (2025). The Reliability Response to Patent Law’s AI Challenges. Preprint research paper. Available at SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5247266.
Disclaimer: This is provided for informational purposes only and does not constitute legal or financial advice. To the extent there are any opinions in this article, they are the author’s alone and do not represent the beliefs of his firm or clients. The strategies expressed are purely speculation based on publicly available information. The information expressed is subject to change at any time and should be checked for completeness, accuracy and current applicability. For advice, consult a suitably licensed attorney and/or patent professional.