Hidden Risks in AI-Based Patent Analysis
How soon before AI can infer client strategies based on attorney search patterns?
The legal landscape is evolving rapidly with the advent of artificial intelligence, and, even though attorneys are generally conservative in adopting new tools, nowhere is this more apparent than in intellectual property law. While attorneys exercise heightened caution when handling confidential information, IP practitioners should be aware that even working with public information can risk revealing client strategy and attorney thought processes.
How soon before an AI model is able to infer a trial strategy based on nonvisible patterns from an attorney’s Google searches?
AI tools offer unprecedented capabilities for analyzing vast amounts of data, including complex legal documents like granted patents and published patent applications. However, for practitioners venturing into this new frontier, it's crucial to understand the potential ethical pitfalls involved, particularly concerning confidentiality, privilege, and the subtle revelation of legal strategy.
In deciding whether to use AI in patent analysis, it helps to consider the risks across a spectrum of AI tool types and the gradient from low to high risk of revealing confidential information. Note that the risk of prematurely revealing invention information was addressed previously; this post focuses on the seemingly innocuous practice of analyzing publicly available materials, which might actually reveal attorney and/or client strategy.
The Allure of AI in Patent Analysis
Patents are inherently complex legal documents, often hundreds of pages long, with intricate claims, detailed specifications, and references to extensive prior art. Manual analysis is time-consuming, prone to human error, and can miss subtle connections. Let’s be honest—some specifications have rushed translations, are poorly organized, or find other ways to hide the ball!
AI, with its ability to rapidly process and identify patterns, promises to enhance efficiency, improve accuracy, and uncover insights that might otherwise remain hidden. From identifying key claims to cross-referencing cited art to flagging potential infringement, AI appears to be a transformative tool. Whether it’s a single patent or a portfolio, AI tools are poised to be a patent professional’s new best friend.
Ethical Foundations: Privilege, Confidentiality, and Work Product
Before jumping into AI's specific risks, it’s vital to revisit some core ethical obligations of an attorney:
Attorney-Client Privilege: Protects confidential communications between a client and their attorney made for the purpose of seeking or providing legal advice. Revealing such a communication to a third party could waive privilege.
Duty of Confidentiality: A duty of confidentiality prevents an attorney from revealing any information relating to the representation of a client, regardless of its source. Courts routinely uphold this with or without a separate NDA.
Work Product Doctrine: Protects materials prepared by an attorney in anticipation of litigation, shielding their mental impressions, theories, and strategies. Revelation of some work product could give away your courtroom playbook and/or potentially open the door to other sensitive documents at trial.
The central question when using AI is whether the input of information into an AI system, or the AI's processing of that information, breaches these protections and/or duties.
The Spectrum of AI Tools: Public, Enterprise, and Local
Not all AI tools are created equal, and the risks vary significantly based on the platform used:
Publicly Available AI (e.g., Gemini, ChatGPT): These are general-purpose Large Language Models (LLMs) accessible to anyone. They are typically cloud-based, and their terms of service often allow for user inputs to be retained and used for model training.
Risk Profile: Highest. The primary concern is the lack of confidentiality. Inputting any information, even seemingly innocuous public data, into these tools risks it becoming part of the training data or being accessible to the AI provider. This creates a significant risk for unintended disclosure, especially if the prompt contains any confidential or strategically revealing information.
Enterprise-Level AI Solutions (e.g., Thomson Reuters' CoCounsel, LexisNexis' Lexis+ AI): These are specialized AI platforms designed for legal professionals. They operate under strict data privacy agreements, often featuring "no data retention" policies for client inputs, and are typically hosted on secure, segregated servers.
Risk Profile: Moderate to Low. These tools are built with attorney ethical obligations in mind. Their terms of service and technical architecture are typically designed to prevent the disclosure or misuse of client data. However, attorneys must still diligently review the specific terms and conditions of each provider to ensure they align with ethical duties. There's always a residual risk tied to cloud infrastructure and potential (though highly unlikely with reputable providers) data breaches.
Locally Hosted/On-Premise AI Models: These are AI models that run entirely on a firm's or client's private servers, without transmitting any data to external cloud providers.
Risk Profile: Lowest. If set up properly, no data leaves the controlled environment, so the risk of external disclosure is minimized. This offers the highest level of confidentiality. The main challenges are the significant computational resources required to host and run such models and the need for in-house technical expertise.
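As a minimal sketch of the key property of an on-premise setup, the following assumes a hypothetical locally hosted model exposed through an Ollama-style HTTP endpoint on localhost; the endpoint URL, model name, and helper functions are illustrative assumptions, not a specific product's API. The point is that the target host can be verified as local before any prompt leaves the process:

```python
# Illustrative sketch only: assumes a hypothetical local LLM server
# (e.g., an Ollama-style endpoint) running inside the firm's network.
import json
from urllib.parse import urlparse

LOCAL_ENDPOINT = "http://localhost:11434/api/generate"  # hypothetical local server


def build_local_request(prompt: str, model: str = "llama3") -> dict:
    """Package a prompt for a locally hosted model; nothing is sent off-site here."""
    return {
        "url": LOCAL_ENDPOINT,
        "payload": json.dumps({"model": model, "prompt": prompt, "stream": False}),
    }


def stays_on_premises(request: dict) -> bool:
    """Sanity-check that the target host is local before any data leaves the machine."""
    host = urlparse(request["url"]).hostname
    return host in ("localhost", "127.0.0.1")


req = build_local_request("Summarize the independent claims of this public patent.")
assert stays_on_premises(req)
```

A firm could wire such a guard into any tooling that dispatches prompts, so that a misconfigured endpoint pointing at an external cloud service fails loudly instead of silently exporting data.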
The Gradient of Risk: From Summaries to Strategic Revelation
The risk isn't just about the tool; it's also about what you ask the AI to do and how specific your queries are.
Hypothetically, these usage patterns create a gradient of risk for inadvertently revealing strategy when using an AI tool:
Lower Risk: Summarizing Public Documents:
Action: Asking an AI (e.g., a public LLM) to summarize the key claims of a publicly available patent or publication, identify independent claims, or provide a general overview of a patent's scope.
Why it's likely a Lower Risk: The information is already public. Such a query is generic and likely doesn't reveal any specific attorney thought process or client-specific concerns. No confidential information is input. While there may be some risk of revealing strategy depending on the context (e.g., reviewing a portfolio for acquisition), summarizing public documents using an everyday AI likely carries low risk.
Low-to-Moderate Risk: Analyzing Common Legal Ideas/Concepts:
Action: Inquiring about general legal principles, such as "Explain the doctrine of equivalents," "What are typical arguments for patent invalidity," or "Describe common strategies to avoid patent infringement."
Why it's likely Low-to-Moderate Risk: The practitioner is seeking general knowledge that the AI can provide from its public training data. No client information is revealed. The risk increases slightly if a pattern of general inquiries, over time, could hint at a more specific area of strategic interest (e.g., repeatedly asking about obviousness defenses), but this is usually too broad to be problematic for specific cases and/or clients. However, if an AI is able to identify patterns that humans might not see…
Moderate-to-High Risk: Focused Analysis of Public Documents:
Action: This is where the line may begin to blur. Asking an AI to "Analyze U.S. Patent 123,456,789 for potential prior art challenges based on the attached public document X and article Y" or "Compare the claims of the ‘789 patent to competitor's public product Z for potential literal infringement."
Why it's likely Moderate-to-High Risk: While the patent, prior art, and competitor's product are public, the specificity of the focus and the particular combination of documents the attorney asks the AI to analyze can subtly reveal the firm's or client's strategic thinking. The inquiry isn't just about the public document; it's about an angle on that document. If a prompt history with a public AI were ever compromised or accessible, it could reveal investigative avenues, areas of concern, or even theories of the case. A client could reasonably be concerned if such focused inquiries, even involving only public data, were input into a tool with unclear confidentiality terms.
High Risk: Inputting Actual Strategy or Confidential Information:
Action (and a clear no-go for public AIs): Providing confidential client information, internal work product, or explicit strategic decisions to the AI. Examples include, "Our client’s defense strategy for patent 123,456,789 relies on demonstrating non-enablement, given our internal testing results. What are the weaknesses of this approach?" or "Based on our confidential market analysis, identify potential infringement targets for our patent X."
Why it's likely High Risk: It should be clear that this is likely a breach of confidentiality and potentially a waiver of privilege and work-product protection. The practitioner is explicitly sharing sensitive information with a third party (the AI provider) that is not bound by attorney-client privilege. The information could be retained, used for training, or exposed through a data breach. Most attorneys would consider this an absolute ethical red line for public AI tools.
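To make the "pattern of inquiries" concern from the middle of this gradient concrete, here is a toy sketch with an invented prompt log (every entry is hypothetical). Even a naive keyword tally over a history of queries about purely public documents surfaces the recurring subject of interest:

```python
# Toy example only: the prompt log is invented; real logs and real
# pattern analysis would be far richer than a word count.
import re
from collections import Counter

prompt_log = [
    "Summarize the independent claims of the '789 patent.",
    "Compare the '789 patent claims to product Z for literal infringement.",
    "List prior art cited against the '789 patent during prosecution.",
    "What are common non-infringement arguments for software patents?",
]

# Tally recurring terms (skipping short words) across the whole history.
words = re.findall(r"[a-z']+", " ".join(prompt_log).lower())
focus = Counter(w for w in words if len(w) > 4)

# Even this naive tally reveals what the attorney keeps circling back to.
print(focus.most_common(3))
```

If a simple word count exposes the focal patent and the infringement angle, a modern LLM analyzing the same log could plausibly infer far more, which is exactly why stored prompt histories deserve the same care as any other work product.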
AI's "Dot-Connecting" Power: The Future of Risk
Beyond just the direct input of strategy, a more subtle risk emerges from AI's advanced analytical capabilities. Modern LLMs are adept at identifying hidden connections and patterns within text.
An attorney might input several seemingly innocuous public documents, perhaps from different cases or related but separate matters. However, an advanced AI could potentially “connect the dots” between these discrete public inputs, identifying a pattern that reveals a broader strategic approach employed by a particular firm or by the attorney themselves across multiple clients.
Furthermore, as AI capabilities evolve, the risk of "de-anonymization" increases. While an attorney might attempt to generalize or anonymize prompts (e.g., "a company in the ‘internet search’ sector" instead of "Google"), an advanced AI, given enough context and data, might be able to infer identifying information about a client or case by cross-referencing with other public data.
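As a toy illustration of this re-identification risk, consider cross-referencing generalized clues against public facts. The mini "public dataset" below is invented for the example, but the mechanics mirror how leaked details accumulate across prompts until only one candidate remains:

```python
# Toy example only: invented mini-dataset; real de-anonymization draws on
# vastly richer public signals (filings, news, litigation history, etc.).
public_companies = {
    "Google":     {"sector": "internet search", "hq": "Mountain View"},
    "DuckDuckGo": {"sector": "internet search", "hq": "New York"},
    "Pfizer":     {"sector": "pharmaceuticals", "hq": "New York"},
}


def candidates(**clues) -> list:
    """Return companies consistent with every clue leaked across prompts."""
    return [
        name for name, facts in public_companies.items()
        if all(facts.get(key) == value for key, value in clues.items())
    ]


# One vague clue leaves ambiguity...
print(candidates(sector="internet search"))
# ...but a second clue, leaked in a later prompt, narrows it to one company.
print(candidates(sector="internet search", hq="Mountain View"))
```

Each individual prompt may seem safely generalized, yet the intersection of clues across a conversation, or across a retained prompt history, can uniquely identify the client.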
This predictive capability of AI to infer relationships or identities, even from ostensibly "public" or "anonymized" data, is a burgeoning area of concern and will be the subject of future conversations here and elsewhere.
Attorneys, clients, and the public cannot assume service providers (or lawmakers) will willingly step in to save their personal identifying information from the robots’ quasi-omniscient inspection.
Conclusion
The integration of AI into legal practice is inevitable and holds immense promise for improving efficiency and analysis in patent law. Attorneys, however, must approach this technology with a robust understanding of their ethical obligations.
The choice of AI tool (public vs. enterprise vs. private) can be vital, as is the specificity of the information shared with the AI.
While summarizing public documents with AI likely poses only minimal ethical risk, providing strategic insights or confidential client information to public AI platforms is a perilous path that can lead to breaches of privilege, confidentiality, and client trust.
Transparency with clients, diligent vetting of AI vendors, and maintaining rigorous human oversight are paramount. As AI capabilities grow more sophisticated, so too must the attorney's vigilance in protecting the sacred trust of client confidentiality and the integrity of legal strategy.
Disclaimer: This is provided for informational purposes only and does not constitute legal or financial advice. To the extent there are any opinions in this article, they are the author’s alone and do not represent the beliefs of his firm or clients. The strategies expressed are purely speculation based on publicly available information. The information expressed is subject to change at any time and should be checked for completeness, accuracy and current applicability. For advice, consult a suitably licensed attorney and/or patent professional.