California Bar Proposes Strict Rule Amendments for AI Integration in Legal Practice
The State Bar of California recently initiated a 45-day public comment period regarding proposed modifications to the Rules of Professional Conduct. The State Bar Standing Committee on Professional Responsibility and Conduct (COPRAC), which bears the responsibility of addressing legal ethics and helping California lawyers comprehend their duties, developed these regulatory changes (Proposed Amendments, Background).
Legal practitioners increasingly utilize generative artificial intelligence applications, such as ChatGPT, Claude, and Perplexity, which “create text, images, or other content in response to user prompts.”
Attorneys deploy these tools for brainstorming, research, drafting, and summarizing complex information. While existing regulations already govern the use of emerging technologies, COPRAC determined that clarifying amendments were necessary: the risks of these tools may be apparent, but attorneys' corresponding duties are not always clear.
The committee cited the rapid proliferation of artificial intelligence and “continued examples of fake or ‘hallucinated’ content, including outdated, incomplete, or nonexistent legal authorities appearing in documents filed with the court.” Maintaining privilege and confidentiality is also a central concern.
The integration of both generative and agentic systems offers helpful streamlining capabilities, but attorneys hold a strict obligation to use them consistently with their ethical duties. COPRAC approved the proposed amendments on March 13, 2026, establishing a public comment deadline of May 4, 2026.
This action follows an August 22, 2025, letter from the California Supreme Court. The Court directed the State Bar to consider incorporating principles from its 2023 Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law into the formal ethical rules. Furthermore, the Court instructed COPRAC to address “agentic artificial intelligence tools, which can enable systems to autonomously perform tasks or workflows without human prompting.”
Alongside the proposed formal rule changes, COPRAC proposed updates to the 2023 Practical Guidance, which the Board of Trustees will review at its May 2026 meeting; the revised guidance has not yet been made public.
Rule 1.1: Competence and Independent Verification
Competence establishes the baseline requirement for legal representation. The current Rule 1.1 dictates the basic duties of a lawyer regarding skill and preparation. The new proposal incorporates artificial intelligence as a specific example of relevant technology within Comment [1].
A newly drafted Comment [2] mandates that a lawyer “must independently review, verify, and exercise professional judgment regarding any output generated by the technology” (Discussion/proposal).
IP practitioners regularly analyze highly complex engineering documents, chemical structures, and software algorithms. Integrating generative or agentic systems into patent drafting introduces specific verification burdens. An attorney utilizing an autonomous system to, e.g., compare patent claims to a technical specification assumes total responsibility for the accuracy of that mapping.
The independent review requirement prohibits practitioners from relying blindly on machine-generated technical arguments during Office Action responses. Artificial intelligence systems frequently misinterpret subtle differences between an invention and a cited prior art reference.
IP professionals face a strict duty to confirm the technical accuracy and legal sufficiency of every argument presented to the courts, the USPTO, or any other tribunal.
Rule 1.4: Client Communication and Informed Consent
Client communication standards face distinct revisions under the proposal. The newly proposed Comment [5] to Rule 1.4 requires lawyers to initiate conversations with clients regarding technology use.
The disclosure duty triggers when the technology “presents a significant risk or materially affects the scope, cost, manner, or decision-making process of representation.”
In such instances, the lawyer must share “sufficient information regarding the use of technology to permit the client to make informed decisions regarding the representation.”
For IP counsel, this signifies a need to continuously audit exactly which AI-based tools are being deployed. The rule specifies that this communication duty persists throughout the representation: if the tools change, the lawyer must revisit the conversation with the client. A single clause in an engagement letter appears to be insufficient.
Lawyers must evaluate several factors, including “the novelty of the technology, risks associated with the use of the technology, scope of representation, and sophistication of the client.”
A solo startup inventor possesses a different sophistication level compared to a multinational technology corporation with a dedicated intellectual property department.
Attorneys representing varied clients must adjust their disclosure strategies appropriately. Law firms may need to draft specific addenda to their engagement letters detailing exactly which commercial platforms the firm utilizes for, e.g., patent searching or brief drafting.
Corporate clients might implement strict guidelines prohibiting certain software and tools, requiring outside counsel to adapt their workflows to maintain compliance.
Rule 1.6: Confidentiality and Data Exposure
Protecting client data remains an absolute priority, especially in intellectual property law. Unfiled patent applications contain highly sensitive trade secrets. Email conversations may contain confidential information. Infringement investigations often contain privileged legal strategies and/or work product. The risk of exposure is high.
The proposed addition of Comment [2] to Rule 1.6 explicitly defines the word “reveal” in the context of digital security. The proposal states that revealing includes “exposing confidential information to technological systems, including AI tools where such exposure creates a material risk that the information may be used in a manner inconsistent with the lawyer’s duty of confidentiality.”
Uploading an invention disclosure document into a public, unsecured web interface likely falls under this definition. The commercial model might retain the inputted text to train future iterations of the software. That retention potentially results in a public disclosure that legally destroys patentability worldwide.
It is important to note that under these rules, the “reveal” occurs at the initial exposure (e.g., typing, submitting, or uploading), not only when the data is leaked or viewed by an unauthorized human. A “material risk” of misuse is all that is now required.
Patent attorneys must meticulously evaluate the data retention policies, terms of service, and privacy agreements of any software vendor.
While the rules do not treat “enterprise” models as a true safe harbor, private enterprise systems that guarantee zero data retention for training purposes present a lower risk profile than free, consumer-facing applications.
The burden rests entirely on the legal practitioner to verify the technical architecture of the platform before inputting any client trade secrets.
Working with staff, contractors, and vendors presents similar issues and training must be conducted. For instance, sharing drafts with foreign associates via automated translation tools requires similar scrutiny regarding data privacy and cloud storage security.
Rule 3.3: Candor Toward the Tribunal and Hallucinations
The introduction of fabricated case law into official court records initiated much of the current judicial scrutiny surrounding automation in legal practice. The proposed Comment [3] to Rule 3.3 directly addresses the duty of candor.
The rule states that a lawyer possesses an “obligation to verify the accuracy and existence of cited authorities, including ensuring no cited authority is fabricated, misstated, or taken out of context, before submission to a tribunal.”
The text specifically encompasses “any cited authorities generated or assisted by artificial intelligence or other technological tools.”
Patent litigators draft extensive briefs involving complex claim construction arguments and invalidity contentions. Generative software platforms might produce plausible-sounding but entirely fictitious Patent Trial and Appeal Board decisions or Federal Circuit opinions.
Practitioners submitting briefs to federal courts, the International Trade Commission, or the Patent Trial and Appeal Board carry the absolute burden of reading and verifying every single cited case.
A citation generator might format a citation correctly but invent the volume and page numbers. The proposed rule makes it an ethical violation to submit such a document.
The independent verification of legal citations requires manual checks using traditional, verified legal databases.
Rules 5.1 & 5.3: Managerial Duties and Nonlawyer Supervision
Law firm partners and supervisory attorneys hold ethical responsibility for the actions of their subordinates. The amendment to Rule 5.1 Comment [1] states that “managerial lawyers must make reasonable efforts to establish internal policies and procedures governing the use of AI, in accordance with the Rules of Professional Conduct.”
Law firms cannot ignore the existence of these platforms or rely on unwritten practices. Firms must actively draft, distribute, and enforce formal written usage policies.
Rule 5.3 addresses the supervision of nonlawyer assistants. Paralegals, legal secretaries, and technical specialists frequently assist in preparing Information Disclosure Statements or formatting patent applications.
The proposed modification to Rule 5.3 dictates that a lawyer must provide “appropriate instruction and supervision concerning all ethical aspects of their employment, including the use of technology in the provision of legal services, such as artificial intelligence.”
If a paralegal uses an unapproved automated tool to summarize prior art or translate a foreign patent document, the supervising attorney bears the ethical responsibility. Law firms must implement training programs for all staff members, not merely licensed attorneys.
Big Changes & Takeaways
The “Reveal” Standard and Redefining Confidentiality
California explicitly defines merely exposing confidential information to a technological system as a potential ethical breach. Under the proposed Comment [2] to Rule 1.6, the definition of “reveal” expands to include “exposing confidential information to technological systems, including AI tools where such exposure creates a material risk that the information may be used in a manner inconsistent with the lawyer’s duty of confidentiality.”
The threshold for revealing confidential data has been lowered. For patent practitioners, uploading unfiled invention information to a public platform constitutes a breach, regardless of whether a human outside the firm ever reviews the prompt, files, or output.
The End of “One-and-Done” Client Consent
A standard, boilerplate clause in an initial engagement letter fails to satisfy the proposed requirements under Rule 1.4. The new Comment [5] clarifies that the duty to communicate exists “through the life of the representation based on the facts and circumstances, including the novelty of the technology, risks associated with the use of the technology, scope of representation, and sophistication of the client.”
If a firm introduces a materially new automated workflow mid-litigation, practitioners bear an affirmative duty to secure updated, informed consent from the client.
Banning “Autopilot” Legal Work
The California Supreme Court specifically directed the State Bar to address “agentic artificial intelligence tools, which can enable systems to autonomously perform tasks or workflows without human prompting.”
In response, the proposed Comment [2] to Rule 1.1 mandates that an attorney “must independently review, verify, and exercise professional judgment regarding any output generated by the technology.”
A licensed professional must maintain authority and verify all outputs before final submission.
Strict Supervision and Mandatory Human Training
Failing to train staff on the hazards of emerging technologies carries severe consequences for firm leadership.
Revisions to Rules 5.1 and 5.3 dictate that managerial lawyers “must make reasonable efforts to establish internal policies and procedures” and provide nonlawyer assistants with “appropriate instruction and supervision” regarding technology use.
If a paralegal generates a fictitious case citation using an unapproved tool, the supervising attorney(s) and partner(s) face direct ethical exposure for failing to train the staff adequately.
Ignorance is No Defense
Attorneys cannot claim a lack of technical knowledge regarding fabricated case law to escape liability. The proposed Comment [3] to Rule 3.3 imposes a strict “obligation to verify the accuracy and existence of cited authorities, including ensuring no cited authority is fabricated, misstated, or taken out of context.”
Incompetence regarding a platform’s tendency to hallucinate provides no shield against disciplinary action under Rule 1.1.
Conclusion
The proposed amendments by the State Bar Standing Committee on Professional Responsibility and Conduct reflect a concerted effort to codify ethical boundaries for emerging technologies. The legal profession faces a permanent shift in how daily tasks are executed.
The public comment period remains open until May 4, 2026. Interested parties can submit feedback through the online Public Comment Form, allowing practitioners to influence the final regulatory framework.
Further proposed changes to the existing Practical Guidance will go before the Board of Trustees at its May 2026 meeting.
Practitioners nationwide should closely monitor these California developments. California’s proposed framework sets a strict baseline regarding the independent verification of agentic systems and the continuing requirement for informed client consent.
Other major legal jurisdictions frequently observe California’s regulatory approach to emerging technology.
Ethics committees and state bars in New York, Illinois, New Jersey, and Pennsylvania—and potentially the USPTO—will likely study the outcome of this comment period and any approved amendments. Those states possess a high probability of adopting similar, rigorous standards governing autonomous legal workflows.
Legal professionals across the country should review these proposals carefully to prepare for future compliance requirements in their respective jurisdictions.
Disclaimer: This is provided for informational purposes only and does not constitute legal or financial advice. To the extent there are any opinions in this article, they are the author’s alone and do not represent the beliefs of his firm or clients. The strategies expressed are purely speculation based on publicly available information. The information expressed is subject to change at any time and should be checked for completeness, accuracy and current applicability. For advice, consult a suitably licensed attorney and/or patent professional.



