Anticipated 2026 Revisions to California's AI Practical Guidance
Proposed RPCs necessitate additional guidance
The State Bar of California’s Standing Committee on Professional Responsibility and Conduct (COPRAC) issued the “Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law” in 2023.
This “Guidance” document functions as a set of guiding principles based on existing professional responsibility obligations. The California Supreme Court recently directed the State Bar to incorporate these principles into the formal rules and address autonomous agentic systems. The proposed rules are notably stricter.
The resulting proposed amendments to the Rules of Professional Conduct compel COPRAC to reevaluate and revise the 2023 Practical Guidance ahead of the May 2026 Board of Trustees meeting. Here is an informed projection of what is likely to change.
Scope Expansion: Integrating Agentic Technology
The 2023 AI Guidance focuses almost exclusively on generative applications. The anticipated revisions will expand this scope to cover agentic systems. Agentic models operate autonomously, executing multi-step workflows without continuous human prompting. This autonomy presents greater risk for those in the legal field.
COPRAC will likely update the introductory sections of the guidance to define agentic systems and outline their specific operational hazards compared to reactive generative tools.
Conflict 1: The New Disclosure Standard
2023 Practical Guidance: “The lawyer should consider disclosure to their client that they intend to use generative AI in the representation...” (Guidance, p. 4).
Proposed Rule 1.4 Amendment: The new Comment [5] states that a lawyer “must communicate sufficient information regarding the use of technology to permit the client to make informed decisions” (Proposed Amendments).
Analysis: The 2023 Guidance frames client communication as an optional consideration. The proposed amendment converts this into a strict mandate. Attorneys can no longer rely on internal discretion; they face an affirmative duty to disclose technology usage that presents material risks or affects the scope of representation.
Conflict 2: The Supervisory Suggestion
2023 Practical Guidance: “Managerial and supervisory lawyers should establish clear policies regarding the permissible uses of generative AI and make reasonable efforts to ensure that the firm adopts measures that give reasonable assurance that the firm’s lawyers and non lawyers’ conduct complies with their professional obligations when using generative AI...” (Guidance, p. 3).
Proposed Rule 5.1 Amendment: The proposed modification dictates that “managerial lawyers must make reasonable efforts to establish internal policies and procedures” (Proposed Amendments).
Analysis: The 2023 text presents policy creation as a best practice recommendation. The proposed rule elevates this to a formal ethical obligation, subjecting firm leadership to disciplinary action for failing to implement structured, written policies governing automated systems.
Conflict 3: The Third-Party Data Standard
2023 Practical Guidance: “A lawyer who intends to use confidential information in a generative AI product should ensure that the provider does not share inputted information with third parties...” (Guidance, p. 2).
Proposed Rule 1.6 Amendment: The proposal defines “reveal” as “exposing confidential information to technological systems... where such exposure creates a material risk” (Proposed Amendments).
Analysis: The 2023 Guidance focuses heavily on reviewing vendor terms of service. The proposed amendment establishes a stricter baseline where the mere act of exposure constitutes a breach if a material risk exists. Practitioners face a higher burden of vetting system architecture before inputting sensitive client data.
Conflict 4: The “Starting Point” Assumption
2023 Practical Guidance: “AI-generated outputs can be used as a starting point but must be carefully scrutinized.” (Guidance, pp. 2-3).
Proposed Rule 1.1 Amendment: The Supreme Court directive specifically highlights “agentic artificial intelligence tools, which can enable systems to autonomously perform tasks or workflows without human prompting” (Proposed Amendments, Background). Comment [2] states a lawyer “must independently review, verify, and exercise professional judgment regarding any output” (Proposed Amendments).
Analysis: Characterizing machine outputs strictly as a “starting point” fails to accurately capture the function of agentic systems that complete end-to-end workflows autonomously. COPRAC will likely revise this language to emphasize the mandatory independent verification of finalized, automated tasks before execution.
Conflict 5: The “Accuracy” Versus “Existence” Verification
2023 Practical Guidance: “A lawyer must review all generative AI outputs, including, but not limited to, analysis and citations to authority for accuracy before submission to the court...” (Guidance, p. 4).
Proposed Rule 3.3 Amendment: The new Comment [3] dictates an “obligation to verify the accuracy and existence of cited authorities, including ensuring no cited authority is fabricated...” (Proposed Amendments).
Analysis: The 2023 guidance requires a general review for accuracy, a standard that likely no longer suffices. The proposed amendment directly addresses the specific hazard of algorithmic hallucinations by adding an affirmative duty to verify that a cited case actually exists. Reviewing a document for typographical or formatting correctness falls short; practitioners bear a specific ethical obligation to confirm that an autonomous system did not fabricate the cited authority entirely.
Conclusion
The transition from general guidance to binding ethical rules marks a maturing regulatory framework for legal technology. Still, it is striking that California’s AI Guidance has sat untouched since 2023, given how much the technology has changed in the interim.
Developments in California—especially regarding technology implementation—carry significant weight for legal professionals nationwide. The state’s proposed amendments establish a strict baseline for the independent verification of agentic systems and continuous informed client consent.
Ethics committees in New York, Illinois, New Jersey, and Pennsylvania, and potentially federal agencies such as the United States Patent and Trademark Office, will likely follow California’s lead and adopt similarly rigorous standards for autonomous workflows. Legal practitioners across the country should evaluate these proposals to prepare for future compliance obligations.
IP attorneys, litigators, and in-house counsel face a permanent shift in daily task execution and must adapt internal workflows to satisfy these strict verification and disclosure standards. COPRAC’s upcoming revisions solidify the expectation that human professional judgment governs all automated processes.
The public comment period remains open until May 4, 2026. Practitioners have an opportunity to shape the final regulations by submitting feedback through the online Public Comment Form.
COPRAC will present further modifications to the existing Practical Guidance at the May 2026 Board of Trustees meeting.
Disclaimer: This is provided for informational purposes only and does not constitute legal or financial advice. To the extent there are any opinions in this article, they are the author’s alone and do not represent the beliefs of his firm or clients. The strategies expressed are purely speculation based on publicly available information. The information expressed is subject to change at any time and should be checked for completeness, accuracy and current applicability. For advice, consult a suitably licensed attorney and/or patent professional.