EU AI Act: GPAI Code of Practice Arrives, IP Questions Linger
Europe's Framework May Concern AI Innovators
The European Commission has released the much-anticipated General-Purpose AI (GPAI) Code of Practice, a set of documents designed to guide the industry in complying with the landmark AI Act.
Published on July 10, 2025, the Code offers a voluntary framework for providers of general-purpose AI models, outlining commitments on transparency, copyright, and safety and security.
While the Code marks a significant early step in regulating artificial intelligence, IP owners and legal practitioners are examining its provisions, noting potential gaps in protection for intellectual property.
The Code of Practice aims to serve as a "guiding document for demonstrating compliance" with key obligations in the AI Act. However, officials are quick to point out that "adherence to the Code does not constitute conclusive evidence of compliance."
For IP stakeholders, this distinction is crucial, as the Code’s effectiveness will depend on robust implementation and the yet-to-be-defined "state-of-the-art" technologies it frequently references.
Transparency: A Limited View into the Black Box
The Transparency Chapter aims to ensure downstream AI system providers and regulators have a sufficient understanding of the models they use. It introduces a "Model Documentation Form" that requires providers to disclose a range of technical details.
However, a critical distinction is made between information available to downstream providers versus that reserved for the AI Office and national competent authorities. While downstream providers will receive information on intended uses, model architecture, and license types, more sensitive information about the training data remains less accessible.
According to the AI Act, providers must "draw up and make publicly available a sufficiently detailed summary about the content used for the training" of their models. Yet, the most granular details in the Model Documentation Form—such as data provenance, specific curation methodologies, and measures to detect biases—are earmarked for regulators, not the public or even direct business partners.
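To make this tiered-access structure concrete, the sketch below models it as a simple data structure. This is a hypothetical illustration only, not the actual Model Documentation Form: the field names and the public_view() helper are invented for clarity, based on the categories of information the Code describes.

```python
from dataclasses import dataclass, field

@dataclass
class ModelDocumentation:
    """Hypothetical sketch of the Code's tiered disclosure.

    Field names are illustrative and do not reproduce the actual
    Model Documentation Form.
    """
    # Information shared with downstream providers
    intended_uses: list[str]
    model_architecture: str
    license_type: str
    # More granular details reserved for the AI Office and
    # national competent authorities
    data_provenance: str = field(default="", metadata={"audience": "regulators"})
    curation_methodology: str = field(default="", metadata={"audience": "regulators"})
    bias_detection_measures: str = field(default="", metadata={"audience": "regulators"})

    def public_view(self) -> dict:
        """Return only the fields a downstream provider would receive."""
        return {
            "intended_uses": self.intended_uses,
            "model_architecture": self.model_architecture,
            "license_type": self.license_type,
        }
```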
This lack of public, detailed transparency into training datasets remains a major hurdle for IP owners seeking to determine if their works were used without permission.
The Copyright Chapter: A Framework of Policies and Protocols
A central focus for IP owners is likely the Copyright Chapter, which elaborates on the AI Act's requirement for model providers to "put in place a policy to comply with Union law on copyright and related rights."
Key measures include:
Lawful Data Acquisition: Providers commit not to circumvent technological measures like paywalls to access training data. They also pledge to exclude websites that EU courts or authorities have recognized as "persistently and repeatedly infringing copyright" from their web-crawling activities. A publicly available EU list of such sites is planned to support this effort. IP owners may question the agility and comprehensiveness of such a list, which appears more reactive than preventative.
Respecting Rights Reservations: The Code mandates that providers employ web crawlers capable of reading and respecting the robots.txt protocol. More significantly, it acknowledges the need for "other appropriate machine-readable protocols to express rights reservations." The development of these new standards is anticipated through "bona fide discussions to be facilitated at EU level with the involvement of rightsholders, AI providers and other relevant stakeholders." For many creators, this signals that existing mechanisms are insufficient and places a heavy burden on future collaboration to yield a workable solution. A brief sketch of how a crawler might apply these acquisition and rights-reservation commitments appears after this list.
Mitigating Infringing Outputs: Providers commit to implementing "appropriate and proportionate technical safeguards" to prevent models from generating infringing content. The ambiguity of what is "appropriate and proportionate" leaves significant discretion to providers and may not assuage fears that infringing outputs will remain a persistent issue.
Complaint Mechanisms: The Code requires providers to establish a point of contact and a formal mechanism for rightsholders to lodge complaints. While a necessary backstop, this places the onus of monitoring a vast digital space and initiating action squarely on the shoulders of IP holders.
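For illustration, here is a minimal sketch of how a crawler might check two of these commitments before fetching a page, using Python's standard urllib.robotparser. The EU_INFRINGING_DOMAINS set and the crawler name are hypothetical placeholders: the planned EU list does not yet exist, and real-world rights reservations are expected to extend beyond robots.txt.

```python
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

# Hypothetical stand-in for the planned public EU list of sites
# "persistently and repeatedly infringing copyright"; no such
# list has been published yet.
EU_INFRINGING_DOMAINS = {"example-piracy-site.invalid"}

USER_AGENT = "ExampleTrainingDataBot"  # illustrative crawler name


def may_crawl(url: str) -> bool:
    """Apply two of the Code's commitments before fetching a page:
    skip domains flagged as persistently infringing, and honor the
    site's robots.txt rights reservation."""
    domain = urlparse(url).netloc
    if domain in EU_INFRINGING_DOMAINS:
        return False  # excluded under the lawful-acquisition commitment

    robots = RobotFileParser()
    robots.set_url(f"https://{domain}/robots.txt")
    try:
        robots.read()  # network fetch; treat failures conservatively
    except OSError:
        return False
    return robots.can_fetch(USER_AGENT, url)


if __name__ == "__main__":
    print(may_crawl("https://example.com/articles/some-page"))
```

Even this simple check shows why the Code anticipates further standards: robots.txt is a blunt, site-wide instrument, not a per-work expression of a rights reservation.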
Security for Systemic Risk Models: Protecting IP Assets
For the most powerful GPAI models deemed to pose "systemic risk," the Safety and Security Chapter imposes stringent obligations. While the primary goal is preventing large-scale societal harm, these measures have direct implications for protecting the model itself as a valuable IP asset.
Commitment 6 focuses on implementing "an adequate level of cybersecurity protection for their models and their physical infrastructure" to prevent "unauthorised releases, unauthorised access, and/or model theft."
This is a critical concept for any entity treating its model weights, system prompts, and architecture as trade secrets, to say nothing of protecting users and their data. The appendices detail specific security objectives, including:
Protection of Model Parameters: Establishing a secure registry of where model parameters are stored, ensuring end-to-end encryption, and utilizing confidential computing where appropriate (a minimal encryption sketch follows this list).
Hardening against Insider Threats: The Code acknowledges risks from internal actors and calls for measures like background checks for personnel with access to model parameters and sandboxing models to prevent self-exfiltration.
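As a rough illustration of the encryption-at-rest objective, the sketch below encrypts a serialized model checkpoint using the third-party cryptography package (pip install cryptography). It is deliberately minimal: a production system would hold keys in an HSM or managed key service rather than return them to the caller, and would pair encryption with the registry and access controls the Code describes.

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet


def encrypt_weights(checkpoint_path: str, encrypted_path: str) -> bytes:
    """Encrypt a serialized model checkpoint at rest and return the key.

    A minimal sketch of the encryption objective only; real deployments
    would keep the key in an HSM or managed key service, not hand it
    back to the caller.
    """
    key = Fernet.generate_key()
    fernet = Fernet(key)
    with open(checkpoint_path, "rb") as f:
        plaintext = f.read()
    with open(encrypted_path, "wb") as f:
        f.write(fernet.encrypt(plaintext))
    return key
```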
The framework's reliance on "independent external security reviews as appropriate" suggests that the rigor of these protections will vary, a point of potential concern for businesses built around proprietary AI.
A High Bar for Compliance and Cost in Europe
For developers of the most powerful models—those with "systemic risk"—the Code establishes a demanding and costly compliance regime. These companies must design, implement, and continuously update a comprehensive "Safety and Security Framework," conduct extensive pre-market risk assessments for any new model or significant update, and allocate substantial resources to governance and reporting.
While the Code mentions "simplified ways of compliance for SMEs, including start-ups," the fundamental obligations represent a high barrier to entry that could disproportionately burden smaller innovators compared to global tech giants.
The Data Dilemma and Development Delays
A critical factor in AI development is access to vast amounts of training data. The Code’s Copyright Chapter commits providers to honoring machine-readable rights reservations, such as the robots.txt protocol. This respect for copyright, while essential, may limit the pool of available training data for European firms compared to competitors in regions with more permissive fair use doctrines.
Furthermore, the rigorous, multi-stage risk assessment process can slow down development cycles. The requirement to complete a full risk assessment before a modified model can be deployed could hinder the rapid, iterative style that characterizes the AI industry.
This structured timeline may put European companies at a disadvantage, allowing competitors outside of the EU to deploy new features and capture global market share more quickly.
The Competitive Advantage of Trust
The counterargument is that the Code’s rigorous standards will cultivate a market for "human-centric and trustworthy artificial intelligence," turning a regulatory burden into a competitive edge.
The rules apply to any company placing a model on the EU market, regardless of its origin, creating a level playing field within one of the world's largest economies. For enterprise clients where safety, legal certainty, and reliability are paramount, AI products forged in the EU’s regulatory crucible could become the gold standard.
Conclusion
The GPAI Code of Practice is a first major effort to translate the principles of the EU AI Act into actionable industry standards, and it charts an ambitious but challenging course for AI companies entering Europe.
However, the potential for increased costs, restricted data access, and slower development cycles presents a tangible risk of holding European AI companies back in a fiercely competitive global race. Under these rules, the European public may see delayed access to some AI tools, or be bypassed entirely.
The framework places a heavy reliance on future technological solutions and collaborative standard-setting, while the immediate burden of policing for infringement remains largely with the content creators themselves. Its effectiveness likely hinges on interpretations of ambiguous terms like "appropriate" and "state-of-the-art."
The final verdict will depend on whether the world's markets value this commitment to safety enough to make the "Made in Europe" label a mark of distinction, not a disadvantage in the AI race.
Disclaimer: This is provided for informational purposes only and does not constitute legal or financial advice. To the extent there are any opinions in this article, they are the author’s alone and do not represent the beliefs of his firm or clients. The strategies expressed are purely speculation based on publicly available information. The information expressed is subject to change at any time and should be checked for completeness, accuracy and current applicability. For advice, consult a suitably licensed attorney and/or patent professional.