General-purpose AI Code of Practice: EU presents new AI code

The EU Commission has presented the "General-Purpose AI Code of Practice", a voluntary code of conduct for GPAI.
Aristotelis Zervos, Editorial Director at 2B Advice, combines legal and journalistic expertise in data protection, IT compliance and AI regulation.

The EU AI Regulation (AI Act) formally entered into force on August 1, 2024. Since February 2, 2025, the first bans on prohibited AI practices and the AI literacy obligations have been binding. Specific rules for general-purpose AI ("GPAI") have been in force since August 2, 2025. The EU Commission has presented the "General-Purpose AI Code of Practice" for this purpose. We provide an overview of this code of practice and explain the benefits it offers.

Background and development of the General-Purpose AI Code of Practice

Since August 2025, the AI Regulation also covers general-purpose AI (GPAI) models for the first time. This includes large language models that can be used for a variety of tasks (e.g. ChatGPT).

However, the legislative process revealed that the precise requirements for such GPAI models are complex and that the technology is developing rapidly. The EU therefore decided to develop a supplementary code of conduct for GPAI models. This voluntary code was developed from July 2024 in a multi-stage, multi-stakeholder process led by 13 independent experts. Over 1,000 participants (including AI providers, SMEs, academia, security experts, copyright representatives and NGOs) contributed their feedback in consultations and workshops.

On July 10, 2025, the EU Commission presented the final version of the "General-Purpose AI Code of Practice".

The Code is intended to help developers and providers of GPAI models fulfill the new legal obligations of the AI Regulation in a practical way, and to ensure that even the most powerful foundation models coming onto the market in Europe are safe and transparent.

Reading tip: AI Regulation - EU publishes first guidelines

Voluntariness versus de facto binding effect of the Code

In legal terms, the General-Purpose AI Code of Practice is a voluntary compliance guide. It is not a legally binding standard, but rather a code of conduct developed by experts and supported by the EU Commission.

The Code itself does not create any new obligations. It merely specifies the obligations for GPAI providers already laid down in the AI Regulation, without expanding or tightening them.

Participation is voluntary; no provider is formally obliged to join. However, the Code has a considerable de facto binding effect. It is expected to be officially recognized by the EU Commission by means of an implementing act, after examination by the new European AI Office (AI Office) and the AI Board of the Member States. This gives it quasi-official status as a recognized compliance path. Signing the Code is then regarded as recognized proof that a GPAI provider fulfils its obligations under the AI Regulation. In practice, joining the Code signals to the supervisory authorities that the provider takes the relevant requirements seriously and implements them systematically.

Accordingly, Code signatories enjoy clear advantages: the Commission emphasizes that voluntary compliance with the Code goes hand in hand with less administrative work and greater legal certainty. The AI Office will focus its monitoring primarily on checking compliance with the Code's commitments rather than auditing each provider individually and comprehensively.

Particularly in the sensitive initial phase of regulation, the Code can therefore act as a safe-harbor-like mechanism. Not least, the political support has led well-known AI companies such as OpenAI and the European start-up Mistral to immediately declare their intention to join the Code. This increases the pressure on the industry to recognize the Code as best practice.

However, a provider that does not sign must prove its AI Act compliance on its own, which is likely to involve greater legal uncertainty and more effort. In practice, the voluntary Code thus becomes the quasi-standard that AI officers and compliance officers in companies will use as a guide.

General-purpose AI Code of Practice: transparency, copyright, security

In terms of content, the GPAI Code of Practice is divided into three main chapters: Transparency, Copyright and Safety and Security. The first two chapters apply to all providers of GPAI models, while the third chapter is relevant only for the most powerful models with systemic risk. The Code thus takes up the new legal obligations from Article 53 et seq. AI Regulation and substantiates them with detailed measures and procedures.

Transparency and technical documentation

GPAI providers undertake to comprehensively document their models and their properties and to provide information about them. They must maintain up-to-date technical documentation that can be submitted to the AI Office and national supervisory authorities on request.

This documentation should in particular describe the capabilities and limits of the model and include a summary of the training data used. In addition, the provider's contact details must be made publicly available so that downstream providers (who integrate the model into their own AI systems) or authorities can request the necessary information. The Code even provides a standardized model documentation form (a kind of model card) to present the transparency information in a standardized and user-friendly way.
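As a minimal sketch of what such a machine-readable documentation record might look like, the snippet below defines an illustrative model card and a completeness check. The field names are my own illustration of the themes named above (capabilities, limits, training data summary, contact details), not the Code's official form.

```python
# Illustrative model documentation record; field names are assumptions,
# not the official form from the Code's Transparency chapter.
model_card = {
    "model_name": "example-gpai-7b",              # hypothetical model
    "provider_contact": "compliance@example.com",  # publicly available contact
    "capabilities": ["text generation", "summarisation"],
    "known_limitations": ["hallucination risk", "English-centric training"],
    "training_data_summary": "Publicly crawled web text up to 2024; see published summary.",
    "last_updated": "2025-08-02",
}

def is_complete(card: dict) -> bool:
    """Check that every required transparency field is present and non-empty."""
    required = {"model_name", "provider_contact", "capabilities",
                "known_limitations", "training_data_summary"}
    return all(card.get(field) for field in required)

print(is_complete(model_card))  # True
print(is_complete({}))          # False
```

A check like this could run before documentation is handed to a downstream provider or an authority, in the spirit of the quality-assurance duties described below.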

Quality assurance of the information provided is also important. The Code requires that documentation be carefully reviewed for accuracy, retained as evidence of compliance, and protected from unauthorized modification. These transparency requirements are intended to ensure that GPAI models are traceable and trustworthy for third parties and enable downstream users to develop their own AI applications in a compliant and secure manner.

Link tip: The General-Purpose AI Code of Practice - The Transparency chapter (PDF)

General-purpose AI Code of Practice and copyright requirements

A central new topic of AI compliance is the handling of protected content in the training and operation of AI models. According to the AI Regulation, GPAI providers must create and implement an internal copyright compliance policy (Article 53(1)(c) of the AI Regulation).

The Code puts this into concrete terms through comprehensive voluntary commitments to protect intellectual property. For example, the signatories pledge to use only lawfully accessible content from the Internet when training their models. In particular, circumventing technical protection measures or paywalls to access copyrighted material is prohibited. Websites known to offer copyright-infringing material en masse are to be explicitly excluded from the AI providers' web crawlers.

Furthermore, the Code participants commit to respecting the rights reservations of content providers, for example instructions in the robots.txt protocol or other machine-readable opt-out notices with which authors can prohibit the use of their works for text and data mining. If web crawlers are used for training, these protocol instructions must be read and followed.
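The robots.txt mechanism mentioned above can be honored programmatically. The following sketch uses Python's standard `urllib.robotparser` to check whether a page may be fetched; the crawler name, URLs and robots.txt content are made up for illustration.

```python
# Sketch: respecting robots.txt opt-outs before fetching a page for
# training data. Crawler name and URLs are hypothetical examples.
from urllib.robotparser import RobotFileParser

def may_crawl(page_url: str, robots_txt: str,
              user_agent: str = "ExampleTrainingBot") -> bool:
    """Return True only if the site's robots.txt allows this crawler."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, page_url)

# A publisher opting its archive out of crawling by this bot:
robots = """User-agent: ExampleTrainingBot
Disallow: /archive/
"""

print(may_crawl("https://example.com/archive/article-1", robots))  # False
print(may_crawl("https://example.com/blog/post-1", robots))        # True
```

In a real crawler, robots.txt would be fetched from each host and this check would run before every request; machine-readable opt-outs beyond robots.txt would need separate handling.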

In addition, the AI providers should take technical precautions to prevent their models from reproducing protected training data verbatim and thus infringing copyrighted works in their output.
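One simple form such an output-side precaution could take is a filter that flags model output containing a long verbatim span from the training corpus. The sketch below is a naive illustration under that assumption; production systems use far more sophisticated deduplication and filtering.

```python
# Naive sketch of an output filter: flag output that reproduces a long
# verbatim span from a (hypothetical) training corpus.
def reproduces_training_text(output: str, corpus: list[str],
                             min_chars: int = 50) -> bool:
    """True if `output` contains a verbatim substring of at least
    `min_chars` characters taken from any corpus document."""
    for doc in corpus:
        # Slide a window over the output and look for exact matches.
        for start in range(len(output) - min_chars + 1):
            if output[start:start + min_chars] in doc:
                return True
    return False

corpus = ["Once upon a time there was a carefully copyrighted novel "
          "about compliance officers."]
leaked = ("The model wrote: Once upon a time there was a carefully "
          "copyrighted novel about compliance officers.")
fresh = "A paraphrase that shares no long verbatim span with the source."

print(reproduces_training_text(leaked, corpus))  # True
print(reproduces_training_text(fresh, corpus))   # False
```

The quadratic scan is only workable for a toy corpus; real systems would use suffix arrays, n-gram indexes or Bloom filters over the training data instead.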

In their terms of use, they must also prohibit customers from using the models in a way that infringes copyright.

Finally, the Code requires the establishment of contact points for rights holders: each signatory company must designate a contact person and have a procedure in place so that authors can lodge a complaint if they suspect that their rights have been violated by the AI model. Artists or publishers, for example, can use such a complaints procedure to submit reports, which the provider must review and process.

Overall, the copyright chapter of the Code creates a practical framework for fulfilling the AI Regulation's obligations, protecting copyright in AI development, and achieving a fair balance of interests between rights holders and AI developers.

Link tip: The General-Purpose AI Code of Practice - The Copyright chapter (PDF)

Safety and risk requirements for systemically relevant models

The third chapter is aimed at providers of GPAI models that are categorized as posing "systemic risk". These are the most advanced AI models with the potential to have a particularly large impact on the market and society.

The AI Regulation defines criteria for when a GPAI model poses a systemic risk, for example when it has highly scalable, far-reaching capabilities. One indicator is an extremely high computational effort during development (a threshold of ~10^25 FLOPs for training is assumed). Such models can potentially pose significant risks to public safety, fundamental rights or democratic processes.
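To get a feel for the 10^25 FLOP threshold, one can use the common rule-of-thumb estimate of roughly 6 FLOPs per parameter per training token for transformer training. The model size and token count below are illustrative numbers, not figures for any real model.

```python
# Back-of-the-envelope check against the ~10^25 FLOP presumption, using
# the common ~6 * parameters * tokens approximation of training compute.
THRESHOLD_FLOPS = 1e25

def training_flops(parameters: float, tokens: float) -> float:
    """Approximate total training compute (forward + backward pass)."""
    return 6 * parameters * tokens

# Hypothetical 70-billion-parameter model trained on 15 trillion tokens:
flops = training_flops(70e9, 15e12)
print(f"{flops:.1e}")               # roughly 6.3e24
print(flops >= THRESHOLD_FLOPS)     # False: below the presumption
```

By this rough estimate, such a model would stay just under the threshold, while substantially larger training runs would cross it and trigger the systemic-risk obligations.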

In this case, the Code requires providers to implement extremely strict risk management. Specifically, they must set up a comprehensive safety and security framework that reflects the current state of science and technology. First, they must systematically identify systemic risks: providers should analyze what risks their model could entail in various areas of application (such as misuse for disinformation, cyber attacks, or mass violations of fundamental rights). For each identified risk, an in-depth analysis must be carried out to assess whether the risk can be kept at an acceptable level.

Based on this analysis, risk mitigation measures must be implemented: technical and organizational precautions to reduce any identified systemic risk as far as possible. This includes, in particular, state-of-the-art IT security measures to protect the model and its infrastructure from unauthorized access, data leaks or theft.

The creation of a genuine risk culture, i.e. sensitizing all relevant employees to AI risks and continuously improving safety processes, is also part of the voluntary commitments.

Finally, providers must create reports and documentation on all these activities: for example, the Code provides for a Safety and Security Model Report to the AI Office, in which the provider outlines its risk management for the model. Summaries of the safety measures and analyses should also be published in an appropriate form to create transparency for the public.

Link tip: The General-Purpose AI Code of Practice - The Safety and Security chapter (PDF)

General-purpose AI Code of Practice: importance for compliance strategies in companies

Companies developing AI should therefore anchor the GPAI Code of Practice as a central building block of their compliance strategy, be it by adapting internal guidelines, training employees on the Code's requirements, or setting up task forces for documentation, IP compliance and AI risk management.

Although participation remains voluntary, in view of the willingness of leading AI companies to participate and the political support already signaled, it is foreseeable that the Code will advance to become the de facto industry standard by which responsible companies align themselves.

For AI officers, data protection and compliance officers in companies, the General-Purpose AI Code of Practice offers a concrete roadmap for translating the abstract requirements of the AI Regulation into operational business processes. A company that voluntarily signs and implements the Code can efficiently demonstrate its compliance with the new AI rules.

The competent authorities are signaling that they will grant Code signatories a certain leap of faith: in the initial phase from August 2025, the AI Office intends to work closely with Code participants instead of immediately insisting on sanctions. This ensures that a provider that does not yet perfectly fulfill all voluntary commitments shortly after joining the Code is not immediately treated as a lawbreaker in the first year.

This goodwill arrangement until August 2026 provides companies with an important transition period to adapt their internal processes to the extensive new requirements. Compliance officers should use this time to gradually implement the measures provided for in the Code: for example, setting up the required documentation, establishing a copyright policy, and introducing AI risk management systems.

Last but not least, compliance with the Code can also help companies avoid fines: from the date of full enforcement (2026/27), infringements of the relevant AI Act obligations can be sanctioned. Those who follow the Code's measures significantly reduce the risk of such violations and, should it come to that, can demonstrate that they acted to the best of their knowledge and belief.

Are you about to implement the AI Regulation?

We provide you with comprehensive advice on the EU AI Regulation, the General-Purpose AI Code of Practice and practical implementation in your company. Whether documentation obligations, risk management or copyright compliance - we support you in the legally compliant and strategic integration of AI.

Contact us now without obligation.

Aristotelis Zervos is Editorial Director at 2B Advice, a lawyer and journalist with profound expertise in data protection, GDPR, IT compliance and AI governance. He regularly publishes in-depth articles on AI regulation, GDPR compliance and risk management.
