Aristotelis Zervos
Aristotelis Zervos, Editorial Director at 2B Advice, combines legal and journalistic expertise in data protection, IT compliance, and AI regulation.
When the second implementation phase of the AI Regulation comes into force on August 2, 2025, the general regulations for providers of general-purpose AI (GPAI) will be activated. We provide an overview of which companies are affected and what regulatory requirements apply to them.
AI Regulation: Who is affected
The obligations under Chapter V of the AI Regulation apply exclusively to providers of so-called general-purpose AI (GPAI). According to the legal definition in Article 3 No. 63 AI Act, GPAI models are AI models trained on a broad data base without being optimized for a specific application, and which can therefore be used in a wide range of contexts. They often serve as the basis for further specialized applications built by third parties (so-called downstream providers or deployers).
The following players in particular are affected:
- Developers of large language models (LLMs): companies such as OpenAI, Google, Mistral, or Anthropic, which develop foundation models such as GPT, Gemini, or Claude.
- Providers of powerful open-source base models: companies or consortia that develop and distribute models such as LLaMA or Falcon may also be subject to GPAI obligations, especially if the model is publicly available and may pose systemic risks.
- SMEs and start-ups: if they train GPAI models themselves, or further develop and distribute them on the basis of existing models, for example in a B2B context.
The obligations fall on the "provider" within the meaning of the AI Regulation, i.e. the natural or legal person who first places a GPAI model on the market or puts it into service (see Art. 3 No. 3 AI Act). Pure users or deployers of GPAI systems are not directly affected by Chapter V at this stage, but may be subject to other provisions (e.g. Chapter III for high-risk applications).
AI Regulation brings new obligations for GPAI providers
The Regulation subjects providers of such GPAI systems to specific obligations with immediate effect, which arise in particular from Chapter V (Articles 51 to 56 of the AI Regulation). The most important areas of regulation include:
Technical Documentation (Art. 53 AI Act):
GPAI providers must maintain comprehensive technical documentation that, among other things, provides information on:
- the model architecture,
- the training procedures,
- the composition and origin of the training data,
- the evaluation metrics and their results,
- and model performance in different contexts.
This documentation must be designed to enable the competent supervisory authorities to assess conformity.
Transparency and reporting obligations (Art. 50 & 56 AI Act):
GPAI providers are obliged to publish regular reports on the properties and functionality of their models. These include in particular:
- Descriptions of typical application scenarios,
- the functional limits and limitations,
- known systemic risks or potential malfunctions,
- as well as recommended limits of use and instructions for third parties (so-called "deployers").
Obligations in relation to copyrighted material (Art. 53 para. 1 lit. c and d AI Act):
Providers must disclose whether copyrighted content was used to train the GPAI models. They must also explain what measures they have taken to give rights holders the opportunity to assert their rights (e.g. through opt-out options or transparency mechanisms).
Systemic risks and high-performance models (Art. 51 AI Act):
Models that are presumed to pose systemic risks due to their scale and the compute used for training (e.g. a training compute of at least 10^25 FLOPs) are subject to stricter requirements:
- Carrying out risk analyses to determine potential systemic effects,
- Implementation of suitable risk management measures,
- Establishment of a reporting system for serious incidents and vulnerabilities,
- Cooperation with the European AI Office in risk monitoring.
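Whether a model crosses the 10^25 FLOPs threshold can be roughly estimated with the widely used heuristic of about 6 FLOPs per parameter per training token. Note that only the threshold itself comes from the AI Act; the 6 × N × D estimate is an industry rule of thumb, and the model figures below are purely illustrative:

```python
# Rough plausibility check against the AI Act's systemic-risk compute threshold.
# The 1e25 FLOPs figure is from the Regulation; the 6*N*D estimate is a common
# industry heuristic (6 FLOPs per parameter per training token), not legal text.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOPs via the 6*N*D heuristic."""
    return 6 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimated training compute meets or exceeds the threshold."""
    return training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Illustrative example: a 70B-parameter model trained on 15T tokens
flops = training_flops(70e9, 15e12)  # about 6.3e24 FLOPs, below the threshold
print(f"{flops:.2e}", presumed_systemic_risk(70e9, 15e12))
```

In practice, providers must assess the actual cumulative compute used for training; the heuristic only indicates whether a model is in the vicinity of the threshold.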
GPAI providers will face significant regulatory requirements from August 2025, which will require an in-depth documentation and reporting infrastructure as well as clear strategies for dealing with copyright and risk management.
Governance and monitoring
The implementation and enforcement of the AI Regulation requires a multi-layered interplay between national and European institutions, with the active participation of providers and users of AI systems. The central provisions on governance and supervision are set out in Chapter VII of the AI Act (Art. 64-70) and will likewise enter into force in stages.
National supervisory authorities (Art. 70, 74 AI Act)
Each Member State is obliged to designate one or more competent national authorities responsible for monitoring the application of the Regulation. These authorities are given the following tasks and powers, among others:
- Market surveillance and conformity checks for GPAI and high-risk AI systems,
- Implementation of audits and access to technical documentation,
- Imposing orders, prohibitions or fines in the event of violations,
- Cooperation with authorities of other Member States and the European Commission.
The supervisory authorities can act both reactively (e.g. in the event of complaints) and proactively (e.g. random checks).
Establishment of AI regulatory sandboxes (Art. 57 AI Regulation)
By August 2026 at the latest, the Member States must have set up at least one so-called regulatory sandbox. These serve the controlled testing of innovative AI applications under the supervision of the competent authority.
The sandboxes aim to reconcile innovation and compliance. They offer:
- companies a secure framework for developing and testing innovative AI solutions,
- authorities concurrent verification of legal conformity,
- regulators early indicators of where the interpretation of standards or supervisory practice may need adjustment.
SMEs are given preferential access to these facilities.
Conformity assessment and notified bodies (Art. 43 AI Regulation)
For high-risk AI systems, a conformity assessment must be carried out before the product is placed on the market or put into service. This can be done:
- through self-assessment on the basis of technical standards (e.g. for internal development under clear specifications), or
- by involving a notified body, for example in the case of complex or safety-critical AI systems.
The declaration of conformity must be documented and must be available for submission to the market surveillance authority at any time on request. GPAI models are generally not subject to the formal conformity assessment under Chapter III, unless they are integrated into high-risk applications.
Sanctions and enforcement (Art. 99 AI Regulation)
The Regulation provides for substantial fines. Depending on the severity of the violation and the provisions concerned, they range up to:
- 35 million euros or 7% of global annual turnover, whichever is higher, for the most serious violations (e.g. of the prohibited practices in Art. 5),
- 15 million euros or 3% of global annual turnover for violations of other provisions (for GPAI providers, Art. 101 provides for Commission fines up to the same level),
- 7.5 million euros or 1% of turnover for supplying incorrect information or failing to cooperate.
The sanctions are to be assessed on a case-by-case basis, taking into account the severity, risk of repetition, financial circumstances and cooperation.
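The "whichever is higher" mechanics of these caps can be illustrated with a short calculation. The tier figures below come from the article; the company turnover is an invented example:

```python
# Maximum fine ceilings under the AI Act's tiered caps: the applicable ceiling
# is the higher of a fixed amount and a percentage of global annual turnover.
# Tier figures as cited in the text; the turnover values are made up.
TIERS = {
    "prohibited_practices":  (35_000_000, 0.07),  # e.g. violations of Art. 5
    "other_obligations":     (15_000_000, 0.03),
    "incorrect_information": (7_500_000,  0.01),
}

def max_fine(tier: str, annual_turnover: float) -> float:
    """Return the maximum possible fine for a given tier and global turnover."""
    fixed_cap, pct_cap = TIERS[tier]
    return max(fixed_cap, pct_cap * annual_turnover)

# A company with 2 billion euros global annual turnover:
print(max_fine("prohibited_practices", 2_000_000_000))   # ~140 million (7% > 35M)
print(max_fine("incorrect_information", 2_000_000_000))  # ~20 million (1% > 7.5M)
```

Note that the actual fine in a given case is set below these ceilings based on the assessment criteria described above, and special rules (e.g. for SMEs) may lower the applicable cap.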
Significance of the voluntary Code of Practice (CoP)
On July 10, 2025, the EU Commission published the "General-Purpose AI Code of Practice", which provides companies with concrete templates and best practices on transparency, security, and copyright.
Large providers such as Google and OpenAI have announced that they will sign the Code to demonstrate their compliance standards. Others, first and foremost Meta and xAI, have rejected it in part or in full.
Although the code is not legally binding, signing it can be seen as an indicator of regulatory reliability, especially during audits or inspections by the authorities.
Reading tip: General-purpose AI Code of Practice - EU presents new AI code
Recommendations for companies
Governance & risk management:
- Implement a governance framework with risk and compliance structures.
- Designate clear responsibilities for analysis, documentation, and incident management.
Transparency & documentation:
- Develop comprehensible technical documentation and transparency reports.
- Provide appropriate disclosures for third parties, including explanations of model purpose, data sets, and limitations.
Data and copyright compliance:
- Provide verifiable data origins including rights clearance.
- Implement procedures for dealing with copyright infringement complaints.
Training & Awareness:
- Continue the training program on AI literacy under Art. 4, in place since February 2, 2025, for employees and external stakeholders.
Conclusion
On August 2, 2025, an operational phase of the AI Regulation begins for many companies, especially providers and users of general-purpose AI. Now is the time to implement transparency obligations, technical documentation, risk reports, and training programs in line with the law in order to avoid fines and reputational damage. Voluntarily signing the EU Code of Practice can serve as a strategic safeguard and strengthen the compliance culture. From here, attention turns to the next major deadline in August 2026.
Link tip: The General-Purpose AI Code of Practice
Are you about to implement the AI Regulation?
We provide you with comprehensive advice on the EU AI Regulation, the General-Purpose AI Code of Practice and practical implementation in your company. Whether documentation obligations, risk management or copyright compliance - we support you in the legally compliant and strategic integration of AI.
Aristotelis Zervos is Editorial Director at 2B Advice, a lawyer and journalist with profound expertise in data protection, GDPR, IT compliance, and AI governance. He regularly publishes in-depth articles on AI regulation, GDPR compliance, and risk management. You can find out more about him on his author profile page.