The AI Act came into force in the European Union on August 1, 2024. Its provisions will now take effect in stages over a period of 36 months. The AI Act timetable at a glance.
AI regulation applies in stages over 36 months
The European Union (EU) has made considerable efforts in recent years to regulate the development and use of artificial intelligence (AI). A central element of these efforts is the so-called "AI Act". This act is intended to ensure that AI systems in the EU are safe and transparent and that the fundamental rights of citizens are protected. The AI Regulation is the world's first comprehensive set of rules for AI.
After the regulation was published in the Official Journal of the EU on July 12, 2024, it came into force 20 days later - on August 1, 2024. The applicability of the regulations is now staggered over a period of 36 months.
According to Art. 113 of the AI Regulation, it applies in general from August 2, 2026.
Link tip: Artificial Intelligence Act
6 months after entry into force: End for prohibited AI systems
Chapters I and II of the AI Regulation apply as early as February 2, 2025. In particular, the bans on certain AI systems that are classified as particularly critical apply from this date. These are AI systems that pose a clear risk to people's fundamental rights. According to the EU Commission, this applies, for example, to systems that enable authorities or companies to evaluate social behavior (social scoring).
Anyone who develops, operates or sells AI applications that fall under these prohibitions must discontinue them or withdraw them from the market by February 2, 2025.
12 months after entry into force: GPAI regulation and sanctions
From August 2, 2025, the following regulations will apply in accordance with Article 113 of the AI Regulation:
- Notified bodies (Chapter III, Section 4),
- GPAI models (Chapter V),
- Governance (Chapter VII),
- Confidentiality (Article 78),
- Sanctions (Articles 99 and 100)
This cut-off date is much more relevant for companies, as the rules for general-purpose AI (GPAI) models will apply from this date. These include large language models (LLMs).
The AI Act obliges the providers of such models to disclose certain information to downstream system providers. According to the EU Commission, such transparency enables a better understanding of these models. Model providers must also have strategies in place to ensure that they comply with copyright law when training their models.
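As an illustration only, such downstream documentation could be organized as a structured record. The field names below are assumptions made for this sketch, not the Regulation's wording; the binding list of items is set out in Annex XII of the AI Act:

```python
from dataclasses import dataclass, field

# Illustrative record of information a GPAI model provider might pass to
# downstream system providers. Field names are hypothetical; the legally
# required items are those listed in Annex XII of the AI Act.

@dataclass
class GPAIModelDocumentation:
    model_name: str
    provider: str
    intended_tasks: list[str]
    acceptable_use_policy: str
    architecture_summary: str           # e.g. "decoder-only transformer"
    input_output_modalities: list[str]  # e.g. ["text"] or ["text", "image"]
    training_data_summary: str          # high-level description of data sources
    known_limitations: list[str] = field(default_factory=list)

# Hypothetical example entry
doc = GPAIModelDocumentation(
    model_name="example-llm-7b",
    provider="Example AI GmbH",
    intended_tasks=["text generation", "summarization"],
    acceptable_use_policy="https://example.com/aup",
    architecture_summary="decoder-only transformer, 7B parameters",
    input_output_modalities=["text"],
    training_data_summary="publicly available web text and licensed corpora",
)
print(doc.model_name, "->", doc.intended_tasks)
```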
The EU Commission also assumes that general-purpose AI models trained with a total computing power of more than 10^25 floating point operations (FLOPs) pose systemic risks. Providers of models with systemic risks are required to assess and mitigate those risks, report serious incidents, conduct state-of-the-art testing and model evaluations, and ensure the cybersecurity of their models.
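For orientation, the threshold can be related to training scale with the widely used rule of thumb that training a dense transformer costs roughly 6 × N × D FLOPs (N parameters, D training tokens). This heuristic is an assumption for the sketch below and not part of the Regulation; the code simply compares an estimate against the 10^25 threshold:

```python
# Rough cumulative training-compute estimate using the common
# "6 * N * D" heuristic (N = parameters, D = training tokens).
# The heuristic is an assumption for this sketch, not part of the AI Act.

THRESHOLD_FLOPS = 1e25  # AI Act presumption threshold for systemic risk

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6.0 * n_params * n_tokens

# Hypothetical example: a 70B-parameter model trained on 15 trillion tokens
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")
print("presumed systemic risk" if flops >= THRESHOLD_FLOPS
      else "below the systemic-risk threshold")
```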
Reading tip: AI Act passed! New rules for artificial intelligence in the EU
24 months after entry into force: Provisions of the AI Act apply - with one exception
According to Article 113, the AI Regulation applies from August 2, 2026, with the only exception of high-risk AI systems in accordance with Article 6(1).
From August 2026, the Regulation will therefore also cover high-risk AI systems in accordance with Annex III.
Annex III covers eight areas in which the use of AI can be particularly sensitive, along with specific use cases for each area. An AI system is classified as high-risk if it is intended to be used for one of these use cases; a simplified screening sketch follows the list below.
According to the EU Commission, examples include:
- AI systems that are used as safety components in certain critical infrastructures, e.g. in the areas of road traffic and water, gas, heat and electricity supply;
- AI systems used in education and training, e.g. to assess learning outcomes, manage the learning process and monitor cheating;
- AI systems used in employment and HR management and access to self-employment, e.g. for targeted job advertisements, for analyzing and filtering job applications and for evaluating applicants;
- AI systems used for access to essential private and public services and benefits (e.g. healthcare), for assessing the creditworthiness of natural persons and for risk assessment and pricing in connection with life and health insurance;
- AI systems used in law enforcement, migration and border control, where not yet prohibited, as well as in the administration of justice and democratic processes;
- AI systems used for biometric identification, biometric categorization and emotion recognition, unless prohibited.
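Purely as an illustration, an internal screening step could map a system's intended use case to the Annex III areas above. The area names and use-case strings below are simplified paraphrases for this sketch, not the legal text of Annex III, and no such mapping replaces a legal assessment:

```python
# Minimal sketch of an internal screening step that maps an intended use
# case to an Annex III area. Area names and use-case strings are simplified
# paraphrases for illustration only.

ANNEX_III_EXAMPLES: dict[str, list[str]] = {
    "critical infrastructure": ["road traffic safety component", "electricity supply"],
    "education and training": ["assess learning outcomes", "monitor cheating"],
    "employment and HR": ["filter job applications", "evaluate applicants"],
    "essential services": ["credit scoring", "health insurance risk pricing"],
    "law enforcement / migration / justice": ["border control", "administration of justice"],
    "biometrics": ["biometric identification", "emotion recognition"],
}

def screen_use_case(use_case: str) -> str | None:
    """Return the Annex III area an intended use case falls under, if any."""
    for area, examples in ANNEX_III_EXAMPLES.items():
        if use_case in examples:
            return area
    return None  # no match -> still requires case-by-case legal review

print(screen_use_case("credit scoring"))  # -> "essential services"
```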
Before providers place a high-risk AI system on the market in the EU or otherwise put it into operation, they must subject it to a conformity assessment. This allows them to demonstrate that their system meets the mandatory requirements for trustworthy AI (e.g. data quality, documentation and traceability, transparency, human oversight, accuracy, cybersecurity and robustness). This assessment must be repeated if the system or its purpose is significantly changed.
36 months after entry into force: High-risk AI systems in accordance with Annex I
From August 2, 2027, Article 6(1) and the corresponding obligations will apply. This concerns high-risk AI systems in accordance with Annex I.
Accordingly, an AI system can also be classified as high-risk if it is embedded as a safety component in a product covered by the existing product regulations listed in Annex I, or if it constitutes such a product itself.
This could be AI-based medical software, for example.
Conclusion: Companies and other affected stakeholders should familiarize themselves with the requirements of the AI Act at an early stage and make the necessary adjustments in order to be compliant in good time. The staggered introduction gives some time to prepare, especially for more complex high-risk AI systems. Companies should have a precise overview of what data they use in their AI systems. This is particularly important in order to meet the accountability and documentation requirements of the AI Act. Failure to do so could result in severe penalties.