AI Act passed! New rules for artificial intelligence in the EU


With the AI Act, the EU member states have passed the world's first law on the regulation of AI. The most important regulations at a glance.

AI Act: the first AI law worldwide

On May 21, 2024, the Council of the 27 EU member states adopted the AI Act as a uniform framework for the use of artificial intelligence in the European Union. This makes the AI Act the world's first comprehensive set of regulations for artificial intelligence.

"With the AI Act, the EU now has a strong foundation for the regulation of artificial intelligence that creates trust and acceptance in the technology and enables innovations 'made in Europe'," explains the German government in a statement.

The AI Act stipulates that AI applications must not be misused and that the protection of fundamental rights must be guaranteed. At the same time, science and business need the freedom to innovate. The AI Act therefore follows a so-called "risk-based approach": the higher the risk an application is deemed to pose, the stricter the requirements.

AI Act recognizes four risk groups

The AI Act divides AI applications into four risk groups:

  • unacceptable risk
  • high risk
  • limited risk
  • low or no risk

Applications at the lowest risk level can be operated without restrictions. Examples of this are spam filters in email inboxes or AI applications in video games or consumer electronics.

Prohibited AI applications

For example, AI systems that can be used to specifically influence and manipulate people's behavior pose an unacceptable risk. They are prohibited, as is AI-supported "social scoring", i.e. the awarding of points for desirable behavior.

The new regulations also prohibit certain AI applications that jeopardize citizens' rights. These include biometric categorization based on sensitive characteristics and the untargeted scraping of facial images from the internet or from surveillance camera footage to build facial recognition databases. Emotion recognition systems in the workplace and in schools will also be banned in future.

Obligations for AI high-risk systems

Specific obligations also apply to high-risk AI systems, as they can pose a significant risk to health, safety, fundamental rights, the environment, democracy and the rule of law.

High-risk systems include AI systems used in the areas of critical infrastructure, education and training or employment. AI systems that are used for basic private and public services, e.g. in healthcare or banking, in certain areas of law enforcement and in connection with migration and border protection, justice and democratic processes (e.g. to influence elections) are also considered high-risk.

There is a registration requirement for such high-risk AI systems (Art. 49 AI Act).

In addition, EU citizens will in future have the right to complain about decisions made on the basis of high-risk AI systems that affect their rights.

Transparency obligation for AI applications

There is also a transparency obligation. Artificially generated or edited content (audio, image, video) must be clearly labeled as such.

In addition, general-purpose AI systems and the models on which they are based must meet certain transparency requirements. These include compliance with EU copyright law and the publication of detailed summaries of the content used for training.

Additional requirements will apply in future to more powerful models that may pose systemic risks: model evaluations must be carried out, and systemic risks must be assessed and mitigated. There is also an obligation to report serious incidents.

Sanctions against companies for violations

Violations can be sanctioned (Art. 99 AI Act). Depending on the severity of the violation, fines of up to EUR 35 million or up to 7 percent of the global annual turnover of the previous financial year can be imposed, whichever is higher.
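The "whichever is higher" rule above means the upper fine limit scales with company size. A minimal sketch of that calculation (the function name and the example turnover figure are illustrative, not from the AI Act itself):

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper fine limit under Art. 99 AI Act (sketch):
    up to EUR 35 million or 7% of the previous financial
    year's global annual turnover, whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_turnover_eur)

# Hypothetical company with EUR 1 billion global annual turnover:
# 7% of turnover (EUR 70 million) exceeds the EUR 35 million floor.
print(max_fine_eur(1_000_000_000))  # 70000000.0
```

For smaller companies whose 7-percent share falls below EUR 35 million, the fixed EUR 35 million cap applies instead.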

As a regulation, the AI Act applies directly in all member states; they must now designate national supervisory authorities and lay down rules on penalties.
