With the entry into force of the European Artificial Intelligence Regulation (AI Regulation) on August 1, 2024, companies are gradually being confronted with new legal requirements. The first binding provisions will apply as early as February 2025, particularly with regard to prohibited AI applications. Companies are therefore called upon to address the effects of this regulation at an early stage and take the necessary compliance measures.
Prohibited AI systems in accordance with the AI Regulation
From February 2025, the use of certain AI systems will be expressly prohibited in the EU. These bans are based on the requirements of the AI Regulation, which aims to minimize risks to fundamental rights, security and democracy. In particular, systems that can manipulate, discriminate against or monitor people on a mass scale are to be excluded. The prohibitions target applications classified as posing an unacceptable risk. These include, among others:
- Manipulative AI technologies: Systems that unconsciously influence people's behavior or decision-making, especially when this can lead to harm.
- Social scoring systems: AI-supported assessments of natural persons based on their behavior, social status or personal characteristics, which can have a potentially discriminatory effect.
- Mass surveillance through biometric recognition: The use of real-time facial recognition in public spaces is prohibited, with a few exceptions, such as for counter-terrorism or to combat serious crime.
- Emotion recognition at the workplace or in educational institutions: AI systems that analyze the emotional state of employees or students may no longer be used.
- Predictive policing on a purely statistical basis: Systems that rely solely on probability calculations to predict future behavior are prohibited.
Companies that already use such systems or are involved in the development of such technologies must ensure by February 2025 that they comply with the new regulations and, if necessary, discontinue prohibited applications.
AI law: need for action for companies
In addition to complying with the bans that will apply from February 2025, companies should respond proactively to the upcoming requirements of the AI Regulation. One example of a successful compliance strategy is the implementation of an internal AI governance framework that defines clear responsibilities, carries out regular risk analyses and ensures that all AI systems in use comply with the new legal requirements. This includes in particular:
- An inventory of all AI applications used in the company in order to identify potential breaches at an early stage.
- Checking whether AI systems used or developed are to be classified as high-risk AI. Extensive documentation and compliance requirements apply to these systems in later implementation phases of the AI Act.
- Adaptation of internal guidelines, particularly in the areas of data protection and ethics, in order to be prepared for future transparency and security requirements.
- Establishment of internal compliance structures to monitor and comply with regulatory requirements.
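The inventory and classification steps above can be sketched as a simple registry. The risk tiers below only loosely mirror the Regulation's structure, and the example entries are hypothetical illustrations, not legal classifications; the actual assessment of any real system must be made case by case.

```python
from dataclasses import dataclass
from enum import Enum

# Risk tiers loosely following the AI Regulation's structure;
# this is an illustrative simplification, not legal advice.
class RiskTier(Enum):
    PROHIBITED = "prohibited"  # e.g. social scoring, banned from February 2025
    HIGH_RISK = "high_risk"    # documentation and compliance duties apply later
    LIMITED = "limited"        # transparency obligations
    MINIMAL = "minimal"        # no specific obligations

@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier

def systems_requiring_action(inventory: list[AISystem]) -> list[AISystem]:
    """Flag systems that must be discontinued or heavily documented."""
    return [s for s in inventory if s.tier in (RiskTier.PROHIBITED, RiskTier.HIGH_RISK)]

# Hypothetical inventory entries for illustration only
inventory = [
    AISystem("resume-ranker", "CV pre-screening in recruiting", RiskTier.HIGH_RISK),
    AISystem("office-chatbot", "internal FAQ assistant", RiskTier.MINIMAL),
    AISystem("mood-monitor", "emotion recognition of employees", RiskTier.PROHIBITED),
]

for system in systems_requiring_action(inventory):
    print(f"{system.name}: review required ({system.tier.value})")
```

Even a lightweight registry like this makes it possible to spot prohibited applications early and to prioritize high-risk systems for the documentation duties that follow in later implementation phases.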
The establishment of internal compliance structures is an essential part of the corporate strategy for ensuring that all legal requirements of the AI Act are met. This includes implementing a structured and systematic approach to monitoring and controlling AI applications. Companies should appoint an internal compliance department or designated persons to carry out the regular review and evaluation of the AI systems in use.
This includes the introduction of risk analysis mechanisms to identify and prevent potential violations at an early stage. Companies should also set up internal reporting channels for ethical or regulatory concerns so that employees can report potential problems anonymously or directly to a compliance office.
Mandatory AI training for employees
Another important aspect of internal guidelines is the implementation of clear responsibilities and processes for dealing with AI applications. This can include the appointment of an AI officer who ensures that the use of AI complies with regulatory requirements.
In addition, companies should conduct regular internal training courses for employees, as the teaching of AI skills will also be mandatory from February 2025.
Specifically, Art. 4 of the AI Regulation states: "Providers and operators of AI systems shall take measures to ensure, to the best of their ability, that their personnel and other persons involved in the operation and use of AI systems on their behalf have a sufficient level of AI competence, taking into account their technical knowledge, experience, education and training and the context in which the AI systems are intended to be used, as well as the persons or groups of persons with whom the AI systems are intended to be used."
The traceability and explainability of AI decisions should therefore also be guaranteed, so that those affected can understand how and why a particular decision was made.
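One simple way to support the traceability described above is to write a structured audit record for each AI-assisted decision, capturing the inputs, the outcome and a human-readable rationale. The following sketch is illustrative: the field names and the loan example are assumptions, not formats prescribed by the Regulation.

```python
import json
from datetime import datetime, timezone

def log_decision(system_name: str, inputs: dict, outcome: str, rationale: str) -> str:
    """Create a JSON audit record for one AI-assisted decision.

    Field names here are illustrative choices, not requirements
    of the AI Regulation.
    """
    record = {
        "system": system_name,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,
        "outcome": outcome,
        "rationale": rationale,
    }
    return json.dumps(record, ensure_ascii=False)

# Example: a hypothetical loan pre-check decision
entry = log_decision(
    "loan-precheck",
    {"income": 42000, "requested_amount": 15000},
    "approved",
    "income exceeds threshold for requested amount",
)
print(entry)
```

Records like this can later be handed to a compliance office or to affected persons to reconstruct how and why a given decision was reached.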
Reading tip: Generative AI - EDPS publishes guidance for use
Concrete measures for implementing the AI Regulation
The AI Regulation represents an important regulatory framework for the use of artificial intelligence in the European Union. Small and medium-sized enterprises (SMEs) in particular could face challenges, as they often do not have the same resources for compliance measures as large corporations. In order to remain competitive, SMEs should therefore rely on cost-efficient solutions such as standardized compliance tools or cooperation with external consultants at an early stage.
Companies are well advised to prepare for upcoming obligations at an early stage in order to minimize risks and meet legal requirements in good time. An early compliance strategy can not only minimize legal risks, but also secure competitive advantages and strengthen the trust of customers and partners. Concrete measures include:
- Employee training: Regular training on new regulatory requirements and ethical guidelines.
- Internal audits and controls: Implementation of control mechanisms to ensure compliance.
- Transparency and documentation: Providing evidence of the AI systems used, their decision-making processes and the safety measures taken.
- Cooperation with external experts: Involvement of legal advisors and data protection experts to minimize regulatory risks.
- Regular updating of the AI strategy: Adaptation to changing regulatory requirements and technological developments.
Do you want to make your company fit for the AI regulation? We'll tackle it together! Our experts will be happy to advise you. Give us a call or write to us:
Phone: +1 (954) 852-1633
Mail: info@2b-advice.com
Source: Regulation 2024/1689 laying down harmonized rules on artificial intelligence