The European AI Regulation (AI Act) entered into force on August 1, 2024; its first obligations, including the AI literacy requirement, have applied since February 2, 2025. The aim of the regulation is to ensure the informed and responsible use of AI and to minimize potential risks. Article 4 of the AI Regulation obliges providers and operators of AI systems to ensure that their staff and contracted third parties have sufficient AI competence. This article explains what that means in concrete terms and which requirements companies must meet.
Definition of AI competence
According to Article 3 No. 56 of the AI Regulation, AI competence comprises the skills, knowledge and understanding required to use AI systems in an informed and responsible manner. This covers technical as well as legal and ethical aspects. Everyday tools such as the translation service DeepL or the chatbot ChatGPT also count as AI systems.
At a technical level, AI competence means that employees must have a basic understanding of how AI algorithms work, how data is processed and how models are developed. They must be able to interpret models, validate their results and identify possible sources of error. In addition, they should be familiar with common AI safety methods and how to avoid bias in the training data.
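One way to make "identifying bias" concrete: compare positive-prediction rates across demographic groups. The following sketch is purely illustrative; the function names and data are assumptions for the example, not part of any mandated method:

```python
# Minimal sketch: demographic parity difference, one common bias indicator
# for a binary classifier. All names and data here are illustrative.

def selection_rate(predictions, group, value):
    """Share of positive predictions within one demographic group."""
    in_group = [p for p, g in zip(predictions, group) if g == value]
    return sum(in_group) / len(in_group) if in_group else 0.0

def demographic_parity_difference(predictions, group):
    """Largest gap in positive-prediction rates across groups.
    0.0 means equal rates; larger values signal potential bias."""
    rates = [selection_rate(predictions, group, v) for v in set(group)]
    return max(rates) - min(rates)

# Example: group "a" is approved 3 times out of 4, group "b" only once.
preds  = [1, 1, 1, 0, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A gap of 0.5 between groups would be a clear signal to review the training data and the model before deployment.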
From a legal perspective, AI competence includes knowledge of regulatory requirements, in particular compliance with the European AI Regulation and data protection law such as the General Data Protection Regulation. Employees should be aware of the liability risks that can arise from incorrect or discriminatory AI decisions.
From an ethical perspective, AI competence includes the ability to critically reflect on the social impact of AI systems. A key ethical issue is potential discrimination through AI-supported decision-making processes, for example in the area of automated applicant selection or lending. Companies must ensure that their AI systems act fairly and transparently in order to avoid discrimination and maintain the trust of users. This also includes implementing mechanisms that make AI decisions comprehensible and explainable.
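To illustrate what "comprehensible and explainable" can mean in practice: for a simple linear scoring model, a decision can be explained by listing each feature's contribution to the final score. This is a minimal sketch with invented weights and feature names, not a complete explainability solution:

```python
# Minimal sketch: explaining a linear scoring model's decision by breaking
# the score into per-feature contributions (weight * value).
# Weights and feature names are invented for illustration.

def explain_decision(weights, applicant):
    """Return the total score and contributions sorted by impact."""
    contributions = {f: w * applicant.get(f, 0.0) for f, w in weights.items()}
    score = sum(contributions.values())
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return score, ranked

weights = {"income": 0.4, "debt": -0.6, "years_employed": 0.2}
applicant = {"income": 2.0, "debt": 1.5, "years_employed": 3.0}

score, ranked = explain_decision(weights, applicant)
print(round(score, 2))  # 0.5
print(ranked[0][0])     # "debt" has the largest absolute contribution
```

An affected person could then be told not only the outcome, but also which factors drove it, which is exactly the kind of traceability the text calls for.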
In summary, AI competence is an interdisciplinary skill that combines technical, legal and ethical knowledge. Companies should ensure that their employees are trained regularly and follow current developments in AI technology and regulation in order to act responsibly and in a forward-looking manner.
Specific requirements for companies
Companies must take various measures to ensure the AI competence of their staff:
- Training programs
Companies should develop and implement targeted training courses that are tailored to the specific tasks and level of knowledge of their employees. This training should include comprehensive technical fundamentals, legal frameworks and ethical considerations when dealing with AI. In addition, regular further training measures should be implemented to ensure that employees are familiar with the latest developments and regulatory changes.
- Internal guidelines and standards
It is necessary to create internal company guidelines and standards for the use of AI systems. These should contain binding rules for the development, implementation and use of AI. Particular attention should be paid to transparency, traceability and ethical responsibility. This also includes the definition of clear responsibilities for AI-supported decisions and regular internal audits to check compliance with these standards.
- Documentation and traceability
Careful documentation of the measures taken is essential. Especially in the event of audits or liability disputes, companies must be able to prove that they meet the legal requirements. This includes detailed records of training courses carried out, compliance measures and technical tests of the AI models used. Complete, gap-free documentation not only serves to meet regulatory requirements, but also strengthens internal traceability and control of AI applications.
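Such records are easiest to audit when they are kept in a structured, machine-readable form. The following sketch shows one possible way to log training measures; the field names are assumptions for the example, not requirements of the AI Regulation:

```python
# Minimal sketch: recording AI training measures as structured, auditable
# log entries. Field names are illustrative, not mandated by the AI Act.
import json
from dataclasses import dataclass, asdict

@dataclass
class TrainingRecord:
    employee: str
    topic: str
    completed_on: str  # ISO date, e.g. "2025-03-01"
    trainer: str

records = [
    TrainingRecord("J. Doe", "AI Act fundamentals", "2025-03-01", "Internal"),
    TrainingRecord("A. Smith", "Bias and fairness basics", "2025-03-08", "External"),
]

# One JSON object per line is simple to archive and to search during an audit.
audit_log = "\n".join(json.dumps(asdict(r)) for r in records)
print(audit_log)
```

From such a log, a company can show at any time who was trained on what and when, which is precisely the kind of evidence auditors ask for.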
- Implementation of control mechanisms
Companies should introduce control mechanisms to ensure that AI systems are operated in accordance with legal and ethical requirements. This includes regular reviews of AI models for bias, functionality and safety. External certifications and audits can also help to ensure compliance with regulations and strengthen confidence in AI-supported processes.
- Interdisciplinary collaboration and expert involvement
As AI systems touch on a variety of specialist areas, companies should form interdisciplinary teams of technical experts, lawyers and ethics advisors. These teams should work together regularly to evaluate new developments, further develop internal standards and identify compliance risks at an early stage.
Legal consequences of non-compliance
Although Article 4 of the AI Regulation formulates a general obligation, failure to implement these measures can be considered a breach of the duty of care. In liability cases, courts could examine whether the company has implemented appropriate training and qualification measures. Missing or inadequate training could lead to legal disadvantages, especially when it comes to avoiding liability risks and compliance violations.
In addition to civil law consequences, there may also be regulatory sanctions. The competent authorities have the right to carry out inspections and impose fines in the event of violations of the AI Regulation. The amount of the fines can be considerable and depends on the severity of the breach and the size of the company.
The legal consequences also play a significant role in the area of data protection. If AI systems are used without employees being sufficiently qualified, this can lead to violations of the GDPR, especially if personal data is processed unlawfully or discriminatory decisions are made.
Another risk is liability towards injured third parties. If, for example, an incorrect or inadequately monitored AI decision leads to economic damage or discrimination, those affected can assert claims for damages against the company responsible. In such a case, inadequate training or a lack of competence can be considered negligent behavior.
AI competence as a competitive advantage
Article 4 of the AI Regulation presents companies with the challenge of systematically promoting the skills of their employees in dealing with AI. This requires not only technical training, but also a profound change in corporate culture. A responsible and informed approach to AI technologies must be anchored as a strategic priority in order to realize long-term benefits.
The implementation of comprehensive education and training programs enables companies to adapt to technological developments and ensure compliance with regulatory requirements. In addition, a well thought-out AI strategy not only promotes a company's efficiency and innovative strength, but also strengthens the trust of customers, investors and supervisory authorities.
Companies that act proactively and take measures to build competence at an early stage can position themselves as pioneers in an increasingly digitalized and regulated economy. By integrating ethical principles and legal standards into their business processes, they create the basis for sustainable success and long-term competitiveness.
Source: Regulation (EU) 2024/1689 laying down harmonized rules on artificial intelligence (AI Act)
Do you want to make your company fit for the AI Regulation? Let's tackle it together! Our experts will be happy to advise you. Give us a call or write to us:
Phone: +1 (954) 852-1633
Mail: info@2b-advice.com