On February 2, 2025, the first provisions of the European AI Regulation (EU Artificial Intelligence Act, "KI-VO") became applicable. This regulation creates binding EU-wide rules for the use of artificial intelligence for the first time. One of the first applicable obligations concerns AI competence under Article 4 of the AI Regulation: companies and public-sector bodies must ensure that all persons who develop, operate or use AI systems have sufficient knowledge and skills in dealing with AI. The EU Commission has now published a FAQ on Article 4 of the AI Regulation, which is intended to clarify some ambiguities.
Legal basis: Article 4 of the AI Regulation and definitions
Article 4 of the AI Regulation ("AI competence") obliges providers and operators of AI systems to "use their best endeavours to ensure that their personnel and other persons involved in the operation and use of AI systems on their behalf have a sufficient level of AI competence". Simply put, any company or government agency that develops or uses AI systems must ensure that its own employees and other involved persons (such as external service providers) have sufficient knowledge of how to use these AI systems. This includes everyone involved along the AI value chain, regardless of the size of the organization. In addition to AI providers (manufacturers/developers), AI operators (users of AI systems in a professional context) are also covered. As a result, almost all companies and public bodies that use AI technology fall within its scope. Even retailers or importers can be affected if they use AI systems in practice or integrate them into their operations.
Who falls under "other persons handling AI systems on their behalf"?
Article 4 of the AI Regulation expressly refers not only to the company's own staff, but also to persons who act "on behalf of" the provider or operator of AI systems. This refers, for example, to external contractors, service providers or customers who carry out activities with AI systems for the organization. Although these groups of people are not in an employment relationship, they act under the organizational responsibility of the AI user. In practical terms, this means that if a company outsources the operation of an AI system to an IT service provider, for example, it must ensure that the employees there are also appropriately qualified (if necessary, secured contractually).
Definition of "sufficient AI competence"
The AI Regulation provides a legal definition of the term "AI competence" ("AI literacy") in Article 3 No. 56. Accordingly, AI competence comprises all "skills, knowledge and understanding that enable providers, operators and data subjects to deploy and use AI systems in an informed manner, taking into account their rights and obligations in the context of this Regulation, and to gain an awareness of the opportunities and risks of AI and potential harm". Specifically, the following aspects can be derived from this:
- Professional skills and knowledge: Knowledge of what AI is, how AI systems work (e.g. basics of machine learning) and which AI systems are used in one's own organization. This also includes an understanding of the technological functionality and the concepts behind AI - such as the data basis, training methods and model limits - in order to make informed decisions about its use and handling.
- Awareness of opportunities and risks: Understanding the possibilities that AI offers in the specific field of application, but also the risks and potential damage that AI systems can cause. For example, trained personnel should recognize the risk of wrong decisions, discriminatory tendencies or "hallucinations" of generative AI (invented outputs). This risk awareness enables them to take appropriate protective measures and countermeasures.
- Technical, legal and ethical knowledge: AI competence goes beyond purely technical training and also includes basic knowledge of the legal framework. This covers the KI-VO itself, but also relevant regulations such as the GDPR, as well as ethical principles in the AI context. For example, employees should understand the transparency and oversight obligations of the AI Regulation (Articles 13, 14), but also ethical guidelines (e.g. on dealing with bias or human-centered AI).
- Practical application expertise: Finally, AI competence requires the ability to put the acquired knowledge into practice. Employees should be able to operate the AI systems competently on a day-to-day basis, classify results correctly and react appropriately when problems arise. This includes, for example, critically examining AI output instead of accepting it without reflection, preparing input data in a quality-assured manner and carrying out human checks ("human-in-the-loop") where necessary.
FAQ of the EU Commission on Article 4 of the AI Regulation
In a FAQ, the EU Commission has clarified that Article 4 of the AI Regulation imposes no rigid specifications on training content or formats. Rather, a certain degree of flexibility is intended, as AI technologies undergo rapid change and cover very different fields of application. However, the Commission mentions some minimum aspects that an AI training initiative should cover in order to meet the requirements of Article 4:
- Basic understanding of AI within the organization - "What is AI? How does it work? Which AI systems are used in our company? What are the opportunities and risks?". All relevant employees should be given a common foundation of basic AI knowledge, including knowledge of the AI applications used in the company and their possible effects.
- Consider the role of your own organization - Are you an AI provider (i.e. do you develop AI systems yourself) or an AI operator (do you use external AI systems)? The content of the training depends on this. An AI developer needs in-depth knowledge of model development and data, while a pure AI user should focus on the correct use and limitations of the tool.
- Risk analysis of the AI systems used - "What risks do our AI systems pose? What do employees need to know in order to recognize and mitigate these risks?" Depending on the risk level of the AI (normal or high-risk AI in accordance with the AI Regulation), the training must address possible risks of error, bias, security gaps or liability issues more intensively. Staff who work with a high-risk AI system (e.g. an AI for personnel selection or medical diagnostics) need more detailed training than those who use a low-risk assistance system.
Customized qualification measures
Based on this analysis, tailor-made training measures should be designed. According to the Commission, "differences in technical knowledge, experience, education and training of employees and other persons" must be taken into account, as well as "the context in which the AI systems are used and the people on whom they are used".
In practical terms, this means that the existing know-how of the individual target groups must be ascertained (e.g. previous experience with AI, specialist background) so as not to over- or under-challenge anyone. At the same time, the context of use must be taken into account: for example, the industry and function in which the AI is used, as well as the affected customer group or population.
For example, an IT expert requires different (more in-depth) training content than a clerk who only uses AI software. Similarly, AI systems in the medical field are subject to different requirements (e.g. regarding patient safety) than those in marketing.
Risk-based approach in Article 4 of the AI Regulation
The obligation under Art. 4 of the AI Regulation is deliberately designed to be scalable. Organizations must adapt their AI competence program to their own risk profile. If a company only uses low-risk AI tools (e.g. a simple AI text assistant in marketing), manageable training measures are sufficient to ensure proper use. However, if a company has high-risk AI systems in use (see the definition in Art. 6 and Annex III AI Regulation, e.g. AI systems for applicant selection or in personnel management), "additional measures are likely to be relevant to ensure that employees know how to handle the system in question and how to avoid or mitigate the risks". In such cases, Art. 26 of the AI Regulation already expressly requires operators to ensure that their staff are sufficiently trained to operate the high-risk system correctly and exercise human oversight. Simply handing the manual over to employees is by no means sufficient.
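This risk-based scaling can be pictured as a simple mapping from an AI system's risk tier to the depth of training required. The following is a hypothetical sketch only; the tier names and training modules are illustrative assumptions, not categories prescribed by the AI Regulation.

```python
from dataclasses import dataclass

# Illustrative mapping: the higher the risk tier of an AI system,
# the more training modules staff must complete before operating it.
# Tier and module names are assumptions for this sketch, not legal terms.
TRAINING_MODULES = {
    "minimal": ["AI basics", "acceptable-use policy"],
    "limited": ["AI basics", "acceptable-use policy", "output verification"],
    "high": ["AI basics", "acceptable-use policy", "output verification",
             "system-specific operation", "human-oversight duties (Art. 26)"],
}

@dataclass
class AISystem:
    name: str
    risk_tier: str  # simplified: "minimal", "limited" or "high"

def required_training(system: AISystem) -> list[str]:
    """Return the training modules staff need before using `system`."""
    return TRAINING_MODULES[system.risk_tier]

# A high-risk example such as applicant screening triggers the full set.
hr_screening = AISystem("CV screening tool", "high")
print(required_training(hr_screening))
```

The point of the sketch is the asymmetry: a marketing text assistant ("minimal") needs only a baseline, while a high-risk system pulls in system-specific operation and human-oversight content.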
Format and methods of AI training in companies
Article 4 of the AI Regulation does not prescribe a specific format, such as face-to-face seminars versus e-learning, and a rigid "minimum number of hours" is also sought in vain. The Commission emphasizes that there is "no one-size-fits-all solution" and that the AI Office will not impose a specific type of training. Companies can therefore choose from a variety of approaches: traditional seminars, workshops, webinars, practical on-the-job training, tutorials, internal knowledge bases or regular coaching sessions.
In many cases, a mix of measures can be useful in order to appeal to different types of learners and ensure continuous professional development. It is important that the chosen formats are effective. The Commission notes that a purely passive approach ("having employees read material") is usually not sufficient and that active training and guidance are required. Ultimately, the decisive factor is that the learning objectives are achieved: the acquisition of the skills described above. The concrete implementation should be oriented toward the initial situation and the needs of the target groups.
Article 4 AI Regulation: Examples and best practices
To support organizations in designing their AI training programs, the EU Commission has launched a "living repository" of AI literacy practices. This freely accessible online database contains practical examples and initiatives from a wide range of providers and operators - from different sectors, sizes and countries - showing how they approach AI training. The examples illustrate, among other things, how training concepts are designed for different groups of employees (from developers to administrative staff) and how external partners (e.g. suppliers, customers) can be included. Although merely copying these examples does not automatically guarantee legal conformity, the collection is intended to serve as a guide and to stimulate learning and exchange. The Commission maintains the database on an ongoing basis and reviews submitted practical examples for transparency and reliability before they are publicly listed.
In addition, the AI Office offers webinars, workshops and community events as part of the "AI Pact", where stakeholders can exchange experiences and obtain expert knowledge. Overall, the topic of AI competence is very dynamic; further guides and awareness measures have been announced or are under development, including a dedicated website on AI skills and specialists. Organizations would do well to keep an eye on these developments and benefit from new insights and resources.
Reading tip: AI regulation - this will apply to companies from February 2025
Documentation and organizational embedding
Although Article 4 does not contain an explicit documentation obligation for training measures, it is advisable from a compliance point of view to record the training carried out internally in writing. The EU Commission confirms that no formal certificate is required and that it is up to the organizations to provide evidence. In practice, it is recommended to keep participant lists, certificates or internal training records on file. In the event of an official inspection or even a legal dispute at a later date, the company will then be able to document the measures it has taken to comply with Article 4.
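Since no particular format or certificate is prescribed, even a simple structured log can serve as written evidence. The following is a minimal sketch under that assumption; the field names and CSV format are illustrative choices, not requirements of the Regulation.

```python
import csv
import io
from datetime import date

# Illustrative record of completed AI trainings, kept as evidence for a
# possible supervisory inspection. Field names are assumptions for this
# sketch; Art. 4 AI Regulation prescribes no particular format.
FIELDS = ["employee", "role", "training", "date", "trainer"]

def record_training(rows: list[dict]) -> str:
    """Serialize training records to CSV for the compliance file."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

log = record_training([{
    "employee": "J. Doe",
    "role": "HR clerk",
    "training": "High-risk AI: applicant screening",
    "date": date(2025, 3, 12).isoformat(),
    "trainer": "External provider",
}])
print(log)
```

Whatever the format, the substance matters: who was trained, on which system, when, and by whom, so the measures taken under Article 4 can be demonstrated later.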
In contrast to data protection law, AI law so far contains no obligation to appoint an AI officer or similar role (in other words, an "AI Officer" is not required by law). The EU Commission has clarified that Art. 4 does not prescribe any special governance structure. Companies can decide for themselves how they anchor responsibility internally. In practice, however, many larger organizations are considering appointing an AI manager or establishing an AI competence team to coordinate the new requirements. This can be helpful to centrally manage the training program, ensure regular updates and serve as a point of contact for employee questions. It is conceivable, for example, to assign this role to the compliance or data protection officer or to create a new position such as an "AI Officer"; some private certifications are already offered for this purpose. Unlike the appointment of a data protection officer, however, which is legally binding under the GDPR, this is not mandatory.
Enforcement and sanctions
The obligation under Art. 4 AI Regulation has been binding since February 2, 2025. Companies and authorities therefore cannot claim that the regulation has not yet been implemented: de jure, the training obligation is in force. However, the legislator has provided certain transitional periods for regulatory enforcement. The monitoring and sanctioning of violations is the responsibility of the national market surveillance authorities, which must be designated by each EU member state by August 2, 2025. These authorities (in Germany, the Federal Network Agency is envisaged) are then expected to begin active control and enforcement from August 3, 2026. Companies must therefore already provide training now, but government controls or fines are not expected until August 2026, when the authorities begin their work.
This staggered entry into force raises the question of whether a company that remains inactive until 2025/26 need fear no sanctions. Caution is required here: it is true that, before the national sanctioning provisions take effect, no official penalties can in fact be imposed on companies, but such a wait-and-see approach would be risky. The EU Commission emphasizes that the obligations - in particular the bans on certain AI practices - apply from 2025, and that coordinated application of the rules is ensured by the AI Board (the body of supervisory authorities). Companies that only become active when the supervisory authority is at the door would by then already have committed a breach of duty.
Possible sanctions for violations
Chapter 12 of the AI Regulation itself contains general provisions on sanctions (Art. 99 et seq.). However, it leaves it to the Member States to define specific offenses and fines for infringements. Unlike, for example, breaches of certain high-risk obligations (for which the EU legislator specifies maximum fines of up to 3% of annual worldwide turnover), Article 4 of the AI Regulation is not explicitly mentioned in the EU catalog of fines. Whether and to what extent a fine can be imposed for inadequate AI training therefore depends on national implementation. To date (as of May 2025), Germany has no specific law sanctioning non-compliance with Article 4. It is to be expected that corresponding provisions will be created in the planned German AI Implementation Act, possibly as part of general supervisory measures or reporting requirements.
Irrespective of formal fines, any sanction must be proportionate. In individual cases, the supervisory authorities will take into account factors such as "the nature and gravity of the infringement and its intentional or negligent character". For example, it will make a difference whether a company has taken no precautions at all or whether it has implemented training that was perhaps incomplete. The Commission indicates that a sanction is particularly likely "if an incident is demonstrably due to a lack of appropriate training or guidance". Example: if the incorrect operation of an AI machine by untrained personnel leads to a reportable safety incident, the authorities are likely to intervene. If there is no concrete damage, a request for rectification is more likely than draconian penalties.
Civil and criminal consequences of infringements of the KI-VO
In addition to official sanctions, private enforcement can play a role. The AI Regulation itself does not create a new basis for civil liability, let alone a claim for damages. However, the general civil liability rules may apply: if a third party is harmed due to inadequate employee qualification, this may constitute a breach of safety duties or general duties of care. For example, a patient could sue if a hospital uses AI diagnostic software without sufficient training for the medical team, resulting in damage from a misdiagnosis. Or an employee could invoke employer liability if, due to a lack of AI training, they make a mistake that causes financial damage. In such cases, it could be argued that the company breached its obligation under Article 4 of the AI Regulation and therefore also its duty of care under civil law, which could give rise to claims for damages. Companies should take this liability risk seriously.
Under criminal law, there are currently no explicit provisions on consequences. Art. 4 of the AI Regulation does not establish a criminal offense, and the Regulation does not create a direct basis for employer liability under criminal law. Theoretically, however, particularly gross violations (such as a complete lack of safety precautions for high-risk AI with subsequent personal injury) could fall under existing national criminal offenses (e.g. negligent bodily injury due to organizational fault). However, these would be exceptional cases. The focus is clearly on preventive and administrative enforcement.
Source: AI Literacy - Questions & Answers (European Commission)
Conclusion: EU FAQ on AI competence leaves many questions unanswered
Article 4 of the AI Regulation places the human factor at the center of AI regulation. In addition to technical requirements for AI systems, the legislator deliberately relies on the education and empowerment of those who work with AI. This is a preventive approach: well-trained users are more likely to operate AI systems correctly, know their limits and recognize risks at an early stage. In this way, incidents and rule violations can be avoided before they occur.
The FAQ and guidelines published by the EU so far show that the focus is on cooperation and guidance rather than pure sanctions. The recently published FAQ on AI competence is accordingly not intended to be exhaustive - quite the opposite. The coming months and years will bring further clarifications and best practices. It is already apparent that the topic of AI competence will continue to evolve, which suits its dynamic nature.
Act now: Making your organization fit for the future of AI
The requirements of Article 4 of the AI Regulation are not a distant prospect - they already apply. Companies and authorities are now called upon to systematically strengthen their employees' skills in dealing with artificial intelligence.
Take advantage of this opportunity:
Check your current training status
Develop a customized training concept
Secure your compliance through targeted qualification
Do you need support in developing and implementing an AI competence program?
Contact our experts for Data protection, Compliance & AI strategy: We advise you individually and support you in the legally compliant and practical implementation of the KI-VO in your company.