Marcus Belke
CEO of 2B Advice GmbH, driving innovation in privacy compliance and risk management and leading the development of Ailance, the next-generation compliance platform.
On November 19, 2025, the EU Commission adopted the "Digital Omnibus" package. The aim is to simplify and harmonize the EU's digital regulatory framework. Among other things, existing regulations on artificial intelligence (AI), cybersecurity, and data use are to be streamlined so that companies face less bureaucracy and have more room for innovation. We present the most important changes.
Extended implementation deadlines for high-risk systems
The AI Regulation takes a risk-based approach and defines high-risk AI systems that are used, for example, in safety-critical areas, education, human resources, or medical devices. The omnibus proposal provides that the obligations for such high-risk applications will take effect later than originally planned. The start of the obligations for AI systems in accordance with Annex III of the AI Regulation (e.g., in the areas of critical infrastructure, education, or human resources management) is to be postponed by almost 1.5 years: from August 2, 2026, to December 2, 2027. For high-risk AI systems under Annex I (AI integrated into already regulated products such as machinery, medical devices, or motor vehicles), the application will be postponed by one year—from August 2, 2027, to August 2, 2028.
These extensions are intended to give companies more time for implementation. However, earlier dates are also possible: as soon as suitable support instruments (standards, guidelines) are available and the Commission so decides, the high-risk requirements will take effect earlier, after a transition period of six months for Annex III systems and twelve months for Annex I systems. In any case, fixed end dates apply: all high-risk obligations must be complied with no later than December 2, 2027 (Annex III) and August 2, 2028 (Annex I), respectively.
Practical relevance: Legal professionals and compliance teams should adjust their implementation roadmaps for AI projects accordingly. The extended deadlines, agreed at the urging of several member states (particularly Germany and France), are intended to give European providers some breathing room in the AI race and in ongoing standardization work.
At the same time, planning uncertainty arises, as early application is possible as soon as technical standards are available. Companies must therefore closely monitor the regulation and react flexibly if the Commission activates the high-risk rules earlier. Overall, the postponement alleviates the short-term pressure to implement without compromising the level of protection, since all requirements must be met by 2028 at the latest.
Harmonization of risk classification
The omnibus bill also aims to better align the classification of AI risks with data protection principles. In particular, a consistent line is to be drawn when dealing with training data sets, data quality requirements, bias controls, and the traceability of AI systems. The guiding principle here is that developers and operators of AI can orient themselves to a consistent set of standards instead of having to comply with potentially contradictory requirements from AI law and data protection law in parallel. In practical terms, consideration is also being given to relieving AI components in already strictly regulated sectors from double regulation. For example, AI applications that are part of a medical device or machine could be exempted from certain high-risk obligations or classified differently. This would recognize that these areas already have their own supervisory and safety regimes.
Practical relevance: For consulting practice, this means that industry-specific compliance requirements (e.g., from medical device law or the Machinery Directive) may in future partially overlap with, or even fulfill, the obligations under the AI Regulation, which would reduce duplicated work. On the other hand, the importance of data classification is growing: companies should clearly distinguish whether they are working with anonymous, pseudonymized, or personal data, because this determines which rules apply. A more precise distinction between anonymous and personal data is considered crucial, and robust pseudonymization or anonymization can significantly reduce legal risks and obligations. Data protection officers will therefore be even more closely involved in AI projects to ensure that data protection principles such as data minimization, purpose limitation, and data quality are incorporated into the risk assessment and classification of AI systems from the outset.
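To illustrate what robust pseudonymization can look like in practice, here is a minimal sketch using keyed hashing. The record fields and key handling are hypothetical: a real deployment would draw the key from a managed secret store and keep it strictly separate from the pseudonymized data set.

```python
import hmac
import hashlib

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    Unlike plain hashing, the secret key prevents re-identification by
    dictionary attack as long as the key is stored separately from the
    pseudonymized data set.
    """
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical training record: the direct identifier is replaced
# before the record enters an AI training data set.
key = b"example-key-store-in-a-vault-not-in-code"
record = {"employee_id": "E-1042", "department": "HR", "tenure_years": 6}
record["employee_id"] = pseudonymize(record["employee_id"], key)
```

Note that keyed pseudonymization of this kind keeps the data personal in the legal sense as long as the key exists; true anonymization requires that re-identification is no longer reasonably possible.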
Labeling AI-generated content
A key transparency requirement of the AI Regulation is to label synthetic or AI-generated content as such. This can be done, for example, by means of warnings or watermarks on AI-generated images, videos, or texts. This obligation (Art. 50(2) AI Regulation) remains in place in principle. However, the omnibus proposal grants a transition period. According to this, market surveillance authorities may not impose sanctions for lack of labeling until February 2, 2027. This effectively creates a grace period for providers of generative AI. This also applies to operators of general-purpose AI (LLMs) that generate text, images, or audio content, for example. Companies thus have more time to implement technical labeling solutions, such as digital watermarks or metadata tags that indicate in a machine-readable format that content has been artificially generated or manipulated.
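The machine-readable marking mentioned above could, for example, take the form of a small provenance manifest attached to generated content. The sketch below is a simplified illustration only: the schema and field names are assumptions for this example, not a format prescribed by the AI Regulation.

```python
import json
import hashlib
from datetime import datetime, timezone

def build_ai_content_label(content: bytes, generator: str) -> str:
    """Produce a machine-readable sidecar declaring content as AI-generated.

    The field names are illustrative: Art. 50(2) AI Regulation requires
    machine-readable marking but does not prescribe this exact schema.
    """
    manifest = {
        "ai_generated": True,
        "generator": generator,
        "sha256": hashlib.sha256(content).hexdigest(),  # binds label to content
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(manifest, indent=2)

label = build_ai_content_label(b"example synthetic image bytes", "example-model-v1")
```

In practice, such a manifest would typically be embedded in the file's metadata or signed (as in content-credential approaches) rather than shipped as a plain sidecar, so that the label travels with the content.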
Practical relevance: Providers and users of generative AI (e.g., companies that use AI tools for content creation) gain a reprieve through the deadline extension to comply with their transparency obligations. Compliance officers should use this time to implement and test suitable labeling procedures before fines can be imposed from 2027 onwards. For lawyers, the postponement reduces the short-term liability risk associated with AI content. Nevertheless, it is advisable to address transparency proactively in order to gain the trust of users and business partners.
Elimination of internal training requirements
Until now, Article 4 of the AI Regulation has stipulated that providers and users of AI systems must ensure that their employees have sufficient AI skills. This training requirement is now to be completely abolished. Accordingly, Article 4 of the AI Regulation, which has been in force since the regulation was published, would be deleted. Instead of the obligation, there will only be a request to member states to encourage companies to voluntarily train their employees in AI. In other words, the law no longer prescribes internal AI training programs, but leaves it at non-binding recommendations.
Practical relevance: The removal of this obligation provides immediate relief for companies: there is no longer any risk of legal action for failing to provide AI training to employees. Compliance departments can reduce the effort spent on mandatory training and shift resources to other areas of compliance. Nevertheless, caution is advised from a practical perspective. Even without a legal obligation, supervisory authorities implicitly expect key personnel to understand the risks and operating requirements of AI systems. Companies should therefore continue to invest in the further training of their employees, not as a strict compliance requirement but as part of good governance to prevent the misuse of AI.
Reading tip: AI competence and corporate obligations under Article 4 of the AI Regulation
Simplified documentation and quality management
The AI Regulation requires providers of high-risk AI to maintain comprehensive technical documentation, risk assessments, and a quality management system (QMS) to demonstrate compliance. The original act already provided certain relief for small and medium-sized enterprises (SMEs). The Digital Omnibus now extends these exemptions to so-called "small mid-cap companies" (SMCs). These are companies with up to 749 employees and an annual turnover of no more than €150 million. In the future, therefore, upper mid-sized companies will also benefit from reduced documentation and monitoring requirements. In addition, it is planned that fines for such companies will be lower than for large companies in the event of violations (a kind of sliding scale of penalties depending on company size).
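Purely as an illustration of the thresholds cited above, the size classification could be sketched as follows. This is deliberately simplified: the legal definitions also consider balance-sheet totals and linked or partner enterprises, and the SME thresholds assumed here follow the common EU definition (fewer than 250 employees, turnover up to €50 million).

```python
def company_size_category(employees: int, turnover_eur_m: float) -> str:
    """Rough size classification using the thresholds cited in the text.

    Simplified sketch: the legal tests also look at balance-sheet totals
    and group structures, which this function ignores.
    """
    if employees < 250 and turnover_eur_m <= 50:
        return "SME"
    if employees <= 749 and turnover_eur_m <= 150:
        return "small mid-cap (SMC)"
    return "large enterprise"
```

A company near a threshold should document the underlying figures each year, since the classification may have to be verified in an audit.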
Practical relevance: This extension of SME privileges means a welcome reduction in compliance costs for many companies. Medium-sized tech companies in particular that have exceeded the SME threshold do not have to immediately comply with all comprehensive documentation requirements in full. For compliance teams, this means less bureaucratic effort in preparing technical documentation and setting up ISO-like quality processes. However, companies that make use of the exemptions should carefully check and document that they comply with the thresholds (number of employees, turnover/balance sheet total). This classification may need to be verified in the event of an audit. In addition, the core objective for privileged companies remains to provide a secure and compliant AI system. The simplifications do not change the responsibility to substantially comply with all high-risk requirements – albeit in a simplified form.
Clarifications regarding transparency and information (GDPR)
In parallel with the AI Regulation, the omnibus package adjusts some transparency requirements in data protection law, which indirectly affects AI projects, as many AI applications are based on personal data. For example, under the draft, controllers may in future reject manifestly unfounded or excessive requests from data subjects or charge a fee for handling them. The burden of proving that a request is an abusive mass request for information still lies with the controller, but this defense option responds to the practice of automated form letters, which are sometimes used as a means of pressure.
In addition, an exception to the duty to inform is planned for cases in which personal data were collected in a clearly defined relationship between the data subject and the controller, and it can be assumed that the person already possesses the essential information (e.g., in the employment context or in the case of existing customers who have already been informed).
Practical relevance: For data protection officers and legal departments, this means less work when processing requests from data subjects. In the future, companies will be better able to defend themselves against "information trolls" or repeated, irrelevant requests, saving valuable resources. This is particularly relevant in AI projects, which often process large amounts of data. The focus can be placed on substantive transparency instead of having to comply unconditionally with every formally possible request. However, it is important to clearly justify and document any rejection or charged fee in order to stand up to scrutiny by supervisory authorities in case of doubt. The exception to the duty to inform also prevents duplicate notices in situations where data subjects already know the purpose of the data processing, which may be relevant, for example, for small, manageable AI applications in employment relationships. Overall, these adjustments promote a more efficient form of transparency that continues to provide meaningful information to data subjects without overburdening companies with formalities.
Outlook: When are the new AI rules set to come into force?
The formal proposal from the EU Commission was presented on November 19, 2025. It will then go through the standard EU legislative process: the European Parliament and the Council of Member States will discuss the draft and finally adopt it. Further adjustments to the proposals are likely, as controversial discussions are already underway and data protectionists are warning against a watering down of data protection.
It is not currently possible to give a definite date for when the new rules will come into force. Optimistic scenarios assume that the reform could be finalized by the end of 2026. It could then come into force in 2027 at the earliest, depending on the progress of negotiations.
Ailance AI Governance is ready for AI compliance
Even if the Digital Omnibus postpones deadlines and relaxes certain documentation requirements, the core of the AI Regulation remains unchanged.
Companies need to know where AI is running, which data are processed, what risks exist, and how models are monitored. It is precisely this transparency that cannot be made up for at the last minute.
Ailance AI Governance helps to implement these requirements in a structured manner and without friction: a centralized AI inventory, automated risk assessments, integrated model maps, and audit-proof approvals, all in a single workflow that reduces the burden on teams and makes audits easier to plan.
We would be happy to show you how Ailance AI Governance helps companies manage AI securely and compliantly.
