Serious AI incidents: Planned EU guidelines on reporting obligations set new standards for AI compliance

Reporting obligations for serious AI incidents

Aristotelis Zervos

Aristotelis Zervos, Editorial Director at 2B Advice, combines legal and journalistic expertise in data protection, IT compliance and AI regulation.

On September 26, 2025, the European Commission published draft guidelines and a reporting form for "serious AI incidents" under the AI Regulation (EU) 2024/1689 for public consultation. The aim is to clarify the handling of serious incidents involving high-risk AI systems before the reporting obligations become mandatory from August 2026. The initiative supplements the AI Regulation with practical guidance for providers and operators.

Legal framework: Article 73 of the AI Regulation

According to Article 73, providers of high-risk AI systems are obliged to report serious incidents to the national market surveillance authorities without delay. A "serious AI incident" covers, in particular, events that cause or are likely to cause

  • the death of a person or serious harm to a person's health,
  • a significant disruption of critical infrastructure, or
  • a serious violation of fundamental rights.


Operators must also report in certain cases, for example if they have security-relevant information that the provider does not have.

The wording of Article 73 establishes the following for the event of a serious incident:

  1. A definition of a "serious incident" in relation to an AI system.
  2. An obligation for providers, and in certain cases operators or authorized representatives, to report such incidents.
  3. Time limits for reporting (within specified deadlines after the incident becomes known).
  4. Mandatory information to be provided in the report (e.g. system identification, description of the incident, effects, measures taken).
  5. Cooperation with market surveillance authorities and national bodies and, where appropriate, with the Commission and the AI Board.
  6. Special reporting obligations to the Commission for models with systemic risk (e.g. large generative AI models).


It is important to note that this reporting obligation applies in addition to other reporting and information duties (e.g. under data protection law or in the event of cybersecurity incidents).

The Commission emphasizes coordination with existing reporting obligations, for example under the NIS2 Directive, the Digital Operational Resilience Act (DORA) or the General Data Protection Regulation. The aim is to avoid double reporting and to create coherent procedures. At the same time, the Commission seeks alignment with international standards such as the OECD AI Incidents Monitor.
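To illustrate the coordination problem, consider a minimal sketch of how a company might route a single incident to the reporting regimes it may touch. All field names and trigger conditions below are simplified assumptions for illustration, not criteria taken from the Guidelines:

```python
# Sketch: routing one incident to the reporting regimes it may touch.
# Trigger conditions are deliberately simplified assumptions; the actual
# applicability tests under each regime are more involved.

def applicable_regimes(incident: dict) -> list[str]:
    """Return the reporting regimes an incident may fall under."""
    regimes = []
    if incident.get("involves_high_risk_ai_system"):
        regimes.append("AI Act Art. 73")   # serious AI incident report
    if incident.get("personal_data_breach"):
        regimes.append("GDPR Art. 33")     # notification to the supervisory authority
    if incident.get("significant_network_incident"):
        regimes.append("NIS2")             # significant-incident reporting
    if incident.get("ict_incident_at_financial_entity"):
        regimes.append("DORA")             # ICT-related incident reporting
    return regimes

# One incident can trigger several regimes; a coherent procedure prepares the
# facts once and files each required report, instead of running several
# disconnected processes.
print(applicable_regimes({"involves_high_risk_ai_system": True,
                          "personal_data_breach": True}))
```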

Content of the draft guidelines on reporting obligations

The draft contains:

  • Definitions of terms: Clarifications on "serious incident", "malfunction", "unexpected behavior" and "drop in accuracy".
  • Examples: Scenarios from practice - such as misclassifications, unforeseen system reactions or significant loss of accuracy.
  • Reporting process: Timing, responsibilities and communication channels between providers, operators, authorities and the Commission.
  • Form structure: Standardized reporting form with graduated mandatory information, which should also ensure international compatibility.


It remains to be seen how detailed the information must be and to what extent provisional notifications are permissible.

Link tip: Second phase of the AI Regulation launched - what applies from August 2025?

Open legal questions regarding "serious AI incident"

The draft raises a number of difficult legal and practical questions:

Definition of "serious AI incident"

A key area of tension arises in the precise definition of which incidents are reportable. Even formulations such as "significant drop in accuracy", "unexpected behavior" or "system failure" leave considerable room for discretion. Operators and providers must develop criteria to reliably apply these normative terms.

The Guidelines could introduce more concrete criteria, for example quantitative thresholds. However, this entails the risk that genuinely serious incidents go unreported because they fall below the threshold, or that numerous false reports are filed.
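For illustration, a provider could operationalize an internal threshold for a "significant drop in accuracy" roughly as in the following sketch. The numeric values are purely hypothetical assumptions, since neither the AI Act nor the draft Guidelines specify them, and crossing such a threshold would only trigger an internal review, not automatically a report:

```python
# Hypothetical internal check for a "significant drop in accuracy".
# Baseline and threshold values are assumptions, not regulatory figures.

BASELINE_ACCURACY = 0.95  # accuracy documented at conformity assessment (assumed)
MAX_ALLOWED_DROP = 0.05   # internal escalation threshold (assumed)

def is_significant_accuracy_drop(current_accuracy: float,
                                 baseline: float = BASELINE_ACCURACY) -> bool:
    """True if the observed drop exceeds the internal escalation threshold.

    A True result triggers an internal incident review; whether the event
    is reportable under Article 73 still requires a case-by-case assessment.
    """
    return (baseline - current_accuracy) > MAX_ALLOWED_DROP

# Example: production monitoring measures 0.88 accuracy on a reference set
if is_significant_accuracy_drop(0.88):
    print("Escalate to incident management for Article 73 assessment")
```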

Responsibility of providers vs. operators

The allocation of reporting obligations between providers, operators and other stakeholders is not trivial. Operators are often unable to gain sufficient technical insight to identify specific causes. Conversely, providers often do not have direct access to all real operating conditions.

The Guidelines must therefore clearly regulate in which cases the provider alone is obliged to report and in which cases the operator must be involved (e.g. as the party obliged to report or as the party obliged to provide information).

Reporting thresholds, reporting deadlines and completeness

The deadlines for reporting incidents play a central role. The draft does specify reporting deadlines, but it remains unclear how realistic it is to adhere to them for complex AI systems. In situations where the causes can only be identified after detailed analysis, there is a risk that the original report is incomplete or has to be corrected retrospectively.

The question also arises as to how complete the first notification must be. For example, does all information have to be available at the time of notification or can a preliminary notification be made that is supplemented later?
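One conceivable operational answer is a two-stage report: a preliminary notification filed within the deadline and supplemented once the root cause is known. The following sketch assumes this pattern; the field names are illustrative, not the fields of the official reporting form, and the deadline has to be set per incident category, since the AI Act provides graduated deadlines (e.g. 15 days in the standard case, with shorter periods for the gravest incidents):

```python
# Sketch of a two-stage incident report: preliminary now, final later.
# Field names are illustrative, not the fields of the official form.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class SeriousIncidentReport:
    system_id: str                    # identification of the high-risk AI system
    became_aware: datetime            # when the provider became aware of the incident
    description: str                  # what happened, effects, measures taken
    status: str = "preliminary"       # "preliminary" until the analysis is complete
    root_cause: Optional[str] = None  # often known only after detailed analysis

    def reporting_deadline(self, days: int = 15) -> datetime:
        """Latest date for the (preliminary) report; 'days' must be set per
        incident category, as the AI Act provides graduated deadlines."""
        return self.became_aware + timedelta(days=days)

    def finalize(self, root_cause: str) -> None:
        """Supplement the preliminary report once the cause is identified."""
        self.root_cause = root_cause
        self.status = "final"

report = SeriousIncidentReport(
    system_id="HR-AI-042",
    became_aware=datetime(2026, 9, 1),
    description="Unexpected misclassification affecting natural persons",
)
print(report.reporting_deadline())  # deadline for the preliminary report
report.finalize("Distribution shift in the input data")
```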

Interfaces to data protection and security law

Many AI incidents could simultaneously be classified as data breaches (e.g. incorrect classification of persons) or security incidents. There is a risk of contradictory reporting obligations and inconsistencies between responsibilities.

Recommendations for action in practice

1. Development of an incident management system

Companies should adapt their compliance structures (a configuration sketch follows this list):

  • Definition of internal reporting criteria,
  • Rules on responsibility and escalation paths,
  • Technical traceability and documentation,
  • Training of relevant employees.
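A minimal configuration sketch of such structures might look as follows. All category names, escalation roles and retention values are assumptions chosen for illustration; the draft Guidelines do not prescribe internal structures:

```python
# Illustrative incident-management configuration. All names and values are
# assumptions; the draft Guidelines do not prescribe internal structures.

INCIDENT_MANAGEMENT = {
    "reporting_criteria": {
        "death_or_serious_health_harm": "report",        # clearly reportable
        "critical_infrastructure_disruption": "report",
        "fundamental_rights_violation": "legal_review",  # case-by-case assessment
        "significant_accuracy_drop": "technical_review", # check internal thresholds
    },
    "escalation_path": ["ml_ops_on_call", "ai_compliance_officer", "legal"],
    "documentation": {
        "retain_logs_days": 365,  # assumed retention period
        "required_artifacts": ["model_version", "input_samples", "decision_log"],
    },
    "training": {
        "audience": ["developers", "operators", "support"],
        "refresh_interval_months": 12,
    },
}
```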

2. Participation in the consultation

The consultation runs until November 7, 2025. Providers, operators and industry associations should take the opportunity to provide practical feedback and help shape the final design of the Guidelines.

3. Integration into existing compliance systems

AI incident reporting structures should be linked to data protection and cyber compliance processes in order to create consistent procedures and synergies.

4. International coordination

For multinational providers, a centralized reporting system that takes national differences into account and maps international reporting formats is recommended.

The Guidelines for reporting serious AI incidents are an important step towards the practical implementation of the AI Act. They will have a decisive impact on the handling of AI risks.

In practice, the following applies: now is the right time to review internal processes, establish reporting channels and actively participate in the consultation. This is the only way to ensure that the final specifications remain practicable, legally compliant and technologically feasible.

Source: Targeted stakeholder consultation on the Commission's Draft Guidance on Article 73 AI Act - Incident Reporting (High-Risk AI Systems)

Practical solution: AI governance with Ailance

The new reporting obligations of the AI Regulation make one thing clear: companies need a structured, traceable and audit-proof AI governance system. With Ailance AI Governance, 2B Advice offers a tried-and-tested solution for implementing regulatory requirements. Ailance supports you with:

  • the risk classification and documentation of AI systems,
  • the recognition and assessment of serious incidents,
  • the integration of reporting obligations into existing compliance workflows,
  • the creation of compliant reports for authorities and supervisory bodies,
  • and the ongoing governance and audit capability of your AI applications.

Shape your AI compliance proactively now, before reporting obligations become a burden.

Aristotelis Zervos is Editorial Director at 2B Advice, a lawyer and journalist with profound expertise in data protection, GDPR, IT compliance and AI governance. He regularly publishes in-depth articles on AI regulation, GDPR compliance and risk management. You can find out more about him on his author profile page.
