On December 18, 2024, the European Data Protection Board (EDPB) published a comprehensive opinion on the data protection-compliant development and use of AI models. The opinion addresses the most important issues relating to the processing of personal data and provides practical guidelines for companies and organizations.
Anonymity of AI models
A central topic of the EDPB opinion is defining and ensuring the anonymity of AI models. A model's anonymity is crucial because it determines whether the data processed still qualifies as personal data and is therefore subject to the requirements of the GDPR. The EDPB emphasizes that a model can only be considered anonymous if identifying the data subjects is, for all practical purposes, technically impossible and personal data cannot be extracted from the model through targeted queries or reconstruction.
To ensure anonymity, the Board recommends specific procedures, most importantly k-anonymity, differential privacy and pseudonymization. K-anonymity generalizes individual records so that each one is indistinguishable from at least k−1 others in the dataset and can no longer be unambiguously linked to a person. Differential privacy goes a step further: it ensures that adding or removing a single record does not significantly change the model's output, thereby minimizing the risk of re-identification. Pseudonymization, by contrast, replaces identifying attributes with pseudonyms; it does not offer complete protection against re-identification and can be reversed under certain conditions.
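The three techniques can be made more concrete with a short sketch. The following Python snippet is a minimal illustration only, not a compliance tool: the dataset, column names, key and epsilon value are hypothetical, and real deployments would rely on vetted privacy libraries rather than hand-rolled functions.

```python
# Minimal, illustrative sketches of the three techniques named by the EDPB.
# All data, column names and parameters here are hypothetical examples.
import hashlib
import hmac
import math
import random
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """True if every combination of quasi-identifier values occurs in at
    least k records -- the defining property of k-anonymity."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Add Laplace noise with scale sensitivity/epsilon, the classic
    mechanism used to make a numeric query differentially private."""
    scale = sensitivity / epsilon
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    # Inverse-transform sampling from the Laplace distribution.
    return true_value - scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def pseudonymize(value, key):
    """Replace an identifier with a keyed-hash pseudonym. Whoever holds the
    key (or a lookup table) can re-link pseudonyms, which is why
    pseudonymized data still counts as personal data under the GDPR."""
    return hmac.new(key, value.encode(), hashlib.sha256).hexdigest()[:16]

# Hypothetical records: age bracket and ZIP prefix act as quasi-identifiers.
records = [
    {"age_bracket": "30-39", "zip_prefix": "531", "diagnosis": "A"},
    {"age_bracket": "30-39", "zip_prefix": "531", "diagnosis": "B"},
    {"age_bracket": "40-49", "zip_prefix": "532", "diagnosis": "A"},
    {"age_bracket": "40-49", "zip_prefix": "532", "diagnosis": "C"},
]
print(is_k_anonymous(records, ["age_bracket", "zip_prefix"], k=2))  # True
print(laplace_mechanism(true_value=4, sensitivity=1, epsilon=1.0))  # noisy count
print(pseudonymize("jane.doe@example.com", key=b"demo-key"))
```

Note how the sketch reflects the EDPB's distinction: the first two techniques can contribute to anonymity, while the keyed pseudonym remains reversible for anyone holding the key.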
Another important aspect is the implementation of data protection impact assessments in accordance with Art. 35 GDPR, especially if there is a high risk to the rights and freedoms of data subjects. Companies should not only take technical but also organizational measures to ensure that their AI models are trained and used in compliance with data protection regulations. This includes, for example, regularly checking the anonymity of the data and implementing control mechanisms to prevent unintentional re-identification.
Legitimate interest: Three-stage test
A key component of the EDPB opinion is the legal classification of legitimate interest as a possible legal basis for processing personal data in the context of AI models. Under Art. 6 para. 1 lit. f GDPR, processing may be permitted if the controller or a third party has a legitimate interest that outweighs the interests or fundamental rights and freedoms of the data subject. To carry out this balancing with legal certainty, the EDPB proposes a three-stage test that requires careful examination and documentation.
First, the controller must clearly define the purpose of the processing and justify why processing personal data is necessary to pursue this interest. A legitimate interest may lie, for example, in developing innovative AI technologies or improving services through data-driven analyses. The interest must not be vague or speculative; it must rest on a factual, demonstrable basis.
The second step is to check whether the processing is actually necessary to achieve the stated purpose. The controller must analyze whether the same purpose could be achieved by alternative means that are less intrusive for the data subjects. If so, the processing is disproportionate and cannot be based on legitimate interest.
The third and decisive step is to weigh the controller's interests against the fundamental rights and freedoms of the data subjects. The reasonable expectations of the data subjects must be taken into account in particular: if a data subject would not reasonably expect their data to be processed for a specific purpose, this is a strong indication that the processing is unlawful. Likewise, additional safeguards that minimize the risk to data subjects must be considered, for example transparent communication about the data processing or additional technical security measures.
Consequences of unlawful data processing
Another concern of the EDPB is the handling of unlawfully processed personal data in AI models. The opinion clarifies that subsequent legitimization of unlawful data processing is not possible. Data controllers are obliged to take appropriate measures to minimize the impact of unlawful processing and to comply with data protection requirements.
These measures primarily include completely deleting the unlawfully collected data from the AI model and the associated databases. If deletion is not technically feasible, for example because the model has already been trained on the data in question and the data can no longer be extracted, the EDPB recommends alternative solutions, such as retraining the model on a new, lawfully collected dataset. Where this is not practicable either, it may be necessary to restrict the use of the model concerned.
In addition, the EDPB emphasizes that companies are obliged to document existing data breaches and take appropriate internal measures to prevent future unlawful data processing. These include the regular performance of data protection impact assessments in accordance with Art. 35 GDPR and the establishment of mechanisms for the ongoing monitoring of data processing operations.
Reading tip: The EU AI Act (KI-VO) - it will apply to companies from February 2025
Practical consequences for companies
The EDPB statement provides practical advice for companies that develop or use AI technologies. The following recommendations can be derived from this:
The EDPB clarifies that the anonymity of AI models does not depend solely on the intention of the controller, but must be ensured through appropriate technical measures and regular review processes. Companies that use AI technologies must ensure that they take appropriate data protection measures and continuously review their models for potential data protection risks.
The application of the three-stage test to assess the existence of a legitimate interest is not just a formal requirement. It requires a substantive examination of the impact of data processing on the data subjects. Companies should therefore implement transparency measures (in accordance with Art. 12 GDPR) and document in detail the reasons why their legitimate interest prevails. This can be done through data protection impact assessments or a detailed internal risk analysis.
In cases of doubt, coordination with the data protection authorities is recommended in order to avoid legal uncertainties and ensure GDPR compliance.
Conclusion: The EDPB holds companies accountable for AI
The EDPB opinion provides important guidelines for the data protection-compliant use of AI technologies. It makes it clear that the development and use of AI models is subject to strict requirements regarding anonymity, legal basis and data processing. Companies should therefore implement suitable measures to ensure GDPR compliance and avoid liability risks. By handling personal data transparently and responsibly, the innovative power of AI can be reconciled with the data protection rights of users.
Do you want to make your company fit for the use of AI? We'll tackle it together! Our experts will be happy to advise you. Give us a call or write to us:
Phone: +1 (954) 852-1633
Mail: info@2b-advice.com