
Data Protection Day 2025: New challenges posed by AI


Data Protection Day, which is celebrated annually on 28 January, provides an important opportunity to draw attention to the importance of data protection in an increasingly digitalized world. This year, the focus is particularly on the use of artificial intelligence (AI), as this technology presents both opportunities and significant risks for the protection of personal data.

AI systems: risks for data protection

AI systems have developed rapidly in recent years and are now ubiquitous in many areas of public and private life. They analyze data, make decisions and automate processes.

However, their dependence on large amounts of data poses immense challenges for data protection:

  1. Massive data processing:
    AI systems require large amounts of personal data in order to function. There is a risk that sensitive data will be processed without a sufficient legal basis or passed on without authorization.

  2. Non-transparent decision-making:
    Many AI systems operate as so-called "black boxes" whose decision-making processes are difficult to understand, even for experts. This makes it difficult for data subjects to exercise their rights to transparency and information under the GDPR.

  3. Discrimination and bias:
    One of the greatest dangers of AI is algorithmic bias. Inequalities and prejudices in the underlying data sets can lead to AI systems making discriminatory decisions - for example in application procedures or when granting loans.

  4. Cybersecurity risks:
    AI systems can themselves become the target of cyber attacks. Manipulated models or stolen data can cause considerable damage.

At the same time, AI systems not only offer challenges, but also considerable potential for data protection:

  1. Automated detection of data protection violations:
    AI systems can analyze large amounts of data in real time and detect potential data breaches or unauthorized access faster than conventional methods.

  2. Improved data security:
    By using AI-supported security mechanisms such as anomaly detection or predictive analytics, threats can be proactively detected and averted.

  3. Support in complying with the GDPR:
    AI can support companies in complying with complex data protection requirements, for example by automating data access or data erasure processes.

  4. Efficient data minimization:
    AI systems can help to process only the minimum amount of data required by identifying and sorting out irrelevant or redundant information.
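The automated anomaly detection described above can be sketched in miniature. A real system would use a trained model over rich log features; this simplified Python sketch merely flags statistical outliers in hypothetical hourly record-access counts to show the principle:

```python
from statistics import mean, stdev

def detect_anomalies(access_counts, threshold=3.0):
    """Flag indices whose access count deviates more than `threshold`
    standard deviations from the mean -- a minimal stand-in for
    AI-supported anomaly detection on access logs."""
    if len(access_counts) < 2:
        return []
    mu = mean(access_counts)
    sigma = stdev(access_counts)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(access_counts)
            if abs(c - mu) / sigma > threshold]

# Hypothetical hourly counts of records accessed by a single account;
# the spike could indicate a bulk export worth reviewing.
counts = [12, 15, 11, 14, 13, 12, 480, 14, 13, 12, 11, 15]
print(detect_anomalies(counts))  # -> [6]
```

In practice, such a flag would feed a human review queue rather than trigger any action automatically.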

AI principle in Art. 22 para. 1 GDPR: no fully automated final decisions

The GDPR anticipated a key principle for the use of AI: under Art. 22 para. 1 GDPR, decisions that produce legal effects for a person may not be based solely on automated processing; the final decision must rest with a human. Exceptions are permitted only in certain cases, such as with the consent of the data subject.

If an AI application makes proposals with legal effect for the data subject, the procedure must be designed so that the human decision-maker has genuine discretion and the decision does not rest primarily on the AI's proposal. Insufficient staffing, time pressure, or a lack of transparency about how the AI-supported preliminary work reaches its results must not lead to those results being accepted without review. Merely formal involvement of a human in the decision-making process is not sufficient.
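For illustration only, this human-in-the-loop requirement can be sketched as a record type that refuses to finalize a decision without a named reviewer and an independent justification (all names and fields are hypothetical, not a legal implementation):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ApplicationReview:
    """The AI output is stored only as a proposal; the legally
    effective decision requires a named human reviewer and a
    justification of their own (cf. Art. 22 para. 1 GDPR)."""
    applicant_id: str
    ai_recommendation: str                 # e.g. "invite" or "reject"
    human_decision: Optional[str] = None
    reviewer: Optional[str] = None
    justification: Optional[str] = None

    def finalize(self, decision: str, reviewer: str, justification: str):
        # Merely formal involvement is not enough: an empty
        # justification means the AI proposal was rubber-stamped.
        if not justification.strip():
            raise ValueError("a human justification is required")
        self.human_decision = decision
        self.reviewer = reviewer
        self.justification = justification
        return self
```

The point of the design is that the AI recommendation and the human decision live in separate fields, so the two can diverge and the divergence is auditable.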

Example from the "Artificial intelligence and data protection" guidance issued by the Data Protection Conference: An AI application evaluates all incoming applications for an advertised position and independently sends out invitations to interviews. This constitutes a violation of Art. 22 para. 1 GDPR.

For the data protection officer (DPO), the scope of action could be significantly expanded, especially with regard to Art. 22 GDPR: In addition to their role in terms of the GDPR, they could also take on the function of an AI officer. The DPO's experience in handling sensitive and biometric data and their in-depth knowledge of data protection regulations and practice speak in favor of this.

AI Act: the next stage begins in February 2025

As the rules of the GDPR are not sufficient for the regulation of AI, the European Union has created a new legal framework for artificial intelligence with the AI Act. This law supplements the GDPR and is intended to ensure that AI systems in Europe are both secure and compliant with data protection regulations. While the GDPR primarily regulates the processing of personal data, the AI Act sets out specific requirements for the use of AI, such as risk classifications and transparency obligations. The aim is to create a legal symbiosis in which data protection and innovation work together harmoniously. Among other things, the AI Act provides for:

  • A risk classification of AI systems (minimal, limited, high and unacceptable).
  • Strict requirements for high-risk systems, such as those used in medicine or law enforcement.
  • Transparency obligations that ensure that users are informed about the use of AI.
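The four risk tiers and the obligations attached to them can be summarized in a small sketch (the obligation texts are simplified paraphrases of the AI Act, not legal wording):

```python
from enum import Enum

class AIRiskClass(Enum):
    """The AI Act's four risk tiers, from lowest to highest."""
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Simplified paraphrase of the obligations per tier.
OBLIGATIONS = {
    AIRiskClass.MINIMAL: "no specific obligations",
    AIRiskClass.LIMITED: "transparency: users must be informed about AI use",
    AIRiskClass.HIGH: "strict requirements, e.g. in medicine or law enforcement",
    AIRiskClass.UNACCEPTABLE: "prohibited in the EU (from February 2, 2025)",
}

print(OBLIGATIONS[AIRiskClass.UNACCEPTABLE])
```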


The next stage of the AI Act takes effect on February 2, 2025. From then on, AI systems that pose an unacceptable risk are banned in the EU. However, there will still be exceptions: law enforcement authorities will still be allowed to use facial recognition on video surveillance images.

Reading tip: AI Act comes into force - here's what happens now

Every second company wants to use AI for data protection

According to a survey by the digital association Bitkom (July/August 2024), almost half of companies (48%) want to use AI for data protection.

This includes chatbots that explain data protection issues to employees, AI-based detection of data protection violations, and the automated anonymization or pseudonymization of data. Five percent of companies already use such AI applications, 24 percent are planning to, and a further 19 percent are still discussing it.
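The automated pseudonymization mentioned in the survey can be illustrated with a minimal sketch: direct identifiers are replaced with a keyed hash, so records remain linkable while the originals cannot be recovered without the key. The field names and key handling here are hypothetical; a real deployment needs proper key management:

```python
import hashlib
import hmac

def pseudonymize(value: str, key: bytes) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).
    Deterministic, so the same input maps to the same pseudonym,
    but the original cannot be recovered without the key."""
    return hmac.new(key, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

KEY = b"demo-key-store-in-a-vault"  # hypothetical; never hard-code in production

record = {"name": "Jane Doe", "email": "jane@example.com", "plan": "pro"}
safe = {k: pseudonymize(v, KEY) if k in ("name", "email") else v
        for k, v in record.items()}
print(safe["plan"])  # -> pro  (non-identifying fields stay readable)
```

Because the mapping is deterministic per key, analyses such as duplicate detection still work on the pseudonymized data; rotating the key severs the link entirely.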

At the same time, 68 percent of companies believe that the use of AI poses completely new challenges for data protection. 53 percent say that data protection creates legal certainty. 52 percent believe that data protection hinders the use of AI.

Conclusion: AI systems offer many possibilities, but there are also data protection issues. It's not just about technology, but also about ethics and the law. And not just in the EU. Only with clear rules and close monitoring of compliance can the potential of AI be exploited without the protection of personal data falling by the wayside.

With Ailance, you can use AI in your company to solve data protection and compliance issues quickly and easily. Ailance also supports your company in the automatic anonymization and pseudonymization of data. Contact us, and we will be happy to advise you!

Phone: +1 (954) 852-1633
Mail: info@2b-advice.com
