The AI Regulation (AI Act) takes a risk-based approach to promote innovation while protecting fundamental rights. A central part of the AI Regulation is the prohibition of certain AI practices that are considered incompatible with the fundamental values of the Union. The European Commission has now published guidelines that specify the application and enforcement of the prohibitions under Article 5 of the AI Regulation. Here are the most important points at a glance.
Background to the prohibitions in Article 5 of the AI Regulation
The prohibitions in the AI Regulation are a central element of the regulation of artificial intelligence in the European Union and are based on the protection of fundamental human rights and the preservation of democratic principles and the rule of law. They are intended to ensure that certain AI technologies are not used in a way that jeopardizes fundamental EU values such as human dignity, non-discrimination, transparency and privacy.
The increasing spread of AI technologies brings with it considerable challenges, particularly with regard to control over automated decision-making processes, the potential manipulation of human behavior and uncontrolled access to personal data. The ban on particularly problematic AI applications therefore serves to avert the serious risks and dangers that these technologies can pose.
In particular, the bans are intended to prevent applications that use deliberate manipulation or deceptive mechanisms to control the behavior of individuals or groups without them being aware of the underlying processes. They are also intended to curb the misuse of biometric data and to rule out the use of systems for social control and discrimination.
The aim of these prohibitions is therefore to safeguard the autonomy and freedom of choice of the individual, to ensure fairness and transparency in AI-supported processes and to avoid social and economic inequalities that could arise as a result of non-transparent or discriminatory AI systems.
The AI practices listed in Article 5 of the Regulation are therefore not only problematic on a technical or economic level, but are also unacceptable from an ethical and social point of view. The clear regulation of these technologies is intended to ensure that AI applications are developed and used for the benefit of society without compromising fundamental rights and freedoms.
The AI Regulation distinguishes between several categories of prohibited AI applications:
Manipulative and misleading AI systems (Art. 5 para. 1 lit. a AI Regulation)
This provision prohibits AI systems that use subliminal, manipulative or deceptive techniques to influence the behavior of individuals. This includes, among other things:
- Subliminal messages: AI systems that place visual or auditory stimuli outside the conscious field of perception in order to influence unconscious behavior or decisions. These techniques can be implemented using ultra-fast image flashes or inaudible acoustic signals.
- Psychological or cognitive manipulation: AI systems that specifically exploit psychological weaknesses or cognitive biases in order to persuade users to take certain actions or make certain decisions. Examples include personalized advertising based on individual weaknesses or AI-supported user interfaces that distort decision-making processes through "dark patterns".
- Disinformation and misleading content: AI systems that deliberately generate and disseminate false or distorted information in order to influence public opinion or individual decisions. This includes deepfakes, synthetic news articles or AI-supported bots that manipulate opinion on social media platforms.
- Compulsive interaction mechanisms: AI-supported systems that deliberately entice users to stay engaged for longer by exploiting dependency mechanisms such as reward systems or "endless scroll" functions. This can be particularly problematic for children and young people, who are more susceptible to such mechanisms.
These practices are particularly problematic as they undermine the freedom of choice and cognitive autonomy of those affected. Such AI systems can have far-reaching effects on democracy, the economy and society, especially if they are used to specifically influence voting behavior or purchasing decisions.
Exploitation of vulnerability (Art. 5 para. 1 lit. b AI Regulation)
The AI Regulation prohibits the use of AI systems that specifically exploit the vulnerabilities of individuals or groups due to their age, disability or particular social or economic situation. This regulation aims to protect vulnerable people from unfair or manipulative AI applications that could undermine their freedom of choice and autonomy.
Prohibited practices include:
- Targeted influencing of children and older people: AI-supported applications that specifically target children or older people in order to entice them into certain behaviors or purchasing decisions, for example through manipulative advertising or deliberately misleading information.
- Financial exploitation of socially or economically disadvantaged groups: Systems that specifically manipulate low-income groups with economically disadvantageous offers, for example by suggesting excessive credit offers or unfair insurance conditions through algorithmic decision-making processes.
- Abuse of cognitive impairments: AI systems that are deliberately designed to exploit cognitive limitations or a lack of digital literacy in order to deceive individuals or impose unfair contractual terms on them.
- Exploitation of addictions and addictive disorders: AI-supported platforms that aim to drive people at risk of addiction further into addiction through targeted advertising or interaction mechanisms, for example through the algorithmic optimization of gambling or microtransaction systems.
These prohibitions serve to guarantee the integrity and safety of vulnerable groups and ensure the ethical use of AI technologies. AI applications must be designed in such a way that they do not take unfair advantage of the particular circumstances or weaknesses of individual groups. Implementing these requirements will ensure that technological progress is not made at the expense of those who are particularly dependent on protection and fairness.
Social scoring (Art. 5 para. 1 lit. c AI Regulation)
A key innovation in the AI Regulation is the comprehensive ban on social scoring systems used by public or private actors. Social scoring refers to the algorithmic evaluation of individuals based on their behavior or personal characteristics in order to derive social or economic consequences. These systems are particularly problematic as they encroach deeply into the privacy of citizens and can lead to unjustified discrimination.
The prohibition applies in particular if (a simplified decision sketch follows the list below):
- social behavior or personal characteristics of individuals or groups are assessed,
- the assessment leads to negative consequences for individuals in unrelated areas of society, such as exclusion from educational opportunities or financial services due to a poor social score, or
- disproportionate sanctions or privileges result from the assessments, such as making access to loans or housing more difficult on the basis of non-objective, discriminatory criteria.
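To make the cumulative logic of these criteria tangible, here is a minimal Python sketch of the screening test described above. All class and field names are hypothetical illustrations, not terms from the AI Regulation or any official compliance tool, and the sketch is no substitute for a legal assessment.

```python
from dataclasses import dataclass

@dataclass
class ScoringSystemProfile:
    """Hypothetical description of an AI system under review (illustrative only)."""
    evaluates_social_behavior: bool          # scores people on behavior or personal traits
    consequences_in_unrelated_context: bool  # detrimental treatment in an unrelated area of life
    consequences_disproportionate: bool      # sanctions or privileges out of proportion to the behavior

def is_prohibited_social_scoring(profile: ScoringSystemProfile) -> bool:
    """Rough screening test mirroring Art. 5 para. 1 lit. c: an evaluation of
    social behavior or personal characteristics is prohibited when it leads to
    detrimental treatment in unrelated contexts or to disproportionate
    consequences. A simplification, not legal advice."""
    if not profile.evaluates_social_behavior:
        return False
    return (profile.consequences_in_unrelated_context
            or profile.consequences_disproportionate)

# Hypothetical example: a credit-access score fed by unrelated social-media behavior
credit_gate = ScoringSystemProfile(True, True, False)
print(is_prohibited_social_scoring(credit_gate))  # True
```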
Social scoring is primarily known from authoritarian states, where it is used to control and monitor citizens. The AI Regulation makes it clear that such systems are incompatible with the fundamental values of the European Union, in particular respect for human dignity, equal treatment and privacy.
Particular attention is being paid to the enforcement of this ban by national market surveillance authorities and data protection institutions. Companies and authorities that develop or use AI systems in Europe must ensure that their systems do not contain any unauthorized social scoring mechanisms. Violations of this ban can be punished with heavy fines to ensure that the misuse of AI for social control is prevented in the EU.
AI-assisted prediction of criminal offenses (Art. 5 para. 1 lit. d AI Regulation)
The ban extends to AI systems that predict the risk or probability of a criminal act on the basis of personal characteristics, social background or statistical probabilities. The AI Regulation makes it clear that such systems pose significant risks to fundamental rights, in particular the presumption of innocence, the protection of privacy and the prohibition of discrimination. The use of such technologies can lead to systemic discrimination against certain population groups, especially if distorted or inadequate data is used to make decisions.
Problematic aspects of such AI systems:
- Discrimination and bias: AI models are often based on historical data that reflect existing prejudices or structural inequalities. As a result, there is a risk that certain groups are disproportionately classified as suspicious.
- Lack of transparency and traceability: Decision-making by AI systems is often opaque and difficult to understand, which makes effective control and legal review difficult.
- Presumption of innocence: Automated assessment of the individual risk of a criminal offence can lead to people being stigmatized or monitored without an actual criminal basis.
There are exceptions for AI systems that use objective and verifiable data to support human assessments. For example, systems may be used to identify already known criminal patterns in data analyses, provided they merely support decision-making by law enforcement authorities and do not automatically classify people as potential offenders. A simplified sketch of this distinction follows.
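The distinction between prohibited profiling-based prediction and permitted decision support can be pictured as a simple gate. The following Python sketch is a hypothetical illustration; the parameter names are assumptions, the real legal test is more nuanced, and the binary result stands in for a case-by-case assessment.

```python
def crime_prediction_permitted(
    based_solely_on_profiling: bool,        # risk derived from traits or social background alone
    uses_objective_verifiable_facts: bool,  # e.g. evidence directly linked to a specific offense
    supports_human_assessment: bool,        # the final judgment stays with a human investigator
) -> bool:
    """Rough reading of Art. 5 para. 1 lit. d: prediction systems are banned
    when they assess criminal risk from personality traits or profiling, while
    tools that analyze objective, verifiable data to support a human assessment
    fall outside the ban. Illustrative only."""
    if based_solely_on_profiling:
        return False
    return uses_objective_verifiable_facts and supports_human_assessment

# Pattern matching against case evidence, assessed by investigators: permitted
print(crime_prediction_permitted(False, True, True))   # True
# A score built purely from social background: prohibited profiling
print(crime_prediction_permitted(True, False, False))  # False
```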
Overall, the use of such technologies in the EU remains highly sensitive and requires strict compliance with the principles of the rule of law and regular monitoring by independent supervisory authorities.
Untargeted collection of biometric data (Art. 5 para. 1 lit. e AI Regulation)
The untargeted collection of biometric data poses a significant threat to the privacy and informational self-determination of the persons concerned. The AI Regulation therefore expressly prohibits the mass, uncontrolled collection of facial images, fingerprints or other biometric characteristics from public sources, in particular through techniques such as scraping.
Such practices are often used to create large-scale biometric databases that can then be used for a variety of purposes, such as facial recognition or identity verification without the explicit consent of the individuals concerned. This raises significant data protection and ethical concerns, in particular because:
- Lack of consent: Data subjects have no control over the collection and processing of their biometric data.
- Risks of mass surveillance: Such databases can be used by state and private actors to carry out far-reaching surveillance measures.
- Risk of abuse: Biometric data could be used for law enforcement or commercial applications without the data subjects' knowledge or consent.
- Risk of error and discrimination: AI-based facial recognition systems have been shown to have higher error rates for certain ethnic groups and can lead to unjustified suspicion or discrimination.
The regulation therefore prohibits the creation or expansion of such databases through untargeted scraping from publicly accessible sources such as social networks or surveillance cameras. Companies and organizations that develop or use biometric technologies must adhere to strict data protection guidelines and transparency requirements in order to prevent the misuse of sensitive personal data.
Biometric data collection is only permitted under strict conditions, for example if the data subject has given their express, informed consent or if legal regulations allow this in accordance with fundamental rights. National data protection authorities are required to consistently pursue violations of these provisions and impose severe penalties.
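The narrow conditions just described can be pictured as a simple gate: untargeted scraping is always off limits, and any other collection needs consent or a legal basis. This Python sketch uses hypothetical parameter names and omits the many additional factors a real assessment would involve.

```python
def biometric_collection_permitted(
    untargeted_scraping: bool,       # mass, indiscriminate scraping from public sources
    express_informed_consent: bool,  # explicit, informed consent of the data subject
    explicit_legal_basis: bool,      # a legal provision compatible with fundamental rights
) -> bool:
    """Simplified gate reflecting the section above: untargeted scraping is
    prohibited outright; otherwise collection requires consent or a legal
    basis. Illustrative only."""
    if untargeted_scraping:
        return False
    return express_informed_consent or explicit_legal_basis

print(biometric_collection_permitted(True, True, True))    # False: scraping is banned regardless
print(biometric_collection_permitted(False, True, False))  # True: consent-based collection
```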
Emotion recognition at the workplace and in educational institutions (Art. 5 para. 1 lit. f AI Regulation)
The ban concerns the use of AI systems that analyze the emotions of employees or students, as this can have a significant impact on privacy, personal protection and individual autonomy. Emotion recognition technologies use facial recognition, speech analysis or other biometric signals to determine a person's emotional state. The EU classifies these practices as particularly critical, as they have potentially discriminatory effects and can lead to unlawful surveillance measures.
Why is emotion recognition problematic?
- Invasion of privacy: Emotion recognition systems collect and analyze biometric data without the persons concerned necessarily having given informed consent.
- Misinterpretation and lack of transparency: The accuracy of these systems is controversial, as emotions can be interpreted differently depending on subjective and cultural context. This can lead to wrong decisions with serious consequences, particularly in the workplace or in education.
- Discrimination and stigmatization: People with certain facial expressions or communication styles could be disadvantaged or miscategorized, which can lead to unfair assessments or even exclusion from educational or employment opportunities.
The ban provides for limited exceptions, in particular where emotion recognition serves the following purposes (a simplified gate is sketched after this list):
- Medical applications: In diagnostics or therapy, for example to support people with neurological disorders or communication difficulties.
- Safety-related measures: In narrow application areas, such as the detection of acute threats or the identification of people in emergency situations (e.g. in aviation or critical infrastructure).
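The structure of the ban and its exceptions can be summarized in a small gate function. In this hypothetical Python sketch, the context strings and parameter names are assumptions made for illustration; whether a concrete deployment qualifies for an exception always requires a legal assessment.

```python
def emotion_recognition_permitted(
    context: str,           # e.g. "workplace", "education", "other"
    medical_purpose: bool,  # diagnostics or therapy as described above
    safety_purpose: bool,   # narrow safety use, e.g. detecting acute threats
) -> bool:
    """Simplified gate for Art. 5 para. 1 lit. f: emotion recognition in
    workplaces and educational institutions is banned unless it serves a
    medical or safety purpose. Illustrative only."""
    if context not in ("workplace", "education"):
        return True  # lit. f targets these two contexts; other provisions may still apply
    return medical_purpose or safety_purpose

print(emotion_recognition_permitted("workplace", False, False))  # False: banned
print(emotion_recognition_permitted("education", True, False))   # True: medical exception
```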
Organizations and companies that violate the ban can be subject to severe fines. Regulatory authorities will be required to monitor the use of such technologies and ensure that they do not lead to unauthorized surveillance or unwarranted invasions of privacy.
Reading tip: AI competence and company obligations under Article 4 of the AI Regulation
Biometric categorization of sensitive characteristics (Art. 5 para. 1 lit. g AI Regulation)
The biometric categorization of sensitive characteristics by AI systems is a particularly sensitive area, as it directly interferes with the fundamental rights of the persons concerned. The AI Regulation therefore prohibits any use of biometric data to infer protected personal characteristics.
Sensitive characteristics include in particular (a simple membership check is sketched after this list):
- Racial or ethnic origin: AI systems must not be used to classify people according to racial or ethnic criteria, as this can lead to discrimination and unlawful unequal treatment.
- Political convictions: The analysis of biometric data to determine political attitudes can lead to the persecution and suppression of dissenting opinions and is therefore strictly prohibited.
- Religious or ideological views: AI systems that use biometric data to draw conclusions about a person's religious affiliation or ideological beliefs are prohibited in order to protect freedom of religion and belief.
- Sexual orientation: Deriving sexual orientation from biometric data is a serious invasion of privacy and can lead to discrimination and stigmatization.
- Trade union membership: The analysis of biometric characteristics to determine trade union membership is prohibited in order to guarantee freedom of association and the protection of workers' rights.
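For illustration, the prohibited inference targets can be modeled as a simple blocklist. The labels below are shorthand for the characteristics listed above, not terms taken from the text of the Regulation.

```python
# Protected characteristics that must not be inferred from biometric data
# under Art. 5 para. 1 lit. g (labels are illustrative shorthand):
PROHIBITED_INFERENCE_TARGETS = {
    "racial_or_ethnic_origin",
    "political_convictions",
    "religious_or_ideological_views",
    "sexual_orientation",
    "trade_union_membership",
}

def categorization_prohibited(inferred_attribute: str, uses_biometric_data: bool) -> bool:
    """A biometric categorization system is prohibited when it uses biometric
    data to infer one of the protected characteristics. Simplified check."""
    return uses_biometric_data and inferred_attribute in PROHIBITED_INFERENCE_TARGETS

print(categorization_prohibited("political_convictions", True))  # True
print(categorization_prohibited("age_group", True))              # False: not on the lit. g list
```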
Why is biometric categorization problematic?
- Lack of scientific validity: Many methods for deriving sensitive characteristics from biometric data are based on unproven or pseudoscientific assumptions and are therefore extremely prone to error.
- Risks of discrimination: Such systems can reinforce existing prejudices and social inequalities by disadvantaging or stigmatizing certain groups.
- Ethical and legal concerns: The use of these technologies violates fundamental data protection principles and cannot be reconciled with European fundamental rights.
Companies that violate this regulation can be fined heavily in order to ensure the protection of fundamental rights and individual freedom.
Real-time biometric remote identification for law enforcement (Art. 5 para. 1 lit. h AI Regulation)
The ban on the use of AI-based facial recognition in public spaces by law enforcement authorities is one of the strictest provisions of the AI Regulation. Facial recognition systems have the potential to enable comprehensive surveillance measures and therefore pose a significant threat to citizens' informational self-determination.
Problematic aspects of biometric real-time remote identification:
- Mass surveillance and data protection: The widespread use of facial recognition systems could lead to constant surveillance in public spaces, creating a climate of control and fear.
- Risk of error and discrimination: Biometric identification systems have been shown to be error-prone, with higher error rates for certain ethnic groups, which can lead to unjustified suspicions or even arrests.
- Potential for abuse: The uncontrolled use of such technologies could allow law enforcement agencies or other state actors to track the movement profiles of individuals without a sufficient legal basis.
- Rule of law concerns: The identification of individuals in real time could restrict the right to anonymity in public spaces and in certain cases violate the principle of proportionality.
Despite the fundamental prohibition, the AI Regulation provides for a few narrow exceptions in which the use of the technology is permitted under strict conditions (a simplified gate is sketched after this list):
- Targeted search for missing persons: If a person has been reported missing and there is an acute danger to life, facial recognition can be used under strict legal requirements.
- Combating terrorism: In cases where there is an imminent terrorist threat, the technology can be used to identify individuals who may be involved in attacks.
- Serious offenses: Narrowly limited use of facial recognition may be permitted in certain situations where serious crimes are to be investigated and no other less intrusive measures are available.
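Taken together with the authorization requirement described below, the exception grounds can be sketched as a gate. The enum values and parameters in this Python sketch are hypothetical shorthand for the grounds listed above; the actual conditions are considerably more detailed.

```python
from enum import Enum, auto

class RbiPurpose(Enum):
    """Hypothetical enumeration of the exception grounds under Art. 5 para. 1 lit. h."""
    MISSING_PERSON_SEARCH = auto()
    IMMINENT_TERRORIST_THREAT = auto()
    SERIOUS_OFFENSE_INVESTIGATION = auto()
    OTHER = auto()

def realtime_rbi_permitted(purpose: RbiPurpose, prior_authorization: bool,
                           less_intrusive_means_available: bool) -> bool:
    """Simplified gate: real-time remote biometric identification in public
    spaces for law enforcement is banned unless one of the narrow exception
    grounds applies, prior authorization has been obtained, and no less
    intrusive measure would suffice. Illustrative only."""
    if purpose is RbiPurpose.OTHER:
        return False
    return prior_authorization and not less_intrusive_means_available

# Hypothetical: authorized search for a missing person with no milder alternative
print(realtime_rbi_permitted(RbiPurpose.MISSING_PERSON_SEARCH, True, False))  # True
```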
To prevent misuse, all applications of real-time remote biometric identification must be approved and regularly reviewed by independent supervisory authorities. Unauthorized or disproportionate use can be punished with heavy fines or legal action. This ensures that technological progress does not come at the expense of fundamental rights and that the principles of the rule of law are respected.
Sanctions for violation of the AI Regulation
The bans are enforced by various institutions at national and European level. In particular, the national market surveillance authorities are responsible for monitoring compliance with the regulations. They have the power to investigate violations, take measures to prevent unauthorized AI practices and impose sanctions. In parallel, the European Data Protection Supervisor plays a key role, especially when it comes to breaches in the area of personal data processing.
Violations of the AI Regulation's prohibitions can result in significant financial penalties. The regulation provides for fines of up to 35 million euros or 7% of a company's global annual turnover, whichever is higher. These sanctions are intended to ensure that companies and organizations comply with the legal requirements and do not develop or use unlawful or harmful AI systems. In addition to financial penalties, operating restrictions or sales bans can also be imposed on non-compliant AI systems.
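The "whichever is higher" rule boils down to a simple maximum. The following Python sketch shows the calculation for a hypothetical company; the turnover figure is invented for illustration.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for violating the Art. 5 prohibitions:
    EUR 35 million or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Hypothetical company with EUR 2 billion worldwide annual turnover:
print(f"{max_fine_eur(2_000_000_000):,.0f} EUR")  # 140,000,000 EUR (7% exceeds EUR 35 million)
```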
In addition, companies are obliged to submit regular compliance reports. In serious cases, criminal penalties may also be provided for in individual member states if negligent or intentional violations of fundamental rights through the use of prohibited AI technologies are proven.
First guidelines on the AI Regulation set strict standards
The EU guidelines for implementing the AI Regulation create clear rules for dealing with artificial intelligence and set strict standards for safeguarding fundamental rights. While protection against the misuse of AI is strengthened, the challenge is not to inhibit innovation. The future case law of the ECJ will contribute significantly to the interpretation of the regulations and create clarity as to the extent to which specific AI applications meet the requirements of the AI Regulation.
Source: Guidelines on prohibited artificial intelligence practices under the AI Regulation
Do you want to make your company fit for the AI Regulation? We'll tackle it together! Our experts will be happy to advise you. Give us a call or write to us:
Phone: +1 (954) 852-1633
Mail: info@2b-advice.com