The French CNIL's action plan for the regulation of AI systems



In light of recent developments in the field of artificial intelligence (AI), and in particular generative AI tools such as ChatGPT, the Commission Nationale de l'Informatique et des Libertés (CNIL) has published an action plan for deploying AI systems that respect individuals' privacy. Generative AI is an emerging area of AI that enables the generation of text, images and other content based on user instructions. In this article, we look at the details of this action plan and discuss its importance for privacy protection.

The role of the CNIL

The CNIL has already carried out extensive work in recent years to prevent and address the challenges associated with AI. In 2023, the CNIL will continue its actions to regulate augmented cameras and extend its work to generative AIs, large language models and their derived applications such as chatbots. The CNIL's action plan focuses on four main areas:

  • Understanding how AI systems work and their impact on individuals
  • Promoting and regulating the development of privacy-friendly AI
  • Supporting and collaborating with innovative players in the AI ecosystem in France and Europe
  • Testing and controlling AI systems to protect individuals

These measures also serve to prepare for the implementation of the European AI Regulation currently under discussion.

The protection of personal data as a key challenge

As AI advances, the challenges in the area of data protection and the protection of individual freedoms grow with it. The CNIL has been examining the issues raised by this technology since 2017, beginning with its report on the ethical challenges of algorithms and artificial intelligence.

Generative AI has made great progress in recent months, especially in text and conversation. Built on large language models such as GPT-3, BLOOM or Megatron NLG, derived chatbots such as ChatGPT or Bard can generate text that comes very close to what humans produce. There have also been significant developments in image and speech generation. Although these models and technologies are already in use across various industries, their functionality, possibilities and limitations, as well as the associated legal, ethical and technical challenges, remain the subject of intense debate.

The CNIL has published its action plan on the regulation of artificial intelligence to emphasize the importance of protecting personal data in the development and use of these tools, with a particular focus on generative AI.

What are generative AIs?

Generative AI systems are able to create text, images and other content (music, videos, speech, etc.) based on instructions from human users. They generate new content from the data used for their training, and thanks to their extensive training datasets they can achieve results that already come very close to human productions. However, users must specify their queries clearly in order to achieve the desired results, and a specific expertise known as prompt engineering is developing around the formulation of such queries.
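The idea behind prompt engineering can be sketched in a few lines: instead of a vague one-sentence request, the user spells out the task, the audience and explicit constraints. The `build_prompt` helper below and its fields are purely illustrative assumptions, not part of any specific AI product or the CNIL's guidance.

```python
# Illustrative sketch of prompt engineering: contrast a vague request
# with a structured one. build_prompt is a hypothetical helper.

def build_prompt(task: str, audience: str, constraints: list[str]) -> str:
    """Assemble a structured prompt from a task, an audience and constraints."""
    lines = [f"Task: {task}", f"Audience: {audience}", "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    return "\n".join(lines)

# A vague prompt leaves the model to guess scope, tone and length:
vague = "Write something about data protection."

# A structured prompt makes the desired result explicit:
specific = build_prompt(
    task="Summarize the GDPR duties of a data controller",
    audience="non-lawyer employees",
    constraints=["at most 150 words", "plain language", "no legal citations"],
)
print(specific)
```

The structured version gives the model the same information a human editor would want before writing: what to produce, for whom, and within which limits.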

The CNIL's action plan in four steps

In recent years, the CNIL has carried out extensive work on the regulation of AI, taking into account different types of AI systems and their use cases. Its action plan focuses on four main objectives:

  • Functionality and impact of AI on the privacy and rights of individuals
  • Creation of a legal framework for the use of AI systems that guarantees data protection
  • Support and collaboration with AI innovators in France and Europe
  • Review and control of AI systems to ensure the rights and protection of individuals

"What I like about the action plan is that the initial aim is to develop an understanding of how AI systems work, and also whether and how they affect individual persons," comments Marcus Belke, CEO of the 2B Advice Group, a group of companies that has been providing data protection solutions for 20 years. "What is missing, however, is an agenda item on the opportunities that AI offers for data protection," Belke continues.

The CNIL will strengthen its control measures and, in particular, examine the use of generative AIs to ensure that companies that develop, train or use AI systems have taken appropriate measures to protect personal data. Its aim is to create clear and protective rules for the handling of personal data in AI systems.

Through these comprehensive measures, the CNIL aims to contribute to the development of privacy-friendly AI systems while protecting the privacy and rights of European citizens.

With its action plan, the CNIL also hopes to support data protection officers (DPOs) in dealing with the challenges associated with AI. Clear guidelines and measures for ensuring data protection in AI systems help DPOs in companies and organizations prepare for the impact of AI technologies. The action plan thus provides a basis for building a privacy-friendly framework for the use of AI, and is a valuable tool for DPOs seeking to implement effective data protection measures and protect the rights and freedoms of data subjects.
