New NIST program: Cybersecurity and privacy risks in the age of AI

With the new program, NIST wants to ensure that companies can take advantage of the benefits of AI and effectively manage the risks.

The National Institute of Standards and Technology (NIST) is a U.S. federal agency known worldwide for developing standards and guidelines in areas such as technology, science and cybersecurity. NIST plays a leading role in safely and responsibly introducing new technologies to various industries. In light of the rapid development of artificial intelligence (AI), NIST has now launched a new program that focuses on the cybersecurity and privacy aspects of AI, as well as the use of AI in cybersecurity.

The impact of AI on data protection and cybersecurity

The strength of AI lies in its ability to analyze large amounts of data from different sources, which makes it a valuable tool for innovation. However, this same ability creates new data protection risks, particularly around re-identification and data leakage during model development: the predictive capabilities of AI can reveal sensitive information about individuals and enable greater surveillance and behavioral tracking.
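To make the re-identification risk more concrete, here is a minimal, purely illustrative Python sketch (all records and names are invented, not drawn from NIST material): a dataset that has been "anonymized" by removing names but still contains quasi-identifiers such as ZIP code, birth year and gender can be linked against a public directory and traced back to individuals.

```python
# Illustrative only: all records below are invented.
# "Anonymized" health data with names removed but quasi-identifiers retained.
anonymized_records = [
    {"zip": "12045", "birth_year": 1984, "gender": "F", "diagnosis": "asthma"},
    {"zip": "12047", "birth_year": 1990, "gender": "M", "diagnosis": "diabetes"},
]

# A publicly available directory (e.g. a voter roll) sharing the same attributes.
public_directory = [
    {"name": "Person A", "zip": "12045", "birth_year": 1984, "gender": "F"},
    {"name": "Person B", "zip": "12047", "birth_year": 1990, "gender": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "gender")


def link_records(anonymized, directory):
    """Join both datasets on the quasi-identifiers to re-identify individuals."""
    matches = []
    for record in anonymized:
        key = tuple(record[q] for q in QUASI_IDENTIFIERS)
        for person in directory:
            if tuple(person[q] for q in QUASI_IDENTIFIERS) == key:
                matches.append((person["name"], record["diagnosis"]))
    return matches


if __name__ == "__main__":
    # Each unique combination of quasi-identifiers points to exactly one person here,
    # so the "anonymized" diagnoses can be attributed to named individuals.
    for name, diagnosis in link_records(anonymized_records, public_directory):
        print(f"{name}: {diagnosis}")
```

AI can amplify this kind of linkage at scale, since a model may infer the linking attributes rather than relying on an exact match with an external directory.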

At the same time, AI can also be used to strengthen data protection, for example through personal privacy assistants that help users manage their privacy settings on various online platforms.

AI likewise both strengthens cybersecurity and confronts it with new challenges. AI-powered tools can improve threat detection and response capabilities, but they also increase the complexity of dealing with false positives. In addition, AI itself can become a new attack vector, making it necessary to protect AI systems against both traditional and novel threats. For organizations, this means re-evaluating their data management practices and cybersecurity strategies and adapting them to the specific risks posed by AI.
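As a rough illustration of the false-positive trade-off described above, the following sketch uses invented anomaly scores and thresholds (assumptions for demonstration, not from NIST): lowering the alert threshold catches more real threats but also produces more false alarms that analysts must triage.

```python
# Synthetic (score, is_actual_threat) pairs a hypothetical detection model might output.
events = [
    (0.95, True), (0.88, True), (0.72, False), (0.65, True),
    (0.60, False), (0.45, False), (0.30, False), (0.10, False),
]


def evaluate(threshold):
    """Count detected threats, missed threats and false positives at a given threshold."""
    detected = sum(1 for score, is_threat in events if is_threat and score >= threshold)
    missed = sum(1 for score, is_threat in events if is_threat and score < threshold)
    false_positives = sum(1 for score, is_threat in events if not is_threat and score >= threshold)
    return detected, missed, false_positives


if __name__ == "__main__":
    # A lower threshold finds more real threats but raises more false alarms.
    for threshold in (0.9, 0.7, 0.5):
        detected, missed, false_positives = evaluate(threshold)
        print(f"threshold={threshold}: detected={detected}, "
              f"missed={missed}, false_positives={false_positives}")
```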

A holistic approach to AI risk management

NIST has already developed the AI Risk Management Framework (AI RMF) to manage the benefits and risks that AI poses for individuals, organizations and society. It covers a broad spectrum of risks, from safety concerns to a lack of transparency and accountability.

As AI increasingly impacts cybersecurity and data privacy at the enterprise level, NIST is now ramping up its efforts to ensure a comprehensive approach to AI-related risks.

The most important initiatives on which the new program builds include:

  • Secure software development practices for generative AI (NIST SP 800-218A)
  • Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations (NIST AI 100-2)
  • Guidelines for evaluating differential privacy guarantees (NIST SP 800-226); a minimal illustration of the underlying idea follows this list
  • Integration of AI security into the NICE Workforce Framework for Cybersecurity
  • Tools such as Dioptra, a test platform for evaluating machine learning algorithms, and the PETs Testbed, which examines privacy-enhancing technologies.
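Since the list refers to NIST SP 800-226, here is a minimal sketch of the Laplace mechanism, the basic building block behind differential privacy. The epsilon values, the counting query and the data are invented for illustration and are not taken from the NIST guidelines themselves.

```python
import random


def laplace_noise(scale):
    """Draw Laplace(0, scale) noise as the difference of two exponential samples."""
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)


def dp_count(values, predicate, epsilon):
    """Differentially private count: true count plus Laplace(1/epsilon) noise.

    A counting query has sensitivity 1 (adding or removing one person changes
    the count by at most 1), so noise with scale 1/epsilon yields
    epsilon-differential privacy.
    """
    true_count = sum(1 for value in values if predicate(value))
    return true_count + laplace_noise(1.0 / epsilon)


if __name__ == "__main__":
    ages = [23, 35, 41, 29, 52, 47, 33, 61]   # invented data; true count of 40+ is 4
    for epsilon in (0.1, 1.0):
        noisy = dp_count(ages, lambda age: age >= 40, epsilon)
        print(f"epsilon={epsilon}: noisy count = {noisy:.2f}")
```

Smaller epsilon values add more noise and therefore give stronger privacy at the cost of accuracy, which is the kind of trade-off such evaluation guidelines are meant to help assess.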
 

Focus of the new NIST program

The newly launched program will work with a broad range of stakeholders from industry, government and academia. The aim is to ensure that efforts to secure AI are comprehensive and coordinated at a national and international level. Coordination with other NIST programs and U.S. agencies will help develop a coherent approach to addressing the cybersecurity and privacy challenges associated with AI.

The focal points of the program are:

  • Addressing the cybersecurity and data protection risks posed by the use of AI, including securing AI systems, minimizing data leakage and securing machine learning infrastructure.
  • Developing strategies to defend against AI-enabled attacks.
  • Supporting organizations in the use of AI to improve cyber defense and data protection.

Leveraging AI benefits and managing risks

The development of AI brings both exciting opportunities and critical risks to cybersecurity and data privacy. With this new program, NIST aims to ensure that organizations can reap the benefits of AI while effectively managing the associated risks. Building on a solid foundation of existing research, guidance and tools, and in collaboration with diverse stakeholders, NIST aims to provide leadership in the safe and responsible use of AI.

As AI continues to shape the world, addressing the associated cybersecurity and data protection challenges is crucial to ensure its long-term success and societal benefit.

Source: Managing Cybersecurity and Privacy Risks in the Age of Artificial Intelligence: Launching a New Program at NIST
