SB 53: New AI law in California requires developers to be transparent

With SB 53, California has enacted one of the first US laws aimed specifically at advanced AI models.

Marcus Belke

CEO of 2B Advice GmbH, driving innovation in privacy compliance and risk management and leading the development of Ailance, the next-generation compliance platform.

At the end of September, California set a milestone in the regulation of AI in the USA. Governor Gavin Newsom signed the "Transparency in Frontier Artificial Intelligence Act" (SB 53) on September 29, 2025. The law obliges the state's major AI companies to comply with new transparency and security requirements. Companies affected include OpenAI, Google, Meta, Nvidia and Anthropic. It is one of the first laws in the US to explicitly address the risks of advanced AI models.

Transparency obligations for AI companies

SB 53 applies to developers and service providers with annual revenue of more than 500 million US dollars. They must demonstrate that they have established responsible AI practices and can identify risks at an early stage. For users, this means that, for the first time, they will receive binding safety information on the models they use.

The central requirements are:

  • Disclosure of risk and security plans: Companies must submit public reports explaining how they ensure the security of their models and prevent misuse. This includes questions such as: How are systems prevented from getting out of control or from being misused to develop bioweapons?
  • Reporting of serious incidents: AI-related incidents must be reported to the California Office of Emergency Services via a central reporting procedure.
  • Whistleblower protection: Whistleblowers who report risks or violations must be effectively protected from reprisals.
  • Fines: Violations can be punished with a fine of up to one million US dollars. The Attorney General is responsible for enforcement.


The regulation is deliberately broad, as 32 of the 50 largest AI companies in the world are based in California. Governor Newsom emphasized that California, as a technology hub, must take on a pioneering role to fill the gap left by the absence of national regulation.

SB 53 brings benefits for companies that use AI

The transparency obligations create crucial sources of information, particularly for companies that deploy AI.

  • Model purpose, data sources and training methods: Companies can better understand what a model was developed for, what data it is based on and what its limitations are.
  • Security and risk measures: Disclosed security concepts make it easier to check whether data protection, ethics and security standards are being complied with.
  • Incident history and reporting channels: Documented incidents make it possible to adapt your own incident response processes.


This information is an important building block for internal AI approval procedures. Projects can be evaluated using risk scores, monitoring requirements can be defined, and approvals can be systematically documented. SB 53 therefore not only creates obligations but also increases transparency along the entire AI supply chain.
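To make the idea of risk-score-based approvals concrete, the following is a minimal sketch in Python. The criteria, weights, and approval tiers are purely illustrative assumptions for this article - they are not defined by SB 53 or any named framework, and a real assessment would use criteria from your own governance policy.

```python
from dataclasses import dataclass

@dataclass
class AIProjectAssessment:
    """Hypothetical risk assessment for one AI project (all scales illustrative)."""
    data_sensitivity: int   # 1 (public data) .. 5 (special-category data)
    autonomy: int           # 1 (human-in-the-loop) .. 5 (fully autonomous)
    incident_history: int   # 1 (no reported incidents) .. 5 (repeated serious incidents)

    def risk_score(self) -> float:
        # Weighted average on a 1-5 scale; the weights are assumptions.
        return (self.data_sensitivity * 0.40
                + self.autonomy * 0.35
                + self.incident_history * 0.25)

    def approval_tier(self) -> str:
        # Thresholds are illustrative; a real policy would define its own.
        score = self.risk_score()
        if score < 2.0:
            return "auto-approve"
        if score < 3.5:
            return "review by AI governance board"
        return "escalate to management"

# Example: a support chatbot on moderately sensitive data with a human in the loop.
chatbot = AIProjectAssessment(data_sensitivity=3, autonomy=2, incident_history=1)
print(round(chatbot.risk_score(), 2), "->", chatbot.approval_tier())
```

The point of such a sketch is not the specific numbers but the documented, repeatable procedure: the same inputs always yield the same score and the same approval path, which is what auditors and supervisory authorities will look for.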

Industry reactions and political debate on SB 53

Opinions on SB 53 are divided.

Supporters:
Anthropic co-founder Jack Clark praised the law as a balanced framework that combines security and innovation. OpenAI spokesperson Jamie Radice saw it as an important step towards harmonization with future federal laws.

Critics:
Corporations and industry associations such as the Consumer Technology Association warned of a patchwork of state laws. Meta policy head Brian Rice called for uniform federal regulation and warned of obstacles to innovation.

At the same time, senators such as Josh Hawley and Richard Blumenthal are working on a national law that provides for a federal authority for AI evaluation. There are also voices in the House of Representatives in favor of uniform standards to avoid fragmentation at state level.

International significance of AI regulation SB 53

The signal effect of SB 53 reaches far beyond California. The state is home to many of the world's leading AI labs, including OpenAI, Anthropic, Google DeepMind, Meta AI and Nvidia. The new transparency and risk requirements could therefore shape international standards.

Pressure is also growing at a global level: at the UN General Assembly, heads of state and government spoke about the opportunities and dangers of AI. While Donald Trump described AI as "one of the greatest achievements", Ukrainian President Volodymyr Zelensky warned against a "destructive arms race".

Consequences and recommendations for companies

SB 53 is a clear signal for companies to professionalize their AI risk management. The providers' new transparency reports should be used directly. The key steps are:

  • Comprehensive risk analysis: Systematically record risks - from loss of control to misuse - and evaluate them using standardized risk scores.
  • Documentation and transparency: Reports on security, data sources and performance limits facilitate audits and create trust. Experience from records of processing activities (RoPA) can be transferred.
  • Incident management: Early-warning reporting channels and incident response processes ensure responsiveness.
  • Governance and whistleblower protection: Clear responsibilities, regular audits and an open error culture reduce liability risks.
  • Technical control measures: Monitoring, bias tests, red-team analyses and "human-in-the-loop" approaches prevent undesirable developments.


Integration into the approval process: The reports provided by providers can be used for internal audits and approvals and remain traceable for management and supervisory authorities.

How Ailance supports AI governance

As a platform for data protection and risk management, Ailance is expanding its focus on AI governance. Companies benefit from:

  • Centralized risk and incident workflows for structured AI risk assessments.
  • Model documentation ("Model Cards") with information on purpose, data origin, versions, metrics and limits.
  • Dashboards and reporting functions, which provide those responsible with a quick overview of risks, measures and reporting obligations.


In addition, Ailance supports companies in integrating the new disclosure requirements directly into their own approval processes: with templates for risk analyses, automated workflows and integrated approval modules.
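The model cards mentioned above can be pictured as a small structured record. The sketch below is a hypothetical illustration of such a record in Python - the field names and structure are assumptions for this article, not Ailance's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Hypothetical minimal model card: purpose, data origin, version, metrics, limits."""
    name: str
    version: str
    purpose: str
    data_sources: list[str]
    metrics: dict[str, float]
    known_limits: list[str] = field(default_factory=list)

    def summary(self) -> str:
        # One-line overview for dashboards or approval documentation.
        limits = "; ".join(self.known_limits) or "none documented"
        return (f"{self.name} v{self.version}: {self.purpose} | "
                f"data: {', '.join(self.data_sources)} | limits: {limits}")

# Illustrative entry for a fictitious internal model.
card = ModelCard(
    name="support-classifier",
    version="1.2.0",
    purpose="Routes customer tickets by topic",
    data_sources=["internal ticket archive (anonymized)"],
    metrics={"f1_macro": 0.87},
    known_limits=["English-language tickets only"],
)
print(card.summary())
```

Keeping this information as structured data rather than free text is what makes the reporting and dashboard functions described above possible: the same record can feed an approval workflow, an audit export, and a risk overview.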

Reading tip: Manage all AI projects centrally, audit-proof and legally compliant with Ailance AI Governance

AI brings progress, but also responsibility

With SB 53, California is setting a new standard for transparency and safety in AI. The law shows that regulation and innovation are not mutually exclusive. At the same time, AI companies are now obliged to take risk management seriously.

In view of the growing number of AI laws worldwide, affected companies are well advised to establish a proactive compliance strategy early on. Structured assessment models, clear responsibilities and transparent documentation make the use of AI not only safer, but also a real competitive advantage.

Source: Press release from Governor Gavin Newsom on the signing of SB 53

Marcus Belke is CEO of 2B Advice, a lawyer, and an IT expert for data protection and digital compliance. He writes regularly about AI governance, GDPR compliance and risk management. You can find out more about him on his author profile page.
