Marcus Belke
CEO of 2B Advice GmbH, driving innovation in privacy compliance and risk management and leading the development of Ailance, the next-generation compliance platform.
Artificial intelligence has long been part of everyday business life. Its applications range from automation and decision support to new business models. However, while adoption is growing rapidly, the development of structures for responsible AI and its governance is lagging behind. Several recent studies show clearly how companies benefit from AI, yet at the same time lack trust in it because governance, compliance and security are missing.
AI as an accelerator and risk
The vast majority of companies already use AI, but only a small fraction have mature governance for responsible AI. According to a McKinsey study from January 2025, although almost all companies are investing in AI, only 1 % describe their AI implementation as "mature", meaning that AI is integrated into work processes and therefore has a measurably positive impact on economic success.
The "Trustmarque AI Governance Report" from July 2025 shows similar results: 93 % of the companies surveyed use AI in at least some form, but only 7 % have established a fully embedded AI governance framework. Trustmarque accordingly describes the discrepancy as "a clear divide" between widespread AI use and a lack of governance.
This governance gap is also reflected in the lack of integration into development processes. According to Trustmarque, only 8 % of companies have integrated AI governance into their software development cycle. Traditional processes and tools have often not been adapted to AI-specific risks. For example, less than a third of companies use bias testing or model explainability testing. In many cases, there is also a lack of infrastructure and monitoring: only 4 % of companies consider their IT environment to be ready for AI at scale.
Source: Trustmarque AI Governance Report (published on July 21, 2025)
Lack of guidelines for responsible AI
The governance deficits start with the basics. A survey by OneTrust/OCEG found that 62 % of the companies surveyed do not have a documented governance plan and 58 % do not know which AI systems are in use. It is therefore hardly surprising that 70 % of respondents have little confidence in their risk strategies.
Similarly sobering: according to a report by NTT Data, 72 % of the managers surveyed admit that they have not formulated a company-wide AI policy for responsible use. At the same time, a majority of managers admit to knowledge gaps in dealing with AI: 67 % say their employees lack the necessary skills to work effectively with AI.
These findings point to significant shortcomings in governance and training that need to be addressed in order to use AI safely and successfully.
Responsible AI: Fragmented responsibilities
The consequence of these gaps is fragmented responsibility. In many companies, AI governance exists only in silos of individual departments. As Trustmarque found out, only 9 % of companies have achieved alignment between IT leadership and the governance team, while in 19 % there is no clearly designated person responsible for AI governance.
In most cases, oversight of AI remains at departmental level instead of being strategically anchored across the organization. Only 20 % of organizations have established a cross-functional AI governance body. This siloed approach makes consistent, effective governance difficult and carries the risk that important perspectives, for example from law, ethics or human resources, are not sufficiently taken into account.
Management task: modernizing AI governance
Many organizations are now realizing that traditional governance methods cannot keep up with the pace of AI development. According to OneTrust's "The 2025 AI-Ready Governance Report", those responsible spend 37 % more time managing AI risks than in the previous year, an indication of how much strain AI is placing on existing compliance and risk management processes.
Source: OneTrust "The 2025 AI-Ready Governance Report" (published on September 9, 2025)
Accordingly, experts are calling for a change of direction at the highest level. The NTT Data Report describes leadership as the "missing link" for responsible AI. Over 80 % of managers believe that leadership, governance and workforce development are currently unable to keep pace with the progress of AI. A culture- and management-driven change is therefore needed.
For company management, this means making AI governance a top priority. The experts in the NTT Data Report advise anchoring responsible AI from the outset ("by design") and establishing cross-divisional governance structures. This includes proactively assessing AI risks and implementing suitable controls, not just shortly before go-live but throughout the entire AI lifecycle. Managers must also ensure that employees have the skills needed to work safely and effectively with AI technologies. Clear responsibilities, guidelines and training can create a culture of trustworthy AI in which innovation and control go hand in hand.
Conclusion: Governance as a success factor in the age of AI
The figures paint a clear picture: AI is already omnipresent in companies, but organizational safeguards are not keeping pace. Many companies are operating in a dangerous gray area: although they benefit from AI, they are putting their reputation and compliance at risk for lack of guardrails. To exploit the full potential of AI without falling into crises of trust or regulatory traps, companies must act now and modernize their governance:
- Develop guidelines: Establish clear policies for the responsible use of AI.
- Clarify responsibilities: Create governance structures that work across divisions.
- Build competencies: Train employees in the safe use of AI.
- Manage risks: Keep an inventory of all AI systems and continuously monitor risks.
- Take the lead: Anchor governance as a strategic topic in top management.
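As a concrete illustration of the "manage risks" step, the inventory of AI systems and continuous risk monitoring can be sketched as a minimal data structure. This is an illustrative sketch only; the field names, risk categories and 90-day review cycle are assumptions, not a standard schema or part of any of the reports cited above:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AISystem:
    """One entry in a company-wide AI system inventory (illustrative fields)."""
    name: str
    owner: str          # clearly designated person or team responsible
    purpose: str
    risk_level: str     # e.g. "low", "limited", "high" (assumed categories)
    last_review: date   # date of the most recent risk assessment

def overdue_reviews(inventory, max_age_days=90, today=None):
    """Return systems whose last risk review is older than max_age_days."""
    today = today or date.today()
    cutoff = today - timedelta(days=max_age_days)
    return [s for s in inventory if s.last_review < cutoff]

# Hypothetical inventory entries for demonstration
inventory = [
    AISystem("support-chatbot", "cx-team", "customer support", "limited", date(2025, 1, 10)),
    AISystem("credit-scoring", "risk-team", "loan decisions", "high", date(2025, 8, 1)),
]

flagged = overdue_reviews(inventory, max_age_days=90, today=date(2025, 9, 1))
print([s.name for s in flagged])  # → ['support-chatbot']
```

Even a simple register like this answers the two questions most companies reportedly cannot: which AI systems are in use, and who is responsible for each.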
Companies that do their homework here will have a competitive advantage. They can use AI to create trustworthy value, while others may be slowed down by mishaps and loss of trust. The message is therefore: invest in governance now to shape the AI revolution sustainably and responsibly.
Your next step: Ailance AI governance
Tap the full potential of artificial intelligence without governance gaps. With Ailance AI Governance, you can create clear structures, documented processes and transparent responsibilities. This allows you to manage AI initiatives effectively, reduce risks and meet regulatory requirements.
Talk to our experts and find out how Ailance can help your company use AI responsibly and profitably.
Marcus Belke is CEO of 2B Advice and a lawyer and IT expert for data protection and digital compliance. He writes regularly about AI governance, GDPR compliance and risk management. You can find out more about him on his author profile page.





