Marcus Belke
CEO of 2B Advice GmbH, driving innovation in privacy compliance and risk management and leading the development of Ailance, the next-generation compliance platform.
Against the backdrop of new regulations, it is clear that without structured AI governance, companies will not be able to fully demonstrate to supervisory authorities or internally that their AI applications are being operated responsibly and in compliance with the law. In the following, we present a practical 5-step plan that companies can use to bring AI governance to life. Each step highlights typical stumbling blocks and provides concrete recommendations for action, underpinned by best practices. The result is a holistic approach that enables AI projects to be managed in a controlled, transparent and compliant manner without stifling innovation.
Why AI governance is now indispensable
In the last two years, hardly any other technology has spread as explosively as AI. Generative AI models, predictive analyses and automated decisions are already a reality. They are often used in an uncontrolled manner alongside official processes. Internal guidelines alone are no longer enough to maintain control. Ever since the AI Regulation came into force, it has been clear that AI governance is no longer a voluntary exercise.
Without governance, massive business risks also arise: for example, through "shadow AI", when employees use tools such as ChatGPT on their own authority and disclose confidential data in the process. Or through liability traps if generative AI is used without checking copyrights, data protection or bias. There is a risk of lawsuits and fines. Reputational damage is also an issue: AI that delivers incorrect or discriminatory results undermines the trust of customers and the public.
Added to this is the regulatory pressure: the AI Regulation provides for severe penalties of up to 35 million euros or seven percent of annual global turnover. Importantly, not only obvious high-risk applications are affected. Even seemingly banal automation can be regulated, for example if personal data is processed or decisions with a significant impact are made. National supervisory authorities are already demanding more accountability: the German BaFin, for example, requires clear responsibilities to be defined, discrimination by AI to be prevented and IT security to be demonstrated, regardless of whether a machine makes "automated" decisions. In short: AI governance is now critical to success in order to enable innovation without slipping into chaos or exposing oneself to legal risks.
Reading tip: AI governance and data protection - seamless integration of RoPA and DPIA
Regulatory requirements and standards at a glance
The AI Regulation (EU AI Act) is a comprehensive law for the regulation of AI systems in Europe. It divides AI into risk classes and imposes strict requirements on high-risk AI. Companies must therefore check at an early stage whether their AI systems fall under the AI Regulation and introduce appropriate control and governance structures. Most of the provisions become binding from August 2026, after a transitional period; some requirements, such as the ban on certain practices and the promotion of AI literacy, are already in force.
In the USA, the NIST (National Institute of Standards and Technology) has presented the AI Risk Management Framework (AI RMF), a guideline that can be used voluntarily. It helps to manage AI risks over the entire life cycle and to integrate trustworthiness criteria (such as traceability, fairness, security, etc.) into the design, development, testing and operation of AI systems from the outset. The NIST AI RMF serves as a best-practice collection to strike a balance between innovation and responsibility. Many of its principles (e.g. risk analysis, transparency obligations, continuous monitoring) are also reflected in the requirements of the AI Regulation.
Alongside legislation, standards help to establish best practices. ISO/IEC 42001 (published at the end of 2023) is the first international standard for AI management systems. It provides a structured framework for using AI systematically and responsibly. The standard addresses the specific challenges of AI and enables organizations to manage the opportunities and risks of AI through a management system. With ISO 42001, companies can prove that they are using AI in a controlled manner "by design" and thus also comply with future regulations or certifications.
Regardless of region or industry, it is becoming clear that responsible AI is becoming a competitive factor. Companies can embed compliance by design and thus protect themselves legally, technically and reputationally. Investing in AI governance not only pays off by avoiding fines, but also makes AI projects more efficient as risks are identified and addressed at an early stage.
From AI literacy to re-audit: a pragmatic implementation roadmap
But how can AI governance be implemented in practice? A proven approach is a step-by-step procedure that starts with Awareness and strategy and leads to a continuous improvement cycle. The following roadmap has proven to be helpful in companies:
1. Build AI competence and set the framework
It all starts with AI literacy, i.e. raising awareness and building basic knowledge of AI risks and opportunities within the company. Training for development teams, specialist departments and decision-makers forms the basis for using AI responsibly. At the same time, a governance framework should be defined: an AI policy or guardrails that specify which AI applications are desirable, which are prohibited and what the approval processes look like. Roles and responsibilities are clarified, e.g. who keeps an overview as an AI officer or on an AI committee. Without this cultural and organizational foundation, even the best tools will come to nothing.
2. Create an AI inventory and classify risks
Next comes transparency about where AI is actually in use. Many companies already have more AI-based tools and scripts than management is aware of. A central AI inventory helps to record all AI use cases and models, from pilot to production system. It is important that external AI services (e.g. GenAI APIs) and RPA bots are also listed. Each recorded AI application should be assigned to a risk class. This assessment, ideally based on a questionnaire, determines the further testing and approval effort. Modern platforms such as Ailance support this by automatically querying important metadata when a new AI use case is recorded and making an initial risk classification. This creates a dynamic register of all AI projects, which serves as the basis for all further governance steps.
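To make the idea concrete, here is a minimal sketch of what an inventory entry with questionnaire-based risk classification could look like. The field names and classification rules are illustrative assumptions, not Ailance's actual schema or the AI Regulation's full classification logic:

```python
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

@dataclass
class AIUseCase:
    name: str
    owner: str                        # accountable department
    processes_personal_data: bool
    makes_significant_decisions: bool # significant impact on individuals

def classify(uc: AIUseCase) -> RiskClass:
    # Simplified questionnaire logic: automated decisions with a
    # significant impact on individuals indicate a high-risk use case.
    if uc.makes_significant_decisions:
        return RiskClass.HIGH
    if uc.processes_personal_data:
        return RiskClass.LIMITED
    return RiskClass.MINIMAL

chatbot = AIUseCase("Support chatbot", "Customer Service",
                    processes_personal_data=True,
                    makes_significant_decisions=False)
print(classify(chatbot).value)  # limited
```

In a real inventory, each entry would carry far more metadata (vendor, model version, data categories), but even this skeleton shows how a few questionnaire answers can drive the downstream review effort.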
3. Anchor risk and compliance checks in workflows
Graduated review processes are now defined on the basis of the risk classification. The aim is a risk-based workflow: high-risk use cases, for example, require additional approvals by compliance or IT security, while low-risk projects go through a simplified process. A central element is the data protection impact assessment (DPIA): as soon as a use case processes personal data, a DPIA process should be started automatically. Ailance has already integrated this logic: the tool automatically triggers a DPIA when a use case with personal data is registered in the inventory. For high-risk applications, further check steps can be enforced before approval is granted. It is crucial that all these approval and review processes are digitalized and workflow-supported. A governance tool ensures that no step is skipped: approvals are only granted once all the necessary checks have been completed, escalations take effect if deadlines are exceeded, and audit trails record every decision with a timestamp. In this way, governance is "built into the process".
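The gating logic described above can be sketched in a few lines. This is a simplified illustration under assumed field names, not how any particular governance tool implements it:

```python
def required_checks(uc: dict) -> list[str]:
    """Return the review steps a use case must pass before approval."""
    checks = ["policy_review"]
    if uc["processes_personal_data"]:
        checks.append("dpia")  # DPIA triggered automatically
    if uc["risk_class"] == "high":
        checks += ["compliance_signoff", "it_security_signoff"]
    return checks

def may_approve(uc: dict, completed: list[str]) -> bool:
    # Approval is only granted once every required check is done.
    return set(required_checks(uc)) <= set(completed)

uc = {"name": "CV screening",
      "processes_personal_data": True,
      "risk_class": "high"}
print(required_checks(uc))
print(may_approve(uc, ["policy_review", "dpia"]))  # False: sign-offs missing
```

The point of encoding the workflow this way is that "no step is skipped" becomes a property of the system rather than a promise in a policy document.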
4. Create technical documentation (model cards and datasheets)
In parallel with the reviews, comprehensive documentation must be created for each AI application. Model cards have established themselves as best practice for this. A model card is essentially the profile of an AI model. It documents the purpose and use case, the training and test data used (including origin and categories), the model architecture and version, performance metrics, known limitations and bias risks, as well as the responsible persons and approvals. Particularly in view of the AI Regulation, which requires detailed technical documentation for high-risk systems, model cards are an efficient way of keeping all evidence in one place. During internal or external audits, it is possible to see at a glance how a model was developed, what data it uses and when it was last validated or bias-tested.
In addition to model cards, datasheets for datasets are also gaining in importance. They work in a similar way but focus on the data used. A datasheet records where a dataset comes from, how it was collected or processed, what quality it has, whether there are any gaps and what it may (or may not) be used for. Together, model cards and datasheets provide transparency: knowledge that was previously spread across many heads and documents is bundled centrally.
In practice, it has proven useful to first define a minimum schema for model cards and to refine it with each new project. Ailance takes an innovative approach here: the platform automatically generates a model card as soon as a new AI use case is created in the inventory and fills in many mandatory fields with existing metadata. The specialist department must add any missing information, otherwise the system will not allow the model to be finally approved. This makes documentation an integral part of the development process. Smart links ensure that the model card always remains up to date: in Ailance, for example, each model is linked to the corresponding RoPA entry (the record of processing activities) so that it is immediately clear which processes and data are associated with it.
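A minimum schema with a "no approval while fields are missing" rule, as described above, might look like this sketch. The required fields are assumptions derived from the model-card contents listed earlier, not a standardized schema:

```python
REQUIRED_FIELDS = [
    "purpose", "use_case", "training_data", "architecture",
    "version", "metrics", "limitations", "bias_risks",
    "responsible_person", "ropa_entry",
]

def missing_fields(card: dict) -> list[str]:
    """Fields still empty; final approval stays blocked while any remain."""
    return [f for f in REQUIRED_FIELDS if not card.get(f)]

card = {
    "purpose": "Rank inbound support tickets by urgency",
    "version": "1.2.0",
    "training_data": "Anonymized ticket corpus 2021-2024",
}
print(missing_fields(card))
```

Starting from such a deliberately small schema and refining it per project keeps the documentation burden proportionate while the gate ("approve only when `missing_fields` is empty") stays enforceable.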
5. Monitoring, audits and continuous improvement
Governance does not end with the productive release of an AI application; it transitions into ongoing monitoring and re-audit mode. AI models change, and external requirements also evolve. Review cycles should therefore be defined at the time of release (e.g. annual re-audits for a model). Ailance supports this with a built-in reminder system for re-audits: responsible persons automatically receive notifications when a model has reached its defined validity date and needs to be checked again. Changes to the model (e.g. new training data, model updates or changed hyperparameters) can also trigger partial re-reviews, meaning that certain changes immediately require a new check before the model can continue to be used. Modern dashboard functions give managers an overview at all times: How many AI systems are in use? How many of them have already been checked or approved? Which ones are considered high-risk? Where are reviews outstanding? Such key figures ensure transparency for management and help to identify bottlenecks at an early stage.
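The re-audit mechanism reduces to a due-date check over the inventory. Here is a minimal sketch with an assumed annual cycle and illustrative model names; a real system would also handle change-triggered reviews and escalations:

```python
from datetime import date, timedelta

def next_review_due(approved_on: date, cycle_days: int = 365) -> date:
    """Compute the re-audit deadline from the approval date."""
    return approved_on + timedelta(days=cycle_days)

def models_due_for_reaudit(models: list[dict], today: date) -> list[str]:
    """Names of models whose review deadline has passed."""
    return [m["name"] for m in models
            if next_review_due(m["approved_on"]) <= today]

models = [
    {"name": "credit-scoring-v3", "approved_on": date(2024, 1, 15)},
    {"name": "ticket-triage-v1",  "approved_on": date(2025, 6, 1)},
]
print(models_due_for_reaudit(models, date(2025, 3, 1)))  # ['credit-scoring-v3']
```

Running such a check on a schedule and notifying the responsible persons is exactly the kind of reminder loop the roadmap calls for.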
These steps make it possible to implement pragmatic AI governance without stifling innovation. It is important to start small, for example by first inventorying and documenting the most important existing AI solutions and then scaling step by step. Every phase of the AI lifecycle should be mapped and controllable: from the idea through development to operation. In this way, governance not only creates trust but ultimately also accelerates the introduction of AI, as expectations and review criteria are defined from the outset.
Structuring AI projects intelligently with Ailance
With the right approach, AI governance can be effectively integrated into day-to-day business without compromising the dynamics of AI projects. Clear processes and responsibilities often speed up development cycles, as there is less need for ad hoc clarification and "governance by design" prevents unnecessary queries. Modern solutions such as Ailance facilitate this implementation by bundling all relevant building blocks, such as the AI inventory, risk workflows, DPIA integration, model cards, audit trails and dashboards, in one platform and connecting them to existing systems.
Would you like to find out what AI governance can look like in your company? In a live demo, we would be happy to show you how you can use Ailance to structure your AI projects, manage risks and ensure compliance for your organization. Lead your AI innovations to success safely and efficiently with Ailance. Contact us and take your AI governance to the next level.
Link tip: Automated control of all AI projects with Ailance AI Governance
Marcus Belke is CEO of 2B Advice and a lawyer and IT expert for data protection and digital compliance. He writes regularly about AI governance, GDPR compliance and risk management. You can find out more about him on his author profile page.





