Marcus Belke
CEO of 2B Advice GmbH, driving innovation in privacy compliance and risk management and leading the development of Ailance, the next-generation compliance platform.
Artificial intelligence is quietly finding its way into software through regular version updates. For data protection and compliance experts, this means that something fundamental is changing without the established release processes ever taking effect. SMEs and large corporations alike are uneasy: what new data flows and risks does an AI update entail? Do the original governance approvals for the software still apply? These questions mark a blind spot in many organizations, because AI often comes in through the back door. Here is what companies should look out for with AI updates.
Governance blind spot: AI updates without approval
The introduction of new software usually involves extensive checks: data protection impact assessments, IT security audits, approvals by committees. But what happens when software that has been in use for years suddenly gains new AI functions via an update? The original checks do not cover these subsequent changes. Purpose and data flows can shift without anyone realizing that the original release no longer matches the current range of functions. This creates a governance blind spot: the system runs productively and is considered "approved" internally, while the new AI components remain completely out of sight of those responsible.
This blind spot is dangerous. An AI function that was never approved suddenly changes the nature of the application. Risk profiles shift, but because there is no new project, there is no impetus for a new review. Changes of purpose can go unnoticed, and with them potential violations of data protection or compliance rules that were never consciously weighed up. Without transparency and updated documentation, AI remains literally invisible in running systems.
What new AI features actually change
New AI features in existing tools often sound harmless: an assistance mode, automatic analyses, smart recommendations. In practice, however, they bring about very specific changes:
- Automated analyses instead of manual processes: Suddenly, the software interprets data on its own. A CRM system could, for example, automatically analyze customer sentiment or make "smart" predictions where previously only data was recorded. Decisions are thus partially automated, with all the advantages and disadvantages that entails.
- New data flows into the cloud: AI functions often only work with the help of external computing power or models. The result: data that previously remained internal now migrates to the provider's servers or the cloud for processing. A current example is Microsoft's Outlook app, which, after an update, sends all emails to central Microsoft servers for AI analysis. Sensitive content leaves the user's control unnoticed so that the AI can work its "magic". In many systems, such cloud uploads happen in the background without users or admins noticing immediately. Black-box effects are practically inevitable.
- Changed purpose of data processing: New AI functions often use existing data for a different purpose than originally intended. A tool that was primarily used to store documents, for example, could now also run AI-based content analyses or sentiment evaluations. This changes the purpose of the data processing, which is highly relevant under data protection law (keyword: change of purpose).
- Lack of transparency for users: These features are often active by default, or at least offered prominently, without clear communication about where data flows or how the AI works. Users see the added value ("finally, summaries of long emails") but have little insight into what happens to their data in the background. Complex opt-out settings overwhelm many users, and the default settings encourage maximum data sharing.
New AI functions change the rules of the game. Local applications become distributed, cloud-based systems. Data that previously stayed in-house suddenly leaves the secure perimeter. And static tools become learning systems that harbor completely different risks than the original software.
Why traditional governance processes fail with AI updates
In companies, traditional governance and compliance processes are usually project-oriented: they come into play when new software is procured or a major upgrade is set up as a project. An update, however, is not formally a new project. It is either delivered by the manufacturer as an auto-update or installed as a routine patch by the IT department. No concept paper is written for version 3.5.1, and no steering committee convenes over a new menu button.
As a result, AI changes delivered via updates often fall through the cracks. Procurement is not involved, since nothing new is purchased, and IT governance bodies such as change advisory boards treat them as technical patches with no strategic significance. Data protection and compliance teams often only find out when something has already gone wrong. The existing approval processes are blind to incremental changes.
Another aspect: even if the IT department reads the update notes, technicians do not always recognize the compliance implications. A changelog announcing "AI-powered insights" may trigger technical delight in IT. But who translates this into data protection risks? There is often no clear assignment of who should sound the alarm when new functions arrive. This creates a gap between technical maintenance and organizational governance.
In addition, software providers are increasingly relying on SaaS models in which updates are installed continuously. Companies often receive new features automatically, whether they want them or not. If governance processes are not agile enough, they lag behind the changes. Outdated review cycles, such as an annual review, do not capture this dynamic. The World Economic Forum, for example, recently showed that the greatest operational risks of AI arise not when it is first implemented but later, when systems change or interact with others. Rigid governance cycles can hardly capture these shifts.
Data protection and AI governance risks
Unnoticed AI updates harbor tangible risks in terms of data protection and compliance:
- Change of purpose and lack of legal basis: When personal data is suddenly used for a new AI purpose, the question of the legal basis arises. The original consent or contractual agreement may not cover the new use: the service was purchased for X, but now also does Y with the data. This may constitute a change of purpose that requires a compatibility assessment under Art. 6(4) GDPR. Without new legitimization, the processing stands on thin ice.
- Automated decisions: New AI functions can move into territory covered by Art. 22 GDPR (automated individual decision-making). An update could, for example, introduce automated scoring that influences decisions about individuals (such as automatic applicant pre-selection in HR software). Such automated influences require special care, transparency and, where necessary, options to object. None of this may have been an issue when the software was originally introduced, but it certainly is now.
- Data outflow and cloud processing: AI features harbor the risk of unintentional data export. Personal data can end up in third countries (keyword: US cloud), which requires additional GDPR safeguards (e.g. transfer impact assessments, standard contractual clauses). If nobody knows about the transfer, these requirements are of course not met. Confidential data (trade secrets, confidential customer data) could end up on external servers, a nightmare for data protection officers.
- Bias and misanalysis: AI functions harbor new content-related risks, such as bias or incorrect results. What if the automated analysis shows discriminatory tendencies or makes gross errors? Initially, the software may have been "just a tool", but now it makes preliminary decisions, and the company is responsible for them. Without a reassessment of the risks, you run the risk of unknowingly violating equal-treatment principles or due diligence obligations.
AI regulation and the consequences
The AI Regulation (EU AI Act) brings new obligations into play. It classifies AI systems according to risk and requires strict measures for high-risk AI, such as risk management, documentation and human oversight. Whether a system is classified as "high risk" depends on its intended use.
An existing tool could suddenly slip into a higher risk class as a result of an AI update. Take a personnel management tool that, after an update, pre-sorts CVs using AI: this falls under the use of AI in employment decisions and would actually require compliance measures under the AI Regulation, but no one has noticed yet.
The AI Regulation also provides for severe penalties in the event of violations: fines of up to 35 million euros or seven percent of global annual turnover. These figures illustrate how critical unreviewed AI features can become.
Reading tip: Getting to grips with AI regulation with Ailance AI Governance
Shadow AI: unclear responsibility during operation
Why do such AI updates go unnoticed at all? One core problem is the lack of clear responsibility during ongoing operations. Once software has been introduced and released, often no one feels explicitly responsible for fundamental reassessments.
Although there is typically an application owner or process owner for the system, their focus is usually on functionality and business benefit, less on compliance. The IT department keeps the system running and applies updates, but does not see itself as a data protection watchdog. Data protection officers and compliance teams, in turn, usually focus on new projects and major changes because their resources are limited. So it is no wonder that nobody has a seemingly small feature update on their radar.
The result is a kind of responsibility vacuum: an AI assistant goes live, but who should have assessed it? The specialist department? IT? Data protection? Everyone unconsciously assumed that someone else was keeping an eye on it. No one is formally tasked with checking changes for risks after implementation; there is no chain of responsibility for operations. While clear owners are named for new purchases, there is often no similarly clear rule about who must sound the alarm for feature X in version Y.
This is exactly what supervisory authorities such as BaFin have long demanded for the financial sector: responsibilities must be clearly assigned and AI risks managed on an ongoing basis. However, many companies have not yet established internal roles such as an "AI officer per system". Without such a governance owner, AI quickly becomes ownerless in operations, and its risks remain unmanaged.
Necessary change of perspective: governance must include AI updates
In view of these developments, companies need a change of perspective with regard to their governance. It is no longer enough to apply governance only to major projects or acquisitions. The focus must also be on the ongoing use and further development of tools that have already been introduced.
What does that mean in concrete terms? "Governance by design" must not end on the day production starts. Rather, there must be mechanisms that continuously, or at least regularly, check whether a system is still in the green zone. Updates, regardless of whether they are small patches or major version jumps, should trigger a defined process: for example, a quick check to determine whether new functions are relevant to data protection or security. If so, they must be fed into the existing compliance processes (e.g. a DPIA addendum or an update of the documentation).
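To illustrate, here is a minimal sketch of what such a quick check could look like in code; the criteria fields and follow-up actions are assumptions chosen for illustration, not a prescribed standard:

```python
from dataclasses import dataclass

# Hypothetical quick-check record for an incoming software update.
# Field names and follow-up actions are illustrative, not a standard schema.
@dataclass
class UpdateQuickCheck:
    system: str
    version: str
    adds_ai_features: bool = False
    new_external_data_flows: bool = False
    changes_processing_purpose: bool = False

    def follow_up_actions(self) -> list[str]:
        """Map quick-check findings onto existing compliance processes."""
        actions = []
        if self.adds_ai_features:
            actions.append("reassess AI Act risk classification")
        if self.new_external_data_flows:
            actions.append("check transfer safeguards (TIA, SCCs)")
        if self.changes_processing_purpose:
            actions.append("write DPIA addendum and update documentation")
        return actions or ["log the check; no further action needed"]

# Example: a routine patch that quietly adds a cloud-based AI assistant.
check = UpdateQuickCheck("CRM", "3.5.1", adds_ai_features=True,
                         new_external_data_flows=True)
print(check.follow_up_actions())
```

The point of such a structure is less the code itself than the discipline it enforces: every update, however small, produces a documented decision about whether the existing approvals still hold.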
Companies need to manage their tool landscape proactively. This includes keeping an eye on the software providers' roadmaps: Are AI features planned? Are there beta programs that should be evaluated? This makes it possible to assess early which changes are imminent. Ideally, governance teams remain in dialog with the manufacturers and obtain information about planned AI functions at an early stage.
Awareness must also be raised internally: business departments and IT need to understand that AI changes should not simply be waved through as nice extra functions. Instead, it should be clear that every major new function is an occasion to pause for a moment and ask: "Does this create new risks or obligations?"
This shift towards dynamic, continuous governance certainly requires new processes and tools. However, it is necessary in order to keep pace with the rapid development of the technology. Agile methods are not just for software development; we need them in AI governance as well: away from one-off audits and towards continuous monitoring and adaptation. This is the only way to mitigate the greatest risks.
Specific recommendations for AI updates
How can the risks described above be managed in practice? We conclude with some specific recommendations that every company - whether medium-sized or large - should implement:
- Define responsibilities per system: Appoint an owner for each important IT system who is responsible not only for the technology but also explicitly for governance aspects. This person or committee keeps an eye on changes, initiates risk assessments and acts as the link between the business department, IT and compliance. Clear roles prevent a responsibility vacuum.
- Introduce a process for monitoring updates: Establish a process that regularly checks for updates, e.g. a quarterly review of the release notes of important software or a subscription to manufacturer news. It is crucial to briefly evaluate every upcoming update: relevant for data protection/governance, yes or no? Suspicious keywords such as AI, machine learning, cloud service or analytics should automatically trigger a notification to the governance team (a minimal sketch of such a keyword check follows this list).
- Define criteria for reassessment: Define company-wide when an update requires a reassessment. For example: Does the update process personal data in a new way? Are there data transfers to new recipients? Is there a new purpose for the processing? Are automated decisions being introduced? As soon as one of these criteria is met, the data protection officer must be informed and, if necessary, the data protection impact assessment updated. This list of criteria should be widely known and easy to apply.
- Feed new functions into the data protection, security and governance processes: Make sure that no feature goes live without first undergoing a compliance check. In practice, this could mean that IT notifies the data protection team of new feature toggles or modules before they are activated, or that the specialist department may only use a new AI function once the compliance teams have given their OK. This can be implemented organizationally or technically, e.g. by deactivating default settings (see the second sketch after this list). What matters is that data protection and security are automatically at the table when new features are introduced.
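The keyword check from the second recommendation can be automated in a few lines. The following is a minimal sketch; the keyword list, the system name and the notification hook are assumptions for illustration and would need to be wired into your own ticketing or mail system:

```python
import re

# Illustrative trigger keywords; extend to match your own tool landscape.
TRIGGER_KEYWORDS = [
    "ai", "artificial intelligence", "machine learning",
    "cloud service", "analytics", "copilot", "assistant",
]

def flag_release_notes(notes: str) -> list[str]:
    """Return the trigger keywords found in a release-notes text."""
    found = []
    for kw in TRIGGER_KEYWORDS:
        # Word-boundary match, so "ai" does not match inside "email".
        if re.search(rf"\b{re.escape(kw)}\b", notes, re.IGNORECASE):
            found.append(kw)
    return found

def notify_governance_team(system: str, version: str, hits: list[str]) -> None:
    """Placeholder: connect this to your ticketing or mail system."""
    print(f"[governance] Review {system} {version}: found {', '.join(hits)}")

notes = "Version 4.2 adds AI-powered insights and a new analytics dashboard."
hits = flag_release_notes(notes)
if hits:
    notify_governance_team("HR Suite", "4.2", hits)
```

A simple scan like this will produce false positives, but that is acceptable: the goal is that no "AI-powered" changelog entry ever slips past the governance team unread.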
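The last recommendation, that no feature goes live without a compliance check, can also be enforced technically. Here is a second minimal sketch, assuming a simple in-house feature-flag registry; in practice, commercial products expose such switches through admin consoles or policy settings:

```python
# Hypothetical feature-flag registry: new AI features stay deactivated
# until the compliance team has explicitly signed them off.
FEATURE_FLAGS = {
    "ai_summaries": {"enabled": False, "compliance_approved": False},
    "smart_scoring": {"enabled": False, "compliance_approved": False},
}

def enable_feature(name: str) -> bool:
    """Activate a feature only after compliance sign-off."""
    flag = FEATURE_FLAGS[name]
    if not flag["compliance_approved"]:
        print(f"Blocked: '{name}' has no compliance approval yet.")
        return False
    flag["enabled"] = True
    return True

# The business department requests the new AI summaries...
enable_feature("ai_summaries")        # blocked: no sign-off yet
# ...and may use them once data protection has given its OK.
FEATURE_FLAGS["ai_summaries"]["compliance_approved"] = True
enable_feature("ai_summaries")        # now enabled
```

The design choice here is deliberate: the default is off, so forgetting a review fails safe instead of silently shipping an unassessed AI function.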
Automate the AI update process with Ailance AI Governance
Consider using a governance platform such as Ailance AI Governance to manage all of the above measures effectively. Such tools offer, for example, model cards, automated workflows for risk assessments and approvals, and integration into existing data protection processes.
Reading tip: That's why model cards are so important for documentation
Ailance AI Governance makes it possible to register every AI processing activity together with its responsible persons, and to automatically trigger data protection checks as soon as personal data is involved. Workflows enforce that no approval is given without complete information, and reminders ensure that regular re-audits take place. Such a platform creates transparency and eliminates the blind spot by making updates, risks and evidence centrally visible and controllable.
Ensure that AI is useful without losing control over it. Because in the end, AI should create value in the company and not represent an uncontrolled risk.
Marcus Belke is CEO of 2B Advice as well as a lawyer and IT expert for data protection and digital compliance. He writes regularly about AI governance, GDPR compliance and risk management. You can find out more about him on his author profile page.




