
When GDPR Meets the EU AI Act: What the Overlap Means for Your Compliance Program

Two frameworks. Overlapping obligations. One program that has to satisfy both.

Mary Ajayi · April 25, 2026

Most organizations that are working through EU AI Act compliance already have a GDPR program. Most of them are treating the two as separate workstreams. That is a mistake that is going to create duplicated effort, documentation gaps, and compliance exposure that neither program was designed to address alone.

The EU AI Act and the GDPR overlap in ways that are not accidental. They were designed to coexist, and in several areas they create interlocking obligations that compliance programs need to address together rather than in parallel. Understanding where they overlap, where they conflict, and what that means operationally is one of the most practical things a compliance leader can do right now.

This is not a technical mapping exercise. It is a practical guide to what the overlap means for your program.

Two Laws, Two Different Logics

Before getting into the overlap, it helps to understand what each law is actually trying to do, because they are built on different foundations.

The GDPR is a fundamental rights law. Its core purpose is protecting individuals' rights with respect to their personal data. It creates obligations around lawful basis, transparency, data minimization, individual rights, and accountability. The regulator is the data protection authority. The enforcement mechanism is fines based on turnover.

The EU AI Act is a product safety regulation. Its core purpose is ensuring that AI systems placed on the EU market or used in the EU are safe, transparent, and do not pose unacceptable risks. It creates obligations around system classification, technical documentation, human oversight, and conformity assessment. The regulator is the market surveillance authority, with data protection authorities involved in certain circumstances.

They are different regulatory logics applied to overlapping territory. An AI system that processes personal data to make decisions about individuals sits squarely in the middle of both. And most high-risk AI systems process personal data to make decisions about individuals.

Where the Obligations Collide

The overlap is most acute in four areas.

Transparency. GDPR Articles 13 and 14 require that you inform individuals when their personal data is processed, including when automated decision-making is involved. EU AI Act Article 13 requires that high-risk AI systems be transparent and that deployers — the organizations using the system — receive information sufficient to interpret outputs and exercise oversight. Article 50 requires disclosure to individuals when they are interacting with an AI system.

These are different transparency obligations with different audiences, but they apply to the same system. Your compliance program needs to address all three layers: what you document for the regulator and the audit trail, what you tell the deploying organization, and what you tell the individual whose data is being processed. Many programs handle one layer and miss the others.

Automated decision-making. GDPR Article 22 gives individuals the right not to be subject to solely automated decisions that produce legal or similarly significant effects, with limited exceptions. The EU AI Act classifies AI systems that make or substantially influence decisions in sensitive domains — credit, employment, access to essential services — as high-risk systems subject to conformity requirements.

The interaction is not straightforward. A system can be compliant with GDPR Article 22 exceptions — explicit consent, contractual necessity, authorized by law — while still triggering EU AI Act high-risk obligations. And an organization that relies on GDPR Article 22 compliance as the basis for deploying automated decision-making may not have adequately assessed whether the system also meets the AI Act's requirements for documentation, testing, and oversight.

Data governance. GDPR requires data minimization — you collect and process only what is necessary for the specified purpose. EU AI Act Article 10 requires that training, validation, and testing datasets be subject to appropriate data governance practices, including examination for possible biases. These are not the same requirement, but they apply to the same datasets.

If you are using personal data to train an AI model — even an internally developed one — you need a GDPR lawful basis for that processing, purpose limitation analysis, and a data governance framework that addresses bias and fairness. Most organizations have addressed one or the other but not both in the same documented framework.

DPIAs and fundamental rights impact assessments. GDPR Article 35 requires a Data Protection Impact Assessment for processing likely to result in high risk to individuals, which includes systematic and extensive automated profiling and large-scale processing of sensitive data. EU AI Act Article 9 requires that high-risk AI systems be subject to a risk management system throughout their lifecycle, and Article 27 adds a fundamental rights impact assessment for certain deployers of high-risk systems, one the Act expressly allows to build on an existing DPIA.

Both requirements apply to high-risk AI systems that process personal data. A DPIA that does not address AI Act risk dimensions is incomplete from an AI Act standpoint. An AI Act risk assessment that does not address GDPR risk dimensions is incomplete from a GDPR standpoint. The practical answer is a unified assessment that satisfies both — but most organizations are running them separately, creating duplication and gaps simultaneously.

Where They Create Tension

The overlap creates genuine tension in a few areas that do not have clean resolutions.

Data retention vs. model performance. GDPR requires that personal data not be retained longer than necessary for its original purpose. Maintaining an AI system's performance often requires access to historical data for retraining and validation. The tension between GDPR retention limits and AI Act performance maintenance requirements is real, and it needs to be resolved in your data governance documentation with a specific legal basis and justification — not ignored.

Explainability and proprietary models. GDPR Articles 13 to 15 give individuals a right to meaningful information about the logic involved in automated decisions affecting them, and Article 22(3) adds safeguards including the right to contest those decisions. EU AI Act Article 13 requires that high-risk AI systems be interpretable by their deployers. When the system in question is a third-party model with proprietary architecture, your ability to satisfy either requirement depends on what the vendor will disclose. This is a vendor management problem as much as it is a compliance problem.

Right to erasure and training data. GDPR's right to erasure applies to personal data. For individuals whose data was used to train an AI model, erasure requests create a technical challenge that neither the GDPR nor the AI Act fully resolves. Whether you can satisfy a deletion request without retraining a model depends on the architecture. The obligation to honor the request does not wait for the technical solution to exist.

What This Means Operationally

The organizations that are handling the intersection well are not running two parallel compliance programs. They have built a single governance framework that addresses both laws at the system level.

That means starting with an AI system inventory that maps each system to its GDPR processing basis, its AI Act risk classification, and the intersection obligations that apply — transparency requirements, DPIA or risk assessment obligations, human oversight requirements, individual rights implications.
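To make that concrete, here is a minimal sketch of what one inventory record might look like, written as illustrative Python. Nothing here is a standard schema; the field names, enum values, and comments are assumptions chosen to mirror the mapping described above.

```python
from dataclasses import dataclass, field
from enum import Enum


class AIActRisk(Enum):
    """Illustrative AI Act risk tiers, simplified for this sketch."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"    # transparency obligations only
    MINIMAL = "minimal"


@dataclass
class AISystemRecord:
    """One row in the AI system inventory, covering both frameworks."""
    name: str
    gdpr_lawful_basis: str                # e.g. "contract", "legitimate interests"
    ai_act_risk: AIActRisk
    processes_personal_data: bool
    automated_decision_art22: bool        # solely automated, significant effects
    dpia_reference: str | None = None     # link to the DPIA, if one exists
    human_oversight_measures: list[str] = field(default_factory=list)
    transparency_disclosures: list[str] = field(default_factory=list)  # per audience
```

Even a spreadsheet with these columns does the job. The point is that the GDPR fields and the AI Act fields live on the same record, not in two separate registers.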

For high-risk AI systems that process personal data, it means a unified impact assessment that covers both GDPR and AI Act risk dimensions. Not two separate documents that reference each other. One assessment with a complete picture.

For vendor AI, it means procurement due diligence that asks both sets of questions. What data does the vendor process, under what terms, and with what retention policy? What is the model's architecture, and can they support GDPR explainability requirements? These are not two separate vendor questionnaires. They are one due diligence process.
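As a hypothetical sketch of what one due diligence process might look like in practice, the checklist below merges both question sets into a single structure. The wording paraphrases the questions above; none of it comes from an official questionnaire.

```python
# Hypothetical merged vendor due-diligence checklist: one process, both frameworks.
VENDOR_AI_DUE_DILIGENCE = [
    ("data handling",  "What personal data does the vendor process, under what terms?"),
    ("retention",      "What is the vendor's retention policy for data we provide?"),
    ("explainability", "Can the vendor supply meaningful information about the model's "
                       "logic, sufficient for GDPR transparency obligations?"),
    ("architecture",   "What is the model's architecture, and what documentation will "
                       "the vendor disclose?"),
    ("classification", "What is the system's AI Act risk classification, and who holds "
                       "the conformity documentation?"),
]
```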

And for transparency obligations, it means a disclosure framework that maps what you communicate to regulators, to deployers within your organization, and to individuals — with the specific legal basis and AI Act classification for each disclosure layer documented.
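A per-system disclosure map can be as simple as the sketch below. The structure and the cited provisions are illustrative assumptions, not a complete statement of either law's disclosure requirements.

```python
# Hypothetical disclosure map for one high-risk system; content is illustrative.
disclosure_map = {
    "regulator": {
        "content": "technical documentation and conformity records",
        "basis": "AI Act Art. 11 (technical documentation)",
    },
    "deployer": {
        "content": "instructions for use, intended purpose, known limitations",
        "basis": "AI Act Art. 13 (transparency to deployers)",
    },
    "individual": {
        "content": "privacy notice covering automated decision-making; "
                   "disclosure of AI interaction",
        "basis": "GDPR Arts. 13-14; AI Act Art. 50",
    },
}
```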

Where to Start

If your organization has active GDPR compliance infrastructure and is beginning EU AI Act compliance, the starting point is a mapping exercise, not a new program.

Take your existing high-risk processing activities — automated decisions, large-scale profiling, sensitive data processing — and map them to your AI Act risk classification. Most will align to high-risk categories. For each one, identify where you have a DPIA, whether that DPIA covers AI Act risk dimensions, what transparency disclosures are in place, and whether your vendor contracts address both frameworks.
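Continuing the hypothetical record sketched earlier, the mapping exercise reduces to a few checks per system. The gap conditions below are illustrative simplifications, not a complete reading of either law.

```python
def intersection_gaps(rec: AISystemRecord) -> list[str]:
    """Flag intersection obligations this record does not yet evidence."""
    gaps = []
    if rec.ai_act_risk is AIActRisk.HIGH and rec.processes_personal_data:
        if rec.dpia_reference is None:
            gaps.append("high-risk system processing personal data, no DPIA on file")
        if not rec.human_oversight_measures:
            gaps.append("no documented human oversight measures")
    if rec.automated_decision_art22 and not rec.transparency_disclosures:
        gaps.append("Article 22 decision-making without individual-facing disclosures")
    return gaps


# Assuming `inventory` is a list of AISystemRecord built as sketched above,
# the non-empty gap lists are the roadmap.
roadmap = {
    rec.name: gaps
    for rec in inventory
    if (gaps := intersection_gaps(rec))
}
```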

The gap list that emerges from that exercise is your AI governance roadmap. It is almost certainly shorter than building two separate programs from scratch, and it addresses the intersection obligations that neither program would catch on its own.

The regulatory direction is toward more coordination between data protection and AI Act enforcement, not less. The organizations that have built unified programs are better positioned for that future than the ones that have kept the workstreams separate.


If you want help mapping your existing GDPR program to EU AI Act obligations, our AI Governance Maturity Framework includes a regulatory mapping module that covers both frameworks. Or reach out at hello@revoya.ai to talk through what a unified compliance approach looks like for your organization.

Tags

GDPR · Privacy · Law and Regulation · Enforcement · Frameworks and Standards · Legal · Program Management · Government