
The EU AI Act is in force. If your organization develops, deploys, or uses AI systems in the European Union, or places them on the EU market from elsewhere, obligations are already in effect, and the most significant deadline is five months away.
Most coverage of the Act focuses on what is prohibited. That is not where most organizations need to spend their time. The more important question is: what applies to you, and what do you need to have in place before August?
The timeline, confirmed
The first wave of obligations took effect on February 2, 2025, prohibiting certain uses of AI and establishing AI literacy requirements. Since then:
- February 2, 2025 — Prohibited AI practices banned. AI literacy requirements (Article 4) took effect. Organizations using AI must ensure employees involved in AI planning, deployment, or operation receive appropriate training.
- August 2, 2025 — GPAI model obligations began. The penalty regime also took effect on this date, meaning competent authorities can now impose administrative fines for non-compliance. The Act's governance provisions, including those concerning the EU AI Office, started to apply as well.
- August 2, 2026 — The majority of rules enter into application. High-risk AI system rules under Annex III start to apply, as do the transparency rules for limited-risk systems, and enforcement begins at national and EU level.
- August 2, 2027 — Rules for high-risk AI embedded in regulated products apply. This covers AI integrated into medical devices, machinery, vehicles, and similar products.
The fines are not theoretical. Non-compliance can result in fines of up to 35 million euros or 7 percent of global annual turnover, whichever is higher, for prohibited AI practice violations; up to 15 million euros or 3 percent for other obligation failures; and up to 7.5 million euros or 1 percent for supplying incorrect or misleading information to authorities.
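The "whichever is higher" mechanic matters: for any sizable undertaking, the turnover percentage dominates the fixed cap. A back-of-the-envelope sketch, using a hypothetical 2 billion euro global turnover:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Fine ceiling: the higher of the fixed cap and the share
    of total worldwide annual turnover."""
    return max(fixed_cap_eur, turnover_eur * pct)

turnover = 2_000_000_000  # hypothetical global annual turnover in euros

print(max_fine(turnover, 35_000_000, 0.07))  # prohibited practices -> 140,000,000
print(max_fine(turnover, 15_000_000, 0.03))  # other obligations    -> 60,000,000
print(max_fine(turnover, 7_500_000, 0.01))   # misleading info      -> 20,000,000
```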
What the risk tiers actually mean
The Act classifies AI systems into four categories. Where your systems land determines what you have to do.
Unacceptable risk — banned. Real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), social scoring, and AI that exploits psychological vulnerabilities to manipulate behavior. If your systems fall here, they cannot operate.
High risk — full compliance required. This is where most enterprise compliance teams need to focus. High-risk AI covers systems used in hiring and workforce management, credit scoring, critical infrastructure, education, healthcare, and law enforcement. These systems require conformity assessments, technical documentation, human oversight mechanisms, risk management systems, and registration in an EU database before deployment.
Limited risk — transparency only. Chatbots must disclose they are AI. Deepfake content must be labeled. The obligations are narrow but enforceable.
Minimal risk — no obligations. Most general productivity AI tools fall here. The Act does not require action, though internal governance is still good practice.
The part most organizations are missing
The Act applies based on where AI is used, not where it is built. A US-based vendor selling an AI hiring tool used by your EU operations is your compliance problem — not just theirs.
Companies have different sets of responsibilities depending on whether they develop, use, or import an AI system. As a deployer of third-party AI, you are responsible for ensuring those systems meet the Act's requirements for human oversight, transparency, and record-keeping. You cannot shift that obligation onto your vendor by contract alone; you own it.
This has direct consequences for procurement. Before you renew or sign contracts for any AI tool used in high-risk contexts, you need to confirm that your vendor can provide conformity documentation, technical specifications, and evidence of ongoing monitoring. If they cannot, you are carrying the risk.
What your team should be doing right now
You have five months before August enforcement begins. That is enough time to get organized, but not enough time to start from scratch.
1. Inventory every AI system in use. You cannot assess risk you have not mapped. This includes tools deployed by individual teams outside of IT, which is where most shadow AI exposure lives. A complete inventory covers who owns each system, what decisions it informs, and whether those decisions affect EU residents; a minimal record schema is sketched after this list.
2. Classify by risk tier. Not every AI tool requires the same treatment. Work through the Annex III criteria to identify which systems qualify as high-risk; the sketch after this list includes a rough first-pass triage. The European Commission has published guidance to support this classification work, and it is worth reading before drawing conclusions.
3. Audit your vendor contracts. By August 2, 2026, conformity assessments must be complete, technical documentation finalized, and high-risk systems registered in the EU database. Your vendors need to support that process. If your current contracts do not require them to provide this documentation, renegotiate before renewal.
4. Implement AI literacy training. This is already required under Article 4, which took effect in February 2025. If you have not addressed it, you are currently non-compliant. Training does not need to be elaborate — it needs to be documented, role-appropriate, and defensible to an auditor.
5. Assign ownership. Compliance without accountability fails. Someone inside your organization needs to own AI governance — with the authority to pause deployments, require documentation, and escalate to leadership. Without that, every other step on this list stalls when it hits organizational friction.
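To make steps 1 and 2 concrete, here is a minimal sketch of what an inventory record and a first-pass triage could look like. Everything in it is hypothetical, from the AISystemRecord fields to the ANNEX_III_AREAS keyword set to the CandidateRank example, and the risk mapping is a rough screen to flag candidates for legal review, not a substitute for reading Annex III.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # Article 5 prohibited practices
    HIGH = "high"                  # Annex III use cases (or Annex I products)
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no obligations under the Act

# Illustrative, non-exhaustive keywords drawn from Annex III use-case areas.
ANNEX_III_AREAS = {
    "hiring", "workforce_management", "credit_scoring",
    "critical_infrastructure", "education", "healthcare", "law_enforcement",
}

@dataclass
class AISystemRecord:
    name: str
    owner: str                     # accountable team or person
    vendor: str | None             # None if built in-house
    use_case: str                  # e.g. "hiring", "chatbot", "code_completion"
    affects_eu_residents: bool
    decisions_informed: list[str] = field(default_factory=list)

def classify(record: AISystemRecord) -> RiskTier:
    """Naive first-pass triage that flags candidates for legal review."""
    if not record.affects_eu_residents:
        return RiskTier.MINIMAL    # likely outside territorial scope
    if record.use_case in ANNEX_III_AREAS:
        return RiskTier.HIGH
    if record.use_case in {"chatbot", "content_generation"}:
        return RiskTier.LIMITED    # disclosure / labeling duties
    return RiskTier.MINIMAL

# Example: a hypothetical third-party screening tool used by an EU hiring team.
tool = AISystemRecord(
    name="CandidateRank",
    owner="HR Operations",
    vendor="ExampleVendor Inc.",
    use_case="hiring",
    affects_eu_residents=True,
    decisions_informed=["interview shortlisting"],
)
print(tool.name, "->", classify(tool).value)  # CandidateRank -> high
```

Even a crude screen like this makes the high-risk candidates visible, which is the point: the systems it flags are the ones that need conformity documentation and vendor scrutiny first.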
A note on scope
The Act applies to organizations that place AI systems on the EU market or put them into service within the EU — regardless of where the organization is headquartered. If you serve EU customers, employ EU staff, or operate EU infrastructure using AI, the Act likely applies to you. The territorial scope is broad and intentionally so.
Revoya builds AI governance programs that satisfy regulatory requirements without slowing down your operations. If you want a structured assessment of your EU AI Act exposure before the August deadline, book a discovery call.
