August 2, 2026 is the single most consequential date in the EU AI Act enforcement calendar. On that date, obligations for high-risk AI systems become fully enforceable — and penalties reach €35 million or 7% of worldwide annual turnover for violations. With roughly 100 days remaining, most organizations using AI in regulated functions haven't completed a compliance assessment. Many don't yet know they're in scope.
This post explains who is affected, what compliance actually requires, and what to do in the time remaining.
What the August 2 Deadline Actually Covers
The EU AI Act uses a risk-tiered framework. The August deadline activates obligations for Annex III high-risk AI systems — a specific category that covers AI deployed in eight domains:
1. Biometric identification and categorization
2. Critical infrastructure management (energy, water, transport)
3. Educational or vocational training decisions
4. Employment, worker management, and access to self-employment
5. Access to and enjoyment of essential private services and benefits (including credit scoring)
6. Law enforcement
7. Migration, asylum, and border control management
8. Administration of justice and democratic processes
If your organization uses AI to assist hiring decisions, determine creditworthiness, flag fraud patterns, route customer service, score employee performance, or make eligibility determinations of any kind — and any part of that process touches EU residents — you need to assess your Annex III exposure before August 2.
The August deadline also activates Article 50 transparency obligations for all AI systems, regardless of risk tier. Chatbots must disclose their artificial nature. Deepfake content must carry machine-readable watermarks. Biometric categorization systems face disclosure mandates. These apply broadly, not just to high-risk deployments.
The Scope Question Most Companies Are Getting Wrong
Many organizations are incorrectly assuming the EU AI Act only applies to companies headquartered in the EU. It does not. The Act applies extraterritorially to any organization that:
- Places AI systems on the EU market (including via SaaS)
- Uses AI systems that affect EU residents, regardless of where the system is hosted
- Is a provider whose AI outputs are used within the EU by a deployer
A U.S. company using AI to screen job applicants from Europe, or a fintech processing EU customer loan applications through an AI decisioning model, is in scope. The enforcement infrastructure — national AI supervisory authorities in each EU member state — is operational and issuing its first penalties in 2026.
What High-Risk AI Compliance Actually Requires
Compliance for Annex III systems is not a checkbox. It requires building and maintaining a structured governance program across six areas:
1. Risk management system. A documented, continuous process for identifying, analyzing, and mitigating risks specific to the AI system throughout its entire lifecycle — not just at deployment.
2. Data governance. Training, validation, and testing data must be subject to documented data governance practices. This includes bias assessment — you need to be able to demonstrate that your training data doesn't systematically disadvantage protected groups.
3. Technical documentation. Before placing a high-risk system in operation, you must produce documentation that covers the system's design, development process, performance metrics, intended purpose, and known limitations. Think of it as a technical due diligence package that regulators can audit.
4. Transparency and information for users. High-risk AI systems must come with documentation for human operators — clear information about the system's capabilities, limitations, and the circumstances under which human oversight is required. AI cannot be a black box to the people responsible for its outputs.
5. Human oversight. High-risk systems must be designed to allow human operators to monitor, intervene, and override AI outputs. This is not a passive requirement — you need to demonstrate that oversight mechanisms are operational and that operators are trained to use them.
6. Accuracy, robustness, and cybersecurity. Systems must meet documented performance standards and be resilient against adversarial attacks, including prompt injection and model poisoning. This is where AI governance and AI security directly converge.
The Fines Are Not Theoretical
The EU AI Act establishes three penalty tiers:
- €35M or 7% of global revenue — prohibited AI practices (real-time biometric surveillance, social scoring)
- €15M or 3% of global revenue — non-compliance with high-risk system obligations
- €7.5M or 1.5% of global revenue — incorrect or misleading information provided to authorities
National supervisory authorities began issuing the first administrative penalties in Q1 2026. Germany's essential entity registration deadline passed in April 2026. The Netherlands requires completed self-assessments by June 2026. This is not a grace period — enforcement is active.
Your 100-Day Action Plan
The organizations that will be in the strongest position by August 2 are those that treat the next 100 days as a structured sprint, not a continuous monitoring exercise. Here's a realistic sequence:
Days 1–20: Scope and inventory. Identify every AI system your organization uses or provides. For each, assess whether it falls under any Annex III category. Don't just look at systems your engineering team built — include AI capabilities embedded in third-party SaaS tools you've deployed. Many HR, CRM, and financial platforms now include AI decisioning features that may be in scope.
Days 21–45: Gap assessment. For each in-scope system, assess your current state against the six compliance requirements above. Where do you have documentation? Where are the gaps? This assessment becomes the basis for your remediation plan and your defensible record if you're audited.
Days 46–75: Remediation. Close the highest-priority gaps. For most organizations, the critical gaps are in technical documentation, bias assessment, and human oversight processes — not in the technology itself. These are governance and process problems, not engineering problems.
Days 76–100: Register, document, and prepare for audit. High-risk AI systems must be registered in the EU database before deployment. Finalize your technical documentation, implement your risk management system, and ensure your human oversight processes are trained and operational.
The Intersection with Cybersecurity
One requirement that surprises organizations is how deeply the EU AI Act's cybersecurity mandate connects with your existing security program. Article 15 requires that high-risk AI systems be "resilient against attempts by unauthorized third parties to alter their use, outputs, or performance." This means your AI governance program needs to interface directly with your security team — threat modeling for AI systems, adversarial testing, and incident response procedures specific to AI are all required, not optional.
Organizations that treat EU AI Act compliance as a legal checkbox will fail the cybersecurity requirement. It needs to be a cross-functional program with security embedded from the start.