EU AI Act High-Risk AI Systems: What Enterprises Must Do Before August 2026
The EU AI Act obligations for high-risk AI systems apply from August 2, 2026. Article 9 requires a documented risk management system. Article 12 requires automatic record-keeping. Article 13 requires transparency to deployers. Article 14 requires human oversight. Enterprises deploying high-risk AI systems need enforcement and audit infrastructure in place before that date.
Written by Parminder Singh, Founder and CEO, DeepInspect. Last reviewed: April 24, 2026.
Which AI systems count as "high-risk"?
The EU AI Act (Regulation (EU) 2024/1689) classifies AI into four risk tiers: unacceptable (prohibited), high-risk, limited-risk, and minimal-risk. The high-risk category is defined in Annex III and covers eight domains.
- Biometric identification and categorization
- Critical infrastructure (water, gas, electricity, transport)
- Education and vocational training (admission, grading, cheating detection)
- Employment and worker management (hiring, promotion, task allocation)
- Access to essential private and public services (credit scoring, insurance, social benefits, emergency dispatch)
- Law enforcement
- Migration, asylum, and border control
- Administration of justice and democratic processes
An AI system used in one of these domains by an enterprise that offers services into the EU triggers high-risk obligations, regardless of where the enterprise is headquartered. The Act is extraterritorial in the same way GDPR is.
When do the obligations kick in?
The Act entered into force on August 1, 2024. The timeline after that:
- February 2, 2025. Prohibitions take effect: social scoring systems, manipulative AI, and untargeted facial-image scraping.
- August 2, 2025. Obligations for general-purpose AI models take effect, covering providers of foundation models.
- August 2, 2026. High-risk AI system obligations take effect. The live enforcement deadline for most enterprises.
- August 2, 2027. Full application. Transitional provisions for high-risk systems already on the market expire.
August 2, 2026 is the relevant date for CISOs and compliance leads whose enterprises deploy AI into any of the Annex III domains. Penalties for non-compliance with the high-risk obligations reach €15M or 3% of global annual turnover, whichever is higher. Penalties for supplying incorrect information to authorities reach €7.5M or 1%.
What do Articles 9, 12, 13, and 14 require?
Four articles cover the operational spine of high-risk compliance on the deployer side.
Article 9: Risk management system. A continuous, iterative, documented process for identifying, analyzing, estimating, and mitigating risks to health, safety, and fundamental rights. The risk management lifecycle runs throughout the AI system's lifetime. For a deployer this reads as: you have a documented process, you run it on a regular cadence, and you can produce the output on request.
Article 12: Automatic record-keeping. High-risk AI systems must technically enable automatic recording of events over the lifetime of the system. Logs must be traceable to inputs, outputs, and operators. Retention depends on purpose and intended use. For AI traffic that flows through a gateway, this reads as: a per-request ledger with enough fidelity to reconstruct any decision after the fact.
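As a concrete sketch, a minimal per-request ledger entry might look like the following. The field names here are illustrative assumptions, not any product's actual schema; the point is that identity, input, output, and decision path travel together in one durable record.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditRecord:
    """One ledger entry per call into a high-risk AI system (illustrative fields)."""
    request_id: str      # unique ID, so any decision can be reconstructed later
    timestamp: str       # UTC, ISO 8601
    operator: str        # authenticated identity behind the request
    model_input: str     # prompt as received at the gateway
    model_output: str    # response as returned to the application
    policy_version: str  # policy set in force when the decision was made
    decision: str        # "forwarded", "blocked", or "flagged"

def record_request(request_id: str, operator: str, model_input: str,
                   model_output: str, policy_version: str,
                   decision: str) -> AuditRecord:
    """Capture one request as a durable, serializable record."""
    return AuditRecord(
        request_id=request_id,
        timestamp=datetime.now(timezone.utc).isoformat(),
        operator=operator,
        model_input=model_input,
        model_output=model_output,
        policy_version=policy_version,
        decision=decision,
    )

entry = record_request("req-001", "claims-app@insurer.example",
                       "Assess claim #4821", "Claim appears payable.",
                       "policy-v3", "forwarded")
print(json.dumps(asdict(entry), indent=2))  # ready for append-only storage
```

The `frozen=True` flag keeps individual records immutable in memory; durability and integrity still depend on the storage layer beneath them.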
Article 13: Transparency and information to deployers. The provider must supply information sufficient for deployers to interpret the output and use it appropriately. Instructions for use, characteristics, capabilities, limitations, performance, training data summary. For enterprises that deploy AI from third-party providers, this reads as: you receive and retain this documentation, and your controls reflect the limitations the provider discloses.
Article 14: Human oversight. Natural persons must be able to understand the system's capacities and limitations, monitor its operation, intervene or override when appropriate, and stop the system if needed. The oversight must be operational. A named operator with documented authority and a concrete intervention path into the request flow.
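To make the intervention path concrete, here is a minimal sketch of a tool-call approval gate. The `OversightGate` class and its methods are hypothetical names for illustration, not an existing API: a tool invocation stays pending until a named operator approves it, and execution refuses anything else.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class ToolCall:
    tool: str
    args: dict = field(default_factory=dict)
    status: str = "pending"        # pending -> approved | blocked

class OversightGate:
    """Holds tool calls attempted by an agentic system for human review."""
    def __init__(self, operator: str):
        self.operator = operator   # named operator with documented authority
        self.pending: list[ToolCall] = []

    def submit(self, call: ToolCall) -> ToolCall:
        self.pending.append(call)
        return call

    def approve(self, call: ToolCall) -> None:
        call.status = "approved"

    def block(self, call: ToolCall) -> None:
        call.status = "blocked"

    def execute(self, call: ToolCall, run: Callable[..., Any]) -> Any:
        # the stop/override path: nothing runs without explicit approval
        if call.status != "approved":
            raise PermissionError(f"{call.tool} not approved by {self.operator}")
        return run(**call.args)

gate = OversightGate("oversight-lead")
call = gate.submit(ToolCall("send_refund", {"amount": 120}))
gate.approve(call)
print(gate.execute(call, lambda amount: f"refunded {amount}"))  # refunded 120
```

The important property is that the gate sits in the request path: a blocked or still-pending call never reaches execution, rather than being flagged after the fact.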
Article 10 (data governance) and Article 15 (accuracy, robustness, cybersecurity) apply in parallel. Most of what I cover on this blog focuses on enforcement and audit, where Articles 9, 12, and 14 concentrate their requirements on the deployer side.
What does it mean to be a "deployer" vs a "provider"?
The Act distinguishes sharply between the two roles.
A provider develops an AI system or has one developed and places it on the EU market. OpenAI is a provider of GPT-4. Anthropic is a provider of Claude. The provider carries the heavier obligations: conformity assessment, CE marking, registration in the EU database, post-market monitoring.
A deployer uses the AI system under its authority in the course of a professional activity. An enterprise that builds a hiring screener on top of an LLM is a deployer. An insurer that runs an LLM-backed claims assistant is a deployer. Article 26 lists the deployer obligations: use the system in line with instructions, ensure human oversight, monitor operation, keep logs generated by the system, inform affected individuals, cooperate with authorities.
Most enterprises occupy the deployer role. The deployer's compliance question is "did we meet our Chapter III Section 2 obligations?" Conformity assessment sits with the provider; the deployer's answer sits in the enforcement and audit plane.
What does the deployer's enforcement plane need to produce?
Working backwards from Article 12 and Article 14, five capabilities need to exist before August 2, 2026.
- Automatic per-request logging. Every call into the high-risk AI system lands in a durable record that carries identity, input, output, and decision path.
- Tamper-evident storage. The ledger is append-only with a per-record cryptographic signature, so any single record can be verified independently. Article 12's integrity expectation requires that guarantee; mutable storage does not meet it.
- Human oversight at the tool-governance layer. For AI systems with tool-calling or agentic capability, a named operator can review, approve, or block individual tool invocations the system attempts. The intervention surface lives in the request path itself, not in a separate after-the-fact dashboard. Oversight scope today is the tool-governance layer; broader response-content oversight follows the response-side enforcement roadmap below.
- Policy enforcement inline with traffic on the request side. When the request violates intended use (outside the provider's Article 13 disclosures) or crosses a risk threshold identified in the Article 9 process, the gateway blocks or flags before forwarding. Symmetric response-side enforcement on the model's reply is on the roadmap.
- Retention aligned to purpose. Logs retained for the period the use case requires, exportable in a format an EU authority or a national competent authority can read directly.
Missing any one of these at the August 2, 2026 deadline produces an exposure that scales with the enterprise's EU turnover.
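The tamper-evident storage capability above can be sketched in a few lines. This assumes a per-record HMAC-SHA256 signature as described; key management and the underlying append-only store are out of scope for the sketch.

```python
import hashlib
import hmac
import json

class TamperEvidentLedger:
    """Append-only in-memory ledger; each record carries an HMAC-SHA256 signature."""
    def __init__(self, key: bytes):
        self._key = key                # in production: a managed secret, never a literal
        self._records: list[dict] = []

    def _sign(self, payload: dict) -> str:
        msg = json.dumps(payload, sort_keys=True).encode()
        return hmac.new(self._key, msg, hashlib.sha256).hexdigest()

    def append(self, payload: dict) -> None:
        self._records.append({"payload": payload, "sig": self._sign(payload)})

    def verify(self, index: int) -> bool:
        # any single record can be checked independently of the rest
        rec = self._records[index]
        return hmac.compare_digest(rec["sig"], self._sign(rec["payload"]))

ledger = TamperEvidentLedger(key=b"demo-key")
ledger.append({"request_id": "req-001", "decision": "forwarded"})
print(ledger.verify(0))                                # True
ledger._records[0]["payload"]["decision"] = "blocked"  # simulate tampering
print(ledger.verify(0))                                # False
```

Canonical JSON serialization (`sort_keys=True`) matters: without it, two logically identical payloads could serialize differently and produce different signatures.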
How should an enterprise prepare before August 2, 2026?
Four workstreams, in the order I would run them.
Classify. List every AI system in production or planned through end-2026. For each, mark whether the use case falls into one of the eight Annex III domains. A "hiring assistant" that screens resumes is employment management. A "customer onboarding helper" that affects access to financial services is credit scoring. Borderline cases default to high-risk.
Map obligations. For each high-risk system, map the Chapter III Section 2 articles onto current controls. Identify gaps. Most enterprises find Article 12 (logging depth and integrity) and Article 14 (human oversight specificity) to be the largest gaps.
Deploy enforcement inline. Stand up a gateway in front of each high-risk AI system that handles traffic in the EU. Run it in observation mode for two to four weeks to establish a baseline, then turn on policy enforcement and audit commit ahead of the target deadline.
Rehearse a regulator request. Simulate a national competent authority request for records. Export a week of ledger entries. Verify the export includes identity, input, output, policy version, and decision path for every request. A rehearsal before August 2026 surfaces gaps while there is still time to close them.
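Part of the rehearsal can be automated. A sketch, assuming each exported ledger entry is a flat JSON object; the required field set mirrors the fields named above and is an assumption about your export format, not a regulatory schema.

```python
REQUIRED_FIELDS = {"request_id", "timestamp", "identity",
                   "input", "output", "policy_version", "decision"}

def check_export(entries: list[dict]) -> dict[int, list[str]]:
    """Return, per entry index, the required fields that are missing.

    An empty dict means every entry in the export is complete.
    """
    return {
        i: sorted(REQUIRED_FIELDS - set(entry))
        for i, entry in enumerate(entries)
        if REQUIRED_FIELDS - set(entry)
    }

export = [
    {"request_id": "r1", "timestamp": "2026-07-01T09:00:00Z",
     "identity": "app-a", "input": "…", "output": "…",
     "policy_version": "v3", "decision": "forwarded"},
    {"request_id": "r2", "timestamp": "2026-07-01T09:01:00Z"},  # deliberately incomplete
]
print(check_export(export))
# {1: ['decision', 'identity', 'input', 'output', 'policy_version']}
```

Running this over a week of exported entries turns "can we answer a records request" from a hope into a measurable pass/fail check.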
Where DeepInspect fits
DeepInspect runs as a transparent proxy in front of OpenAI, Anthropic, Azure OpenAI, Google Gemini, AWS Bedrock, and self-hosted endpoints, sitting between enterprise AI applications and the LLM.
- Gateway-level token verification at ingress, with end-user identity context propagated from the calling application's authenticated session.
- Deterministic request-side policy evaluation inline against Article 9 risk rules, applied to the prompt before it reaches the model. Symmetric response-side enforcement is on the roadmap; response content is captured in the audit record today.
- Append-only forensic ledger with a per-record HMAC-SHA256 signature that satisfies Article 12 integrity expectations.
- Human-oversight hooks for Article 14, scoped today to the tool-governance layer: individual tool invocations attempted by an agentic system can be reviewed, approved, or blocked by a named operator before they execute.
- Payload-agnostic gateway that does not impose an OpenAI-compatible schema on the application.
The enforcement plane and the audit plane are the surfaces a national competent authority inspects. Those are the surfaces DeepInspect produces.