
What the EU AI Act Means for US Enterprises with European Exposure


EXECUTIVE SUMMARY

  • Jurisdiction follows the output, not the infrastructure. Under Article 2(1)(c), the EU AI Act applies to any US enterprise whose AI produces output used by EU customers, employees, or counterparties. The IAPP confirmed that this extraterritorial reach is broader than that of the GDPR.
  • Two obligations are already enforceable. Prohibited AI practices and AI literacy requirements under Article 4 both took effect in February 2025. A third, transparency disclosures under Article 50, takes effect on August 2, 2026. Most US enterprises have not started Article 4 training programs, and August 2026 is five months away.
  • The 40% classification problem. Credit scoring, patient triage, and employment screening are explicitly high-risk. Fraud detection and algorithmic trading are explicitly not. Forty percent of enterprise AI systems fall into neither category cleanly, and for most of them, proving a valid exception costs more than building to the higher standard.
  • Penalty exposure is three-layered. Regulatory fines reach 7% of worldwide annual turnover, calculated against the global parent under ECJ precedent. The Product Liability Directive adds strict liability for non-compliant AI. Major insurers are moving to exclude AI-related liabilities from standard policies. All three can land simultaneously.
  • Compliance is an engineering decision, not a documentation exercise. The firms that reach readiness fastest are treating it that way, adopting human oversight practices, building documentation-as-code, and designing logging architecture before deployment. Firms that defer discover, 12 to 18 months in, that their conformity evidence describes a system that no longer exists.

How to read this article. If you need to determine whether your organization is in scope, start with “How an AI system in Virginia triggers European regulation.” If you need to classify your AI systems, go to How the EU AI Act classifies enterprise AI. If you are already past scope and classification and need to know what to build, skip to Six engineering decisions before August 2026. If your concern is financial exposure, start with When your insurer excludes AI.


[Figure: The Compression Gap: Strategic Compliance Milestones to 2030. EU AI Act enforcement timeline showing five compliance deadlines from February 2025 to August 2030, with the full high-risk regime arriving August 2026 and a proposed Digital Omnibus extension to December 2027.]

A US regional bank with a growing European correspondent banking book uses AI credit scoring on a centralized platform hosted in Virginia. The model was trained on US data, deployed by a US engineering team, and governed under US risk frameworks. When the bank onboarded EU-based counterparties, nobody re-examined the regulatory assumptions built into the system. Under the EU AI Act, that system will be subject to the full high-risk compliance regime when Annex III obligations take effect on August 2, 2026, with penalties reaching up to 7% of the bank’s worldwide annual turnover. Prohibited AI practice provisions and the Article 4 literacy requirement are both already in force. The bank has approximately five months to design a compliant system.

Article 2(1)(c) specifies jurisdiction based on where AI output is used, not where it is developed, hosted, or meant to operate. The Act applies whenever output reaches the European Union, without any targeting requirement, data processing connection, or intent test. The IAPP confirmed this reading in its August 2025 analysis and found the extraterritorial scope broader than that of the GDPR. Most US compliance teams modeled the AI Act as the narrower obligation, and the planning that assumption produced needs to be rebuilt from the scope question outward.

Over the past few months, I have reviewed EU AI Act exposure across a few enterprise AI deployments in banking, insurance, and healthcare. In each sector, AI systems designed exclusively for domestic use reach EU customers, employees, or counterparties through normal business expansion, and the regulatory regime they trigger was never part of the original design conversation. By the time it surfaces, the architecture is already set.

Outside counsel and compliance checklists cannot keep conformity artifacts aligned with AI systems that change every few weeks. The AI Act requires version-controlled model artifacts, decision logging that records every input and output, human oversight designed into the architecture, and model lineage tracking. Getting that right at the start costs less than rebuilding it under a deadline.


How an AI system in Virginia triggers European regulation

The EU AI Act establishes jurisdiction through three pathways.

  • Article 2(1)(a) covers providers placing AI systems on the EU market, the standard market access rule that captures companies selling or licensing AI products into Europe.
  • Article 2(1)(g) extends protections to affected persons located in the EU, regardless of where the AI system or its operator is based.
  • Article 2(1)(c) catches most US enterprises, applying to any provider or deployer whose AI system produces output used in the Union, regardless of where the provider is established.

The European Commission’s Guidelines on Prohibited AI Practices, analyzed by Orrick in April 2025, confirm that “putting into service” covers both external deployment and internal use. A US insurer that runs an AI underwriting model internally and applies it to EU risk assessments, whether directly or through a reinsurance relationship, meets that test. So does a US healthcare company whose AI patient triage platform is deployed at European clinical trial sites, and a US bank that deploys credit scoring for EU counterparties.

The ECJ reinforced this in Case C-383/23 (February 13, 2025), confirming that fines are calculated on the global parent company’s turnover, not just on the turnover of EU subsidiaries. Covington’s analysis confirmed this precedent extends to the AI Act. For a company with $100 billion in annual revenue, a 7% fine means more than $7 billion in exposure.

Most banks build credit scoring with traditional supervised machine learning, not generative AI. The GPAI obligation chain reaches banking AI anyway. When a regional bank integrates a foundation model from OpenAI or Anthropic into compliance automation, customer document processing, or risk narrative generation, and those outputs feed into workflows touching EU counterparties, the AI Act creates three separate obligation holders for the same system.

The foundation model provider (OpenAI, Anthropic, or similar) bears Article 53 obligations covering technical documentation, copyright compliance, and information that downstream users need.

When the bank builds an AI system on top of that foundation model, it becomes the AI system provider. It bears obligations matching its system’s risk classification, including conformity assessment and technical documentation under Annex IV.

The bank is also the deployer. As a deployer, it bears Article 26 obligations around monitoring and human oversight. For its high-risk credit scoring systems built on traditional ML, it must also complete a Fundamental Rights Impact Assessment under Article 27.

[Figure: The Triad of Compliance: Converging AI System Roles. Three obligation holders for one AI system: the foundation model provider with Article 53 obligations, the bank as system provider with Annex IV conformity assessment, and the bank as deployer with Article 26 monitoring and the Article 27 fundamental rights impact assessment.]

The boundaries between these three roles are where most teams get stuck. The contracts must specify who provides what documentation. The technical documentation has to flow from provider to deployer without gaps. Liability chains must be explicit about who is responsible when something goes wrong. Most teams discover these alignment problems during implementation, not during contract negotiation, and by then the architecture is harder to change.

Two obligations that apply now, regardless of risk classification

Most US enterprises have circled August 2, 2026, as the date the EU AI Act becomes real for them. That framing misses two obligations that arrived earlier and have been largely ignored.

Article 4 has required documented AI literacy training for everyone involved in operating or overseeing AI systems since February 2, 2025. I have asked many compliance practitioners about this over the past year. Very few teams have a program. Fewer have documentation. National enforcement will not fully activate until August 2026, but a missing literacy record is the kind of finding regulators reach for first, before any system gets scrutinized.

Article 50 does not take effect until August 2, 2026, but it reaches further than most teams have planned for. It covers every AI system in scope, regardless of risk classification. A customer-facing chatbot that does not trigger any high-risk category still has to identify itself as AI. Images, audio, video, and text generated by AI require machine-readable watermarks. Deepfakes require disclosure of their artificial origin. A draft Code of Practice sets two tiers for that disclosure standard: fully AI-generated and AI-assisted. If you run chatbots in EU markets, this applies to them, and the supporting architecture must be in place before August.
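
To make the chatbot case concrete, here is a minimal Python sketch of the supporting plumbing: a response wrapper that prepends a self-identification notice and attaches machine-readable provenance metadata. The disclosure wording, field names, and tier labels are illustrative assumptions, not the Act’s prescribed format.

```python
# A minimal sketch of Article 50-style transparency plumbing for a chatbot.
# The disclosure text, metadata fields, and tier labels below are assumed
# for illustration; they are not prescribed wording from the Act.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum


class ContentOrigin(Enum):
    FULLY_AI_GENERATED = "fully-ai-generated"  # tier 1 label (assumed)
    AI_ASSISTED = "ai-assisted"                # tier 2 label (assumed)


@dataclass
class ChatbotResponse:
    text: str
    origin: ContentOrigin
    model_version: str
    provenance: dict = field(init=False)

    def __post_init__(self) -> None:
        # Machine-readable provenance travels with the content so downstream
        # systems can detect AI origin without parsing the prose.
        self.provenance = {
            "ai_generated": True,
            "origin": self.origin.value,
            "model_version": self.model_version,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        }


def with_disclosure(raw_output: str, model_version: str) -> ChatbotResponse:
    """Wrap raw model output with a human-readable AI self-identification."""
    disclosed = f"[You are chatting with an AI system.]\n{raw_output}"
    return ChatbotResponse(disclosed, ContentOrigin.FULLY_AI_GENERATED, model_version)


resp = with_disclosure("Your claim was received on March 3.", "support-bot-2.4.1")
print(resp.text)
print(resp.provenance)
```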

How the EU AI Act classifies enterprise AI, and where classification breaks down

The high-risk classification under the EU AI Act creates some of the clearest decision boundaries in the regulation and some of the most ambiguous, with sharp differences across banking, insurance, and healthcare.

[Figure: High-Risk Classification by Enterprise Vertical. EU AI Act high-risk classification matrix for banking, insurance, and healthcare, showing which AI systems are high-risk under Annex III and which are excluded, with 40 percent of systems falling in the ambiguous middle.]

Credit scoring is high-risk; fraud detection and algorithmic trading are not

Credit scoring and creditworthiness assessment for natural persons is explicitly high-risk under Annex III, Category 5(b). The Harvard Data Science Review confirmed in Summer 2025 that most types of automated credit scoring and credit decisioning fall under this classification. Fraud detection, by contrast, is explicitly carved out under Recital 58, which states that AI fraud detection should not be considered high-risk because it does not determine individual legal rights. Algorithmic trading is also excluded; the Oxford Law Blog confirmed this in February 2026. The EBA’s November 2025 mapping exercise found that the AI Act is complementary to existing banking regulation and contains no significant contradictions.

A bank running both credit scoring and fraud detection through the same AI infrastructure faces two different compliance regimes for systems that share data pipelines, model training infrastructure, and deployment platforms. The compliance controls cannot be a single undifferentiated layer applied to the whole infrastructure. They have to be wired in at the point where credit decisions and fraud alerts diverge, a distinction that has to be made at design time, not discovered during a compliance review.

Life and health insurance AI triggers high-risk classification; property and casualty does not

The EU AI Act’s high-risk classification in insurance is narrow. Only AI used to assess risk or set pricing for life and health insurance qualifies, under Annex III, Category 5(c). Earlier drafts cast a wider net. The 2021 Slovenian Presidency compromise had included all insurance underwriting, premium setting, and claims assessment. The final text stripped most of that out. Property and casualty insurance AI is not explicitly high-risk. That distinction is significant operationally because EIOPA data shows 50% of non-life carriers and 24% of life insurers already run AI in production, and most of those systems now sit outside the high-risk regime.

Most insurance AI falls outside the EU AI Act’s high-risk classification, and until recently, that left a governance gap. EIOPA’s August 2025 Opinion on AI Governance closes it. The Opinion does not invent new obligations. It maps existing IDD and Solvency II governance principles to insurance AI, providing carriers using AI for property and casualty pricing, claims routing, or customer segmentation with a clear framework to build against.

An MDPI study across four pan-European insurers analyzed 12.4 million observations and found that debiasing AI models was the cheapest way to manage regulatory capital. Insurers that fixed bias in their models cut their worst-case fine exposure by EUR 35 million. That math changes the budgeting logic: bias mitigation stops being a compliance expense and becomes a capital efficiency play.

Medical device AI faces dual regulation with a later compliance deadline

AI-powered medical devices follow a different path to high-risk classification. They qualify through Article 6(1) and the Medical Devices Regulation, not through Annex III directly, and their compliance deadline is August 2027, a full year after the general high-risk regime takes effect. However, patient triage is classified as high-risk under Annex III on its own terms, because a bad output directly affects whether a patient gets treated.

Drug discovery AI sits at the other end. The EMA noted in September 2024 that when an AI model in a drug discovery pipeline produces a bad prediction, the pharmaceutical company absorbs the cost through a failed experiment or a wasted research cycle. No patient is affected. No individual’s rights are determined. Hence, EMA considers it to have a low regulatory impact.

Healthcare companies face dual regulation for AI medical devices (the AI Act and the Medical Devices Regulation running in parallel), but the MDCG 2025-6 guidance from June 2025 allows a single conformity assessment and one integrated quality management system. That roughly halves the practical burden.

The EMA-FDA joint AI principles, released in January 2026, push this further. Commissioner Olivér Várhelyi described them as a first step of renewed EU-US cooperation in novel medical technologies. For US healthcare companies with EU exposure, converging standards across both markets opens a realistic path to a single compliance architecture serving both jurisdictions.

Forty percent of enterprise AI systems resist clean EU AI Act classification

The appliedAI Institute’s study of 106 enterprise AI systems found 18% were clearly high-risk, 42% were clearly low-risk, and 40% were in the ambiguous middle. Article 6(3) pulls more of those ambiguous systems into the high-risk category than most teams expect. Any AI system that profiles natural persons is automatically high-risk, no exceptions. If your system profiles customers for cross-selling, risk stratification, or behavioral segmentation, the profiling override applies regardless of what domain classification would otherwise suggest.

Many enterprises end up building more systems to high-risk standards than the regulation strictly requires, because documenting a valid exception demands ongoing justification every time the system changes. I have seen cases where the team spent four months building a defensible exception case for a system they could have made compliant in six weeks. For the 40% of systems in the ambiguity zone, the cheaper path is usually to build to the higher standard and eliminate the classification question altogether.

US state AI laws and the EU AI Act converge on the same compliance calendar

Over 1,000 AI-related bills were introduced across all 50 states in 2025, and the NCSL reported 38 states adopted approximately 100 measures. The result is a patchwork that landed on a compliance calendar almost identical to the EU AI Act’s, and in some areas goes further.

California’s SB 53 took effect January 1, 2026, requiring frontier model developers to publish risk frameworks and report safety incidents within 15 days. Texas’ HB 149 went live on the same date, prohibiting AI for restricted purposes and creating affirmative defenses for NIST AI RMF compliance. Illinois’ HB 3773 is already enforceable and goes the furthest of any US framework: it makes AI-based employment discrimination a civil rights violation under the Illinois Human Rights Act, with individual enforcement remedies through the IDHR. Colorado’s AI Act takes effect June 30, 2026, with deployer obligations that, in some covered categories, exceed the EU Act’s, including annual impact assessments and consumer appeal rights. Five weeks after Colorado, the EU’s full high-risk regime takes effect on August 2, 2026.

[Figure: The 2026 Regulatory Collision Strip. Compliance calendar showing Illinois already live, California and Texas taking effect January 2026, Colorado arriving June 2026, and the EU AI Act full high-risk regime on August 2, 2026.]

That five-month window between now and August is where the regulatory picture gets complicated. The Trump administration’s Executive Order 14365 directly conflicts with several of these state frameworks, characterizing some bias mitigation as compelling deceptive outputs. It created a DOJ AI Litigation Task Force to challenge state laws. Baker Botts concluded in January 2026 that the EO cannot overturn existing state law without an act of Congress or a court ruling. Senator Cruz’s proposed 10-year moratorium on state AI laws was defeated by a nearly unanimous vote in the Senate. The state obligations stand, and they run in parallel with the EU’s.

The only strategy that survives this overlap is what Jones Walker identified in early 2026: build compliance to the strictest jurisdiction and adapt downward. Both Colorado and the EU reference ISO 42001 as a governance foundation, and Colorado creates a rebuttable presumption of compliance for NIST AI RMF adopters, which means the same documentation serves both jurisdictions. Baker Botts noted in September 2025 that adopting these frameworks enterprise-wide satisfies state law and evidences merit-based compliance for federal regulators. ZwillGen adds a practical note on the EO conflict: frame bias mitigation as measurement, calibration, and governance rather than manufacturing falsehoods. That framing satisfies EU bias requirements without triggering the conflict the EO creates, and enterprises that document the distinction clearly, with audit trails showing what their bias detection measures do and why, can satisfy both regimes simultaneously.

Why EU AI Act compliance built after deployment fails

Most enterprises follow the same sequence. They hire outside counsel, draft policies, file conformity assessment templates on schedule, and consider the compliance question answered. The problem surfaces 12 to 18 months later, when engineers modify the model and no one updates the conformity documentation. By that point, the documentation describes a system that no longer exists, the audit evidence is wrong, and the team that wrote the original policies has moved on.

The scope of what compliance actually requires is wider than most teams realize. Articles 8 through 15 impose seven interlocking obligations on high-risk systems that must work together. Article 9 mandates a continuous risk management system covering intended use and reasonably foreseeable misuse, with specific attention to effects on persons under 18 and vulnerable groups. Article 10 requires data governance covering collection provenance, bias detection, and gap identification, and permits processing sensitive personal data strictly for bias detection under GDPR safeguards.

The transparency and robustness requirements add further depth. Article 13 requires that deployers can meaningfully interpret system output, including per-subgroup accuracy metrics. Article 15 requires accuracy, robustness, and cybersecurity protections against AI-specific threats, such as data poisoning, adversarial examples, and attacks on model confidentiality. Articles 11 and 12 handle documentation and logging. Article 14 covers human oversight. These seven obligations are designed as an integrated system. Satisfying them individually while ignoring how they interact is how teams end up with conformity evidence that looks complete but falls apart under audit.

Documentation drift is a compliance failure, not an administrative one

Systima.ai’s February 2026 framework identified the core problem. When legal documentation is not derived from the actual system state, it drifts. In many AI systems, especially in financial services, documentation and system specifications diverge within 12 months of deployment, and in many of those cases the divergence is material enough to invalidate compliance conclusions: the documentation accurately describes a version of the system that no longer exists.

This is the same failure pattern I describe as Version Drift in my broader work on enterprise AI architecture. A model gets updated, its feature engineering changes, or a new data source is connected, but the conformity assessment still reflects the original configuration. In compliance, Version Drift produces assessments that appear valid until someone checks the version date. Nobody catches the gap until an audit or an incident forces a reconciliation.

The architectural fix lives in the six decisions below

The fix is architectural, not procedural, and it applies to more than documentation. Logging, human oversight, and governance infrastructure all face the same version alignment problem. The six engineering decisions section lays out the specific approach for each.

The emerging concept of an AI Bill of Materials, analogous to software SBOMs, adds a useful layer. It provides a structured inventory of all AI system components mapped to Annex IV’s documentation requirements. Organizations adopting it now find that it simplifies both the initial conformity assessment and the ongoing maintenance that follows every system change.
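
As a rough illustration of the idea, here is what a minimal AIBOM entry could look like in Python, assuming a simple in-house schema. The component fields and the Annex IV mapping strings are placeholders, not an official template.

```python
# A minimal AIBOM sketch: a structured inventory of AI system components,
# each mapped to the Annex IV documentation item it evidences.
# The schema below is an assumed in-house format, not an official one.
from dataclasses import dataclass, asdict
import json


@dataclass(frozen=True)
class AIBOMComponent:
    name: str              # a model, dataset, or supporting library
    kind: str              # "model" | "dataset" | "library" (assumed taxonomy)
    version: str
    source: str            # provenance: vendor, registry URL, or internal repo
    annex_iv_section: str  # which Annex IV item this component evidences


def export_aibom(components: list[AIBOMComponent]) -> str:
    """Serialize the inventory so it can be versioned next to the model code."""
    return json.dumps([asdict(c) for c in components], indent=2)


inventory = [
    AIBOMComponent("credit-scoring-gbm", "model", "3.2.0",
                   "internal://models/credit", "development process"),
    AIBOMComponent("eu-counterparty-features", "dataset", "2024-11",
                   "internal://lake/features", "training data and provenance"),
]
print(export_aibom(inventory))
```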

Conformity assessment is less burdensome than most teams expect

Under Article 43, providers of most Annex III high-risk systems (categories 2 through 8) can follow internal self-assessment if they have applied harmonized standards in full. Third-party assessment is required only for remote biometric identification systems or when harmonized standards have not been applied and no common specifications exist. As of March 2026, CEN/CENELEC harmonized standards are not yet published, which means providers cannot claim presumption of conformity through standards alone and must demonstrate compliance through detailed alternative documentation until those standards arrive.

Banks and insurers carry less of a gap than most other industries because Article 19(2) allows them to satisfy the AI Act’s quality management system requirements through their existing sectoral QMS, and Article 26(5) lets deployers fulfill monitoring obligations through internal governance already required under financial services law. Much of what CRD/CRR and Solvency II already demand overlaps directly with the AI Act’s high-risk requirements. I have seen financial services teams begin a gap analysis, expecting to build from scratch, only to discover that 40 to 60 percent of the compliance infrastructure was already in place under existing regulations.

Logging across three regulatory frameworks is a single design problem

Article 12 requires the automatic recording of every decision made by a high-risk system. That means the exact input received, the model version that processed it, the raw output, any post-processing applied, the final output delivered, and any human intervention that occurred. For agentic systems, the requirement extends to each tool call, intermediate reasoning step, and routing decision.
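
As a sketch, the record for a single decision can be captured in one append-only structure. The schema below mirrors the elements listed above but is an assumption, not a prescribed format.

```python
# A minimal per-decision log record covering the elements Article 12 calls
# for; the field names and JSONL storage choice are illustrative assumptions.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from typing import Any
import json


@dataclass
class DecisionRecord:
    system_id: str
    model_version: str              # the exact artifact that processed the input
    received_input: dict[str, Any]
    raw_output: Any
    post_processing: list[str]      # transformations applied after inference
    final_output: Any
    human_intervention: str | None  # reviewer action, or None if untouched
    timestamp: str = ""

    def __post_init__(self) -> None:
        self.timestamp = datetime.now(timezone.utc).isoformat()


def append_log(record: DecisionRecord, path: str = "decisions.jsonl") -> None:
    # Append-only JSONL keeps records immutable and replayable for audit.
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")


append_log(DecisionRecord(
    system_id="credit-scoring", model_version="3.2.0",
    received_input={"counterparty": "EU-123"}, raw_output=0.87,
    post_processing=["threshold@0.8"], final_output="approve",
    human_intervention=None))
```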

The logging requirement alone is substantial. The problem is that it collides with GDPR’s deletion requirements and SOC 2/SOX retention windows. These three frameworks apply to the same data in the same system, and the architecture that satisfies all three has to be designed from the start. The specific approach, including the crypto-shredding pattern most enterprises are converging on, is covered in the logging decision below.

Human oversight under the EU AI Act: three patterns with different costs

Article 14 requires appropriate human oversight before high-risk AI decisions take effect. Three implementation patterns satisfy this requirement. The choice has to be made at design time, because retrofitting oversight onto a system built for autonomous operation breaks both the user experience and the decision throughput that justified the business case.

[Figure: The Human-AI Oversight Spectrum: Latency vs. Control. Three Article 14 patterns from high control to high autonomy: HITL for loan approvals at 30 to 40 percent latency, HOTL for fraud detection at 5 to 10 percent latency, and HOVL for underwriting with minimal latency.]

Human-in-the-loop (HITL) requires a human reviewer to approve every decision before it executes. Loan approvals are the clearest use case. The cost of wrongly declining a creditworthy borrower or approving a bad loan is high enough that adding latency to the process is a reasonable price. In a typical credit decision workflow, HITL adds 30 to 40% latency. What the reviewer adds in return is judgment the model cannot provide. A loan officer with ten years of relationship context sees things an algorithm’s risk score will miss, especially in borderline cases where model confidence is low. The pattern fits when the volume of decisions is low enough that a human can review each one without becoming a bottleneck.

Human-on-the-loop (HOTL) allows autonomous operation with real-time monitoring and intervention capability. It fits high-volume contexts like fraud detection, where reviewing every transaction is impossible but catching anomalies quickly matters. Latency impact runs 5-10%, mostly from monitoring infrastructure overhead. The risk is automation bias. Melanie Fink’s SSRN paper from February 2025 warned that pro forma rubber-stamping of AI decisions does not satisfy Article 14. Monitoring dashboards have to require genuine engagement. If a reviewer can approve a decision with a single frictionless click, the oversight is not genuine and the system is not compliant.

Human-over-the-loop (HOVL) has humans define decision boundaries while AI operates autonomously within them, with periodic aggregate review. This fits routine insurance underwriting within established guidelines, where humans set and update the parameters rather than reviewing individual cases. Latency impact is minimal. The system design is more complex because boundary definitions need to be precise enough to be auditable under Article 14.

HOVL is the pattern that most teams I work with eventually settle on for their highest-volume processes. The AI operates within human-defined parameters, exceptions route to manual review, and most decisions flow at speed. That is Decision Velocity in practice under a compliance constraint. Throughput stays high because the human contribution is concentrated where it matters, in boundary-setting and exception handling, rather than spread across every transaction.
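
As a sketch of the mechanics, boundary-based routing can be as simple as a versioned, human-owned parameter object that the decision path consults on every transaction. The thresholds and field names here are illustrative assumptions.

```python
# A minimal HOVL routing sketch: humans set versioned boundaries, the model
# decides inside them, and anything outside escalates to manual review.
from dataclasses import dataclass


@dataclass(frozen=True)
class DecisionBoundary:
    max_amount: float           # human-set and auditable
    min_model_confidence: float
    version: str                # versioned so Article 14 audits can trace changes


def route(amount: float, confidence: float, b: DecisionBoundary) -> str:
    """Return 'auto' when inside human-defined bounds, else 'manual_review'."""
    if amount <= b.max_amount and confidence >= b.min_model_confidence:
        return "auto"
    return "manual_review"


bounds = DecisionBoundary(max_amount=50_000, min_model_confidence=0.90,
                          version="2026-03-01")
print(route(12_000, 0.95, bounds))  # auto: flows at speed
print(route(75_000, 0.99, bounds))  # manual_review: outside the boundary
print(route(12_000, 0.70, bounds))  # manual_review: low model confidence
```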

The regional bank from the opening illustrates why the oversight architecture must be granular. The credit scoring system needs HITL or HOVL for EU counterparty decisions because those are high-risk. The fraud detection system on the same infrastructure can run HOTL because fraud detection is not high-risk, though it is still subject to Article 50 transparency. Both oversight patterns must operate simultaneously on shared infrastructure. That is an engineering decision, not a policy decision, and it has to be made before the system is built.

Agentic AI systems and the EU AI Act’s stop button requirement

Article 14(4)(e) requires that every autonomous agent in a high-risk context support immediate interruption. Multi-step agents must log each reasoning step in a way that allows full reconstruction. Enterprises deploying agentic AI for compliance automation, document processing, or customer-facing workflows are already in scope, and most of the agent architectures I see in production were not designed with these constraints in mind.

Agent orchestration in enterprise settings requires clear responsibility boundaries between provider and deployer obligations. Article 25’s substantial modification threshold, which the Arnold & Porter August 2025 analysis placed at more than one-third of the original training compute, determines when fine-tuning or orchestrating multiple GPAI models triggers full provider obligations. That threshold matters because an enterprise that fine-tunes a foundation model from a provider such as OpenAI or Meta may cross from deployer to provider without realizing it. Organizations designing agent architectures need those boundaries codified in contracts and technical documentation before deployment.

For multi-agent systems, every tool call, routing decision, and intermediate reasoning step must be logged in a way that is attributable to a specific agent and recoverable for audit. Every routing decision must trace back to a defined decision boundary. The stop-button requirement adds a harder constraint. The interrupt path has to be tested under realistic conditions, meaning an operator must be able to halt a multi-step agent mid-execution and reconstruct exactly where it was, what it had done, and what it was about to do. In practice, that forces architectural choices that most orchestration layers do not make by default. State must be externalized so it survives an interruption. Agent coordination must be designed so that stopping one agent does not corrupt the work of others. These are foundational design decisions that have to be made before the first agent is deployed.
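
Here is a minimal Python sketch of that shape: state is checkpointed to append-only storage after every step, and a stop signal is checked before each action, so an interrupt leaves a fully reconstructable trail. The structures are illustrative, not a specific framework’s API.

```python
# An interruptible agent loop sketch: externalized state plus a stop signal.
# The run/checkpoint structures are assumptions for illustration.
import json
import threading


class AgentRun:
    def __init__(self, run_id: str, stop_event: threading.Event):
        self.run_id = run_id
        self.stop_event = stop_event
        self.path = f"{run_id}_state.jsonl"

    def checkpoint(self, step: int, action: str, detail: dict) -> None:
        # Append-only external state: survives interruption and lets an
        # auditor reconstruct what the agent did and was about to do.
        with open(self.path, "a") as f:
            f.write(json.dumps({"step": step, "action": action, **detail}) + "\n")

    def execute(self, plan: list[str]) -> str:
        for i, action in enumerate(plan):
            if self.stop_event.is_set():  # the Article 14(4)(e)-style stop button
                self.checkpoint(i, "interrupted", {"pending": plan[i:]})
                return "halted"
            result = f"executed:{action}"  # stand-in for a real tool call
            self.checkpoint(i, action, {"result": result})
        return "completed"


stop = threading.Event()
run = AgentRun("demo-run", stop)
print(run.execute(["fetch_documents", "extract_fields", "draft_summary"]))
```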

The EU AI Act follows the GDPR trajectory, but compliance requires engineering, not process

Crowd Research Partners found 60% of organizations were not compliant with GDPR’s May 2018 deadline. EY and Bloomberg estimated Fortune 500 companies spent $7.8 billion combined on GDPR compliance, with individual large enterprises spending between $15 million and $70 million each. GDPR has generated EUR 5.88 billion in total fines since enforcement began. The organizations that built compliance infrastructure before the deadline built it once and maintained it. Those who waited paid the same cost plus a remediation premium under enforcement pressure.

The EU AI Act is following the same pattern. The EU AI Office has had jurisdiction over GPAI models since August 2025, with the power to request documentation, conduct model evaluations, and request access to source code. Finland became the first member state with full AI Act enforcement powers in December 2025. Germany is designating the Bundesnetzagentur. Dozens of major AI providers, including OpenAI, Google, Microsoft, Amazon, and Anthropic, signed the GPAI Code of Practice in August 2025. Meta declined and faces enhanced scrutiny. No formal penalties have been imposed yet. The enforcement pattern from GDPR, quiet initial years followed by nine-figure fines, should be treated as a planning assumption.

GDPR was primarily a process and documentation challenge. Organizations met it by changing how they collected data, updating privacy notices, and building consent management. Enterprises that started late eventually caught up; it was painful but possible. AI Act compliance, by contrast, requires engineering changes to production AI systems. Human oversight has to be designed into system architecture. Logging infrastructure has to be built. Explainability features have to be implemented. Models may need to be retrained. An organization that deferred GDPR compliance could close the gap through process changes over a few intense quarters. An organization that defers AI Act compliance faces a fundamentally harder problem, because retrofitting engineering changes onto production systems takes longer, costs more, and carries a higher risk of breaking the systems that generate revenue.

The readiness data suggest most organizations are not on track. OdiseIA’s Richard Benjamins, writing in MIT Sloan Management Review, stated that two years is the minimum preparation time. An MIT Sloan and BCG expert panel found that only 20% of responsible AI experts believe organizations will be ready for phased-in requirements. Morrison Foerster found businesses consistently need at least 12 months to comply with even a single standard. Cost estimates for large enterprises with high-risk AI systems converge around $8 to $15 million, with mid-size companies at $2 to $5 million, according to McKinsey-aligned analyses. The European Commission’s CEPS study estimated total annual governance cost per AI model at approximately EUR 52,227. These numbers are manageable when planned for. They become significantly worse when compressed into a reactive timeline under enforcement pressure.

The deeper problem is the gap between claiming governance and operating it. SecurePrivacy’s 2025 assessment found that only 18% of large enterprises have fully implemented AI governance frameworks, despite 90% using AI daily. Deloitte’s Q4 2024 survey found 87% of executives say they have AI governance frameworks, but fewer than 25% have operationalized them. Regulators do not ask whether you have a framework document. They ask for evidence that the framework runs, that decisions are logged, that oversight is genuine, and that documentation reflects the current system. The 62-point gap between “we have a framework” and “it actually works” is where enforcement will land hardest.

[Figure: The Execution Iceberg: Visible Policy vs. Hidden Gap. 87 percent of executives claim governance frameworks above the waterline, but fewer than 25 percent have operationalized them, leaving a 62-point gap, based on Deloitte and SecurePrivacy data.]

When your insurer excludes AI, and the EU AI Act creates strict liability

The Financial Times reported in November 2025 that AIG, Great American, and WR Berkley are seeking regulatory approval to exclude AI-related liabilities from corporate policies. WR Berkley filed a proposed exclusion covering any actual or alleged use of AI, though insurance experts question whether language that broad will survive regulatory review. QBE is moving in the opposite direction, introducing the first major endorsement explicitly covering limited fines under the EU AI Act. Coalition now offers deepfake incident coverage under cybersecurity policies. The AI insurance market is projected to reach approximately $4.7 billion in premiums by 2032.

Silent AI coverage is ending

Most enterprises today have AI risk covered by accident. Their traditional corporate policies were written before AI was a category, so AI-related claims fall under general liability, product liability, or professional indemnity. Insurers call this silent AI coverage. WTW’s Dr. Anat Lior described the state of the market in December 2025 and predicted that silent coverage is closing. Governance frameworks will become prerequisites for coverage, the same way cybersecurity programs became prerequisites for cyber insurance a decade ago. No governance framework, no coverage. Gallagher warned organizations to re-evaluate their entire insurance portfolio, because AI exposure does not sit neatly in one policy. It flows into employment practices liability, product liability, medical malpractice, and directors and officers coverage.

The Product Liability Directive adds strict liability for non-compliant AI

The Product Liability Directive (EU) 2024/2853, with a member state transposition deadline of December 9, 2026, explicitly includes software and AI as products subject to strict liability. If an AI system does not comply with the AI Act, that non-compliance is treated as evidence of a product defect. The person harmed does not need to prove the company was negligent. The defect is presumed.

For the regional bank from the opening, this creates a three-layer exposure. A non-compliant credit scoring system triggers regulatory penalties up to 7% of worldwide turnover. The same system creates product liability exposure based on that presumed defect. And if the insurer has excluded AI-related liabilities, there is no coverage to absorb either hit. All three can land simultaneously.

[Figure: The Triple Threat: Concurrent AI Liability Exposure. Regulatory fines up to 7 percent of worldwide turnover, strict product liability under the Product Liability Directive with a presumed defect, and insurance coverage exclusions from major insurers including AIG and WR Berkley.]

The AI Liability Directive was withdrawn in October 2025 after member states could not reach consensus, leaving no EU-level harmonization of AI fault liability. Damage claims now revert to national tort law in each member state. MEP Axel Voss criticized the withdrawal as driven by pressure from industry lobbyists. The withdrawal leaves a gap for cross-border fault claims, but the Product Liability Directive already covers the scenario that matters most. If your AI system is non-compliant and it causes harm in the EU, strict liability applies. The withdrawn directive would have made fault-based claims easier too, but the strict liability path exists without it.

Six engineering decisions before the EU AI Act’s August 2026 deadline

The decisions that determine whether your AI systems are compliant by August 2026 are engineering decisions, not legal ones. Here are the six I see teams getting wrong most often.

1. Map exposure before classifying risk

Before you classify anything, answer one question. Which of your AI systems produce output that reaches EU persons? Build a system inventory mapped to output destinations, not deployment locations. A system hosted in New Jersey that scores EU counterparties is in scope. A system hosted in Frankfurt that scores only US customers may not be. Get this mapping wrong and every classification decision that follows is applied to the wrong set of systems.
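
As a sketch, with illustrative system names and a deliberately abbreviated country set, the inventory keys on output destinations and ignores hosting location entirely:

```python
# Exposure mapping sketch: scope follows where output is used, not where
# the system is hosted. Names and the country set are illustrative.
from dataclasses import dataclass

EU_COUNTRIES = {"DE", "FR", "IE", "NL", "IT", "ES"}  # abbreviated for the sketch


@dataclass(frozen=True)
class AISystem:
    name: str
    hosted_in: str
    output_destinations: frozenset[str]  # where the output is actually used

    @property
    def in_eu_scope(self) -> bool:
        # Hosting location is deliberately ignored: Article 2(1)(c) keys on use.
        return bool(self.output_destinations & EU_COUNTRIES)


inventory = [
    AISystem("credit-scoring", "US-Virginia", frozenset({"US", "DE", "FR"})),
    AISystem("us-marketing-model", "DE-Frankfurt", frozenset({"US"})),
]
for s in inventory:
    print(f"{s.name} (hosted {s.hosted_in}): EU scope = {s.in_eu_scope}")
```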

2. Apply the profiling override before any domain-specific analysis

Article 6(3) is the rule most teams overlook. If your AI system profiles people for any purpose, it is high-risk. Cross-selling, risk stratification, behavioral segmentation: it does not matter. Profiling natural persons automatically triggers high-risk classification, regardless of the system’s primary function. Apply this check first, before any other classification work. It clears up most of the ambiguity for systems in the middle zone, and in many cases building to high-risk standards from the start is cheaper than maintaining an exception case you have to re-justify every time the system changes.
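
A short sketch of that ordering, with illustrative category labels; the point is only that the profiling check runs before any domain analysis:

```python
# Classification ordering sketch: the Article 6(3) profiling override is
# applied first, before any domain-specific analysis. Labels are illustrative.
def classify(system: dict) -> str:
    # Step 1: profiling of natural persons is high-risk regardless of domain.
    if system.get("profiles_natural_persons"):
        return "high-risk (Article 6(3) profiling override)"
    # Step 2: only then consult explicit Annex III categories and carve-outs.
    if system.get("annex_iii_category"):
        return f"high-risk (Annex III {system['annex_iii_category']})"
    if system.get("explicit_carve_out"):
        return f"not high-risk ({system['explicit_carve_out']})"
    return "ambiguous: consider building to the high-risk standard"


print(classify({"profiles_natural_persons": True}))            # cross-sell segmentation
print(classify({"annex_iii_category": "5(b) credit scoring"}))
print(classify({"explicit_carve_out": "Recital 58 fraud detection"}))
print(classify({}))                                            # the 40% middle zone
```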

3. Choose the human oversight pattern before building the system

Human-in-the-loop, human-on-the-loop, and human-over-the-loop cannot be retrofitted cleanly. The oversight pattern determines your latency architecture, your monitoring infrastructure, and how reviewers engage with AI outputs. That is why it has to be chosen before the system is built.

Match the pattern to the decision. HITL for irreversible high-stakes decisions like loan approvals, where every output gets human review. HOTL for high-volume contexts like fraud detection, where the system runs autonomously but reviewers monitor in real time and intervene on anomalies. HOVL for parameter-bounded operations like routine underwriting, where humans set the boundaries and the AI operates within them. Document why you chose the pattern you chose. Auditors will ask, and Article 14 compliance depends on showing that oversight is genuine.

4. Version-control compliance artifacts alongside model code

Conformity documentation in a Word file or SharePoint folder will diverge from system reality within twelve months. Engineering teams and compliance teams move on different cycles, and neither team’s workflow triggers an update to the other’s artifacts. The fix is to treat compliance documentation the same way you treat code. Version-control it. Generate it from deployment specifications. Update it through the same review process as code changes. When an engineer modifies a model, the compliance artifact should reflect that change automatically. For the 10-year retention requirement under Article 19, manual document maintenance at scale is prohibitively expensive. Generating documentation from actual system state is how you prevent Version Drift, where a compliant system gradually becomes non-compliant because the documentation stopped keeping up.
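
A minimal sketch of the generation step, assuming a simple JSON deployment spec; the spec fields and the rendered template are placeholders:

```python
# Documentation-as-code sketch: the conformity artifact is rendered from the
# same deployment spec the pipeline reads, so a model change cannot leave the
# documentation behind. Spec format and template are assumed for illustration.
import json


def render_conformity_doc(spec_path: str) -> str:
    with open(spec_path) as f:
        spec = json.load(f)  # the same file the deployment pipeline consumes
    return (
        f"Technical Documentation: {spec['system_name']}\n"
        f"- Model version: {spec['model_version']}\n"
        f"- Training data snapshot: {spec['data_snapshot']}\n"
        f"- Oversight pattern: {spec['oversight_pattern']}\n"
    )


# In CI, this runs on every merge that touches the spec, so the artifact is
# regenerated in the same review cycle as the code change.
spec = {"system_name": "credit-scoring", "model_version": "3.2.0",
        "data_snapshot": "2024-11", "oversight_pattern": "HOVL"}
with open("deploy_spec.json", "w") as f:
    json.dump(spec, f)
print(render_conformity_doc("deploy_spec.json"))
```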

5. Design logging for three regulatory frameworks from the start

Article 12 requires comprehensive decision logging. GDPR requires deletion when purpose ends. SOC 2 and SOX carry their own retention windows. These three requirements conflict. The logging architecture has to satisfy all three from the start, using crypto-shredding to handle GDPR erasure while preserving log structure for AI Act and financial regulation compliance. I have seen teams build logging for one framework and then spend more retrofitting the other two than they would have spent designing for all three at once.
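
A minimal sketch of the crypto-shredding pattern, using the Fernet API from the cryptography package. The in-memory key vault stands in for a real key management service.

```python
# Crypto-shredding sketch: personal fields are encrypted per data subject.
# GDPR erasure destroys the key; the log row stays intact for AI Act and
# SOX retention. The dict-based key vault stands in for a real KMS.
from cryptography.fernet import Fernet

key_vault: dict[str, bytes] = {}  # per-subject keys (assumed KMS stand-in)


def encrypt_field(subject_id: str, value: str) -> bytes:
    key = key_vault.setdefault(subject_id, Fernet.generate_key())
    return Fernet(key).encrypt(value.encode())


def erase_subject(subject_id: str) -> None:
    # Destroying the key makes the ciphertext irrecoverable without touching
    # the append-only log structure itself.
    key_vault.pop(subject_id, None)


pii_token = encrypt_field("subject-42", "jane.doe@example.com")
log_row = {"subject": "subject-42", "decision": "approved", "pii": pii_token}

erase_subject("subject-42")
recoverable = "subject-42" in key_vault
print(f"PII recoverable: {recoverable}; log row retained: {log_row['decision']}")
```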

6. Build one governance foundation mapped to every jurisdiction

Start with NIST AI RMF as the jurisdiction-agnostic risk management foundation. Layer ISO 42001 on top as the auditable management system. Then build jurisdiction-specific mappings for EU Article 14, Colorado’s annual impact assessments, and Illinois’ discrimination standards. Yes, the upfront cost is higher than building for one jurisdiction. But every new requirement maps onto existing infrastructure rather than triggering a new build. Over three years, the total cost is substantially lower. Colorado establishes a rebuttable presumption of compliance for NIST AI RMF adopters, and both Colorado and the EU reference ISO 42001, meaning the same documentation serves multiple jurisdictions without duplication.
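
As a sketch of what that mapping can look like as data, with illustrative control IDs and article citations, a single control records which frameworks it implements and which jurisdictions it satisfies:

```python
# Strictest-jurisdiction control map sketch: one control, many regimes.
# Control IDs and the mappings are illustrative assumptions.
CONTROL_MAP = {
    "bias-measurement": {
        "implements": ["NIST AI RMF: MEASURE", "ISO 42001 management system"],
        "satisfies": {
            "EU AI Act": "Article 10 data governance and bias detection",
            "Colorado AI Act": "input to the annual impact assessment",
            "Illinois HB 3773": "evidence against discrimination claims",
        },
    },
    "human-oversight": {
        "implements": ["NIST AI RMF: MANAGE", "ISO 42001 management system"],
        "satisfies": {"EU AI Act": "Article 14 human oversight"},
    },
}


def jurisdictions_covered(control: str) -> list[str]:
    """List every regime a single control's documentation serves."""
    return sorted(CONTROL_MAP[control]["satisfies"])


print(jurisdictions_covered("bias-measurement"))
```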

What the EU AI Act enforcement timeline means for the next 18 months

The enforcement timeline runs across five deadlines over five years. Prohibited AI practices and AI literacy took effect in February 2025. GPAI model obligations took effect in August 2025. The full high-risk regime arrives on August 2, 2026. Medical device AI follows August 2027. Legacy public-authority AI systems must comply by August 2030.

The Digital Omnibus proposal, still in ordinary legislative procedure as of March 2026, may push the deadline for high-risk systems to December 2027. PwC, IAPP, and multiple law firms advise building as if August 2026 holds. The harmonized standards that would simplify compliance are not ready yet. CEN/CENELEC standards are not expected until Q4 2026, and only prEN 18286, the first harmonized standard specifically built for the AI Act’s Article 17, has completed public enquiry. ISO 42001 certification provides a governance foundation but, on its own, does not create a presumption of conformity under the AI Act. Organizations that align with both ISO 42001 and prEN 18286 will be first in line to claim conformity when those standards are formally recognized.

The tooling to manage this at scale already exists. The AI governance platform market reached $309 million in 2025 and is projected to reach $4.8 billion by 2034 (Mordor Intelligence, 2025). Credo AI, named a Forrester Wave Leader in Q3 2025, reports enterprise customers achieving 10x speed improvements over manual compliance processes. ServiceNow launched an AI Control Tower at Knowledge 2025. Most enterprises have not yet integrated these platforms into their AI architecture, and those that wait until August will do so under enforcement pressure.

The regional bank from the opening illustrates what the full architecture looks like when it comes together. Documentation-as-code to prevent compliance drift. HOVL human oversight to maintain decision speed while satisfying Article 14. Crypto-shredded logging to satisfy Article 12 and GDPR simultaneously. A unified governance framework mapped to both EU high-risk requirements and US state obligations. None of these is a compliance overlay. They are engineering decisions that produce compliance evidence as a byproduct of building AI systems well.

At the operational level, the EU AI Act requires version control, decision logging, auditable oversight, and documentation that remains aligned with the systems it describes. Rigorous enterprise AI engineering has always demanded the same things. The regulation made them legally enforceable, with a penalty structure that makes getting them wrong far more expensive than getting them right.

Where to start this week

If you have not done the exposure mapping, start there. Which of your AI systems produce output that reaches EU persons? That single question determines everything that follows. If you have the mapping but have not applied the Article 6(3) profiling override, do that next. It will reclassify more systems than you expect. If you have passed both of those, the next highest-value action is to choose human oversight patterns for your high-risk systems before your engineering teams build further without them.

Five months is not a lot of time. It is enough if the decisions are engineering decisions made now, rather than legal decisions deferred until enforcement.

For deeper treatment of the architectural patterns referenced in this article, see Version Drift: The Hidden Compliance Time Bomb, Decision Velocity: The New North Star Metric, and The Architecture Gap: Why Enterprise AI Governance Fails.

Frequently Asked Questions

Does the EU AI Act apply to US companies?

Yes. Under Article 2(1)(c), the EU AI Act applies to any company whose AI system produces output used in the European Union, regardless of where the company is based, where the system is hosted, or where it was built. There is no targeting requirement, no data processing connection, and no intent test. If your AI output reaches an EU customer, employee, or counterparty, you are in scope. The IAPP confirmed in August 2025 that this extraterritorial reach is broader than GDPR.

My AI system is hosted in the US and governed under US frameworks. Am I still in scope?

Yes. Jurisdiction follows the output, not the infrastructure. A credit scoring system hosted in Virginia that scores EU counterparties is in scope. A system hosted in Frankfurt that scores only US customers may not be. The question is where the output is used, not where the system sits.

Does internal use of AI trigger the EU AI Act, or only external deployment?

Both. The European Commission’s Guidelines on Prohibited AI Practices confirm that “putting into service” covers internal development and deployment. A US insurer that runs an AI underwriting model internally and applies it to EU risk assessments meets the test, even if the model was never intended for external use.

Are fines calculated against EU subsidiary revenue or global parent revenue?

Global parent revenue. The ECJ confirmed in Case C-383/23 (February 2025) that fines are calculated against the global parent company’s worldwide annual turnover. Covington’s analysis confirmed this precedent extends to the AI Act. For a company with $100 billion in annual revenue, a 7% fine creates more than $7 billion in exposure.

Is AI credit scoring high-risk under the EU AI Act?

Yes. Credit scoring and creditworthiness assessment for natural persons is explicitly high-risk under Annex III, Category 5(b). The Harvard Data Science Review confirmed in Summer 2025 that most types of automated credit scoring and credit decisioning fall under this classification. The full high-risk compliance regime takes effect on August 2, 2026.

Is AI fraud detection high-risk under the EU AI Act?

No. Fraud detection is explicitly carved out. Recital 58 states that AI fraud detection should not be considered high-risk because it does not determine individual legal rights. Algorithmic trading is also excluded. A bank running both credit scoring and fraud detection through the same AI infrastructure faces two different compliance regimes for systems that share the same data pipelines.

Is insurance AI high-risk under the EU AI Act?

Only AI used to assess risk or set pricing for life and health insurance for natural persons qualifies as high-risk, under Annex III, Category 5(c). Property and casualty insurance AI is not explicitly high-risk. The final text narrowed the scope significantly from earlier drafts that had included all insurance underwriting, premium setting, and claims assessment.

What is the Article 6(3) profiling override?

Article 6(3) states that any AI system performing profiling of natural persons is automatically high-risk, regardless of its primary function. If your system profiles customers for cross-selling, risk stratification, or behavioral segmentation, it is high-risk. This override catches systems that most teams assume are low-risk and reclassifies a significant portion of the 40% of enterprise AI systems that sit in the ambiguous middle zone.

What EU AI Act obligations are already in force?

Two obligations are already enforceable as of February 2, 2025. First, prohibited AI practices (such as social scoring and manipulative AI) are banned. Second, Article 4 requires documented AI literacy training for everyone involved in operating or overseeing AI systems. A third obligation, transparency disclosures under Article 50 covering chatbot identification, AI-generated content watermarking, and deepfake disclosure, takes effect August 2, 2026.

What is the difference between HITL, HOTL, and HOVL?

These are three human oversight patterns under Article 14. HITL (human-in-the-loop) requires human approval before every decision, adding 30-40% latency, and fits irreversible high-stakes decisions like loan approvals. HOTL (human-on-the-loop) allows autonomous operation with real-time monitoring and intervention, adding 5-10% latency, and fits high-volume contexts like fraud detection. HOVL (human-over-the-loop) has humans define decision boundaries while AI operates within them, with minimal latency, and fits routine operations like insurance underwriting. The pattern must be chosen before the system is built because it determines latency architecture, monitoring infrastructure, and the user experience.

What is documentation-as-code and why does the EU AI Act require it?

Documentation-as-code means version-controlling compliance artifacts alongside model code and generating them from deployment specifications rather than maintaining them as separate documents. The EU AI Act does not use this term, but Articles 11 and 12 require technical documentation and logging that stays aligned with the actual system throughout its lifecycle. In practice, conformity documentation maintained in Word files or SharePoint folders diverges from system reality within twelve months. Documentation-as-code prevents this drift by updating compliance artifacts through the same merge-request process that gates code changes.

What is crypto-shredding and how does it solve the GDPR and AI Act logging conflict?

Article 12 of the AI Act requires comprehensive decision logging. GDPR requires the deletion of personal data when the processing purpose ends. SOC 2 and SOX require retention for their own audit windows. These three requirements conflict. Crypto-shredding resolves this by encrypting personal data in logs with separate keys. When GDPR requires erasure, you destroy the key. The log structure stays intact for AI Act and SOX compliance, but the personal data becomes irretrievable. The European Data Protection Board has endorsed the approach, though the ECJ has not ruled definitively on it.

Does ISO 42001 certification give presumption of conformity under the EU AI Act?

No. ISO 42001 provides a governance foundation and serves as a useful management system framework, but the EU AI Office indicated in May 2024 that it is not fully aligned with the final AI Act text and is not part of the EU harmonization process. It will not provide a presumption of conformity. The first harmonized standard specifically built for the AI Act is prEN 18286, which targets Article 17’s quality management system requirements. CEN/CENELEC harmonized standards are expected in Q4 2026.

What are the penalty tiers under the EU AI Act?

The highest tier covers prohibited AI practices, with fines up to EUR 35 million or 7% of worldwide annual turnover, whichever is higher. This tier has been enforceable since February 2025. The second tier covers provider and deployer obligation violations, with fines up to EUR 15 million or 3% of turnover. Fines are calculated against the global parent company’s worldwide turnover under ECJ precedent from Case C-383/23.

Can my insurance cover EU AI Act fines?

It depends on your policy and whether your insurer has excluded AI-related liabilities. The Financial Times reported in November 2025 that AIG, Great American, and WR Berkley are seeking regulatory approval to exclude AI-related liabilities from corporate policies. On the other side, QBE introduced the first major endorsement explicitly covering limited fines under the EU AI Act. WTW predicts that governance frameworks will become prerequisites for insurance coverage, following the trajectory cyber insurance took a decade ago. Enterprises without a documented AI governance framework may find themselves unable to obtain coverage at any price.


Sources and References

  1. Regulation (EU) 2024/1689 (EU AI Act), Official Journal of the European Union. https://artificialintelligenceact.eu/
  2. Article 6: Classification Rules for High-Risk AI Systems. https://artificialintelligenceact.eu/article/6/
  3. Article 11: Technical Documentation. https://artificialintelligenceact.eu/article/11/
  4. Article 12: Record-Keeping. https://artificialintelligenceact.eu/article/12/
  5. Article 14: Human Oversight. https://artificialintelligenceact.eu/article/14/
  6. Article 25: Responsibilities Along the AI Value Chain. https://artificialintelligenceact.eu/article/25/
  7. Annex III: High-Risk AI Systems Referred to in Article 6(2). https://artificialintelligenceact.eu/annex/3/
  8. High-Level Summary of the AI Act. https://artificialintelligenceact.eu/high-level-summary/
  9. European Commission, Digital Omnibus on AI Regulation Proposal, November 2025. https://digital-strategy.ec.europa.eu/en/library/digital-omnibus-ai-regulation-proposal
  10. European Commission, Simpler EU Digital Rules and New Digital Wallets to Save Billions for Businesses. https://ec.europa.eu/commission/presscorner/detail/en/ip_25_2718
  11. Standardisation of the AI Act, Shaping Europe’s Digital Future. https://digital-strategy.ec.europa.eu/en/policies/ai-act-standardisation
  12. Responsibilities of the European Commission (AI Office). https://artificialintelligenceact.eu/responsibilities-of-european-commission-ai-office/
  13. CJEU Case C-383/23, GDPR Fines to Be Determined by Reference to Global Turnover of Corporate Group, February 2025. https://www.globalprivacyblog.com/2025/02/gdpr-fines-to-be-determined-by-reference-to-global-turnover-of-corporate-group/
  14. Arnold & Porter, “Does Your Company Have EU AI Act Compliance Obligations as a General-Purpose AI Model Provider?,” August 2025. https://www.arnoldporter.com/en/perspectives/advisories/2025/08/does-your-company-have-eu-ai-act-compliance-obligations
  15. Arnold & Porter, “EU Digital Omnibus: What the Proposed Reforms Mean for Pharma and MedTech,” February 2026. https://www.arnoldporter.com/en/perspectives/advisories/2026/02/eu-digital-omnibus-what-the-proposed-reforms-mean-for-pharma-and-medtech
  16. European Banking Authority (EBA), “AI Act Implications for the EU Banking and Payments Sector,” November 2025. https://www.eba.europa.eu/sites/default/files/2025-11/d8b999ce-a1d9-4964-9606-971bbc2aaf89/AI%20Act%20implications%20for%20the%20EU%20banking%20sector.pdf
  17. appliedAI Institute, Navigating the EU AI Act (106 Enterprise AI Systems Study). https://www.appliedai.de/uploads/files/Whitepapers/Navigating-the-EU-AI-Act-WB.pdf
  18. Morrison Foerster, “EU Digital Omnibus on AI: What Is in It and What Is Not?,” December 2025. https://www.mofo.com/resources/insights/251201-eu-digital-omnibus
  19. CMS LawNow, “The First Draft AI Act Standard for Public Consultation: What prEN 18286 Signals,” December 2025. https://cms-lawnow.com/en/ealerts/2025/12/the-first-draft-ai-act-standard-for-public-consultation-what-pren-18286-quality-management-system-for-eu-ai-act-regulatory-purposes-signals-for
  20. Modulos Blog, “Your ISO 42001 Certification Won’t Make Your AI System Compliant,” 2025. https://modulos.ai/blog/-your-iso-42001-certification-won-t-make-your-ai-system-compliant/
  21. DLA Piper, “Latest Wave of Obligations under the EU AI Act Take Effect,” August 2025. https://www.dlapiper.com/insights/publications/2025/08/latest-wave-of-obligations-under-the-eu-ai-act-take-effect
  22. White & Case, “AI Watch: Global Regulatory Tracker, European Union.” https://www.whitecase.com/insight-our-thinking/ai-watch-global-regulatory-tracker-european-union
  23. White & Case, “Long Awaited EU AI Act Becomes Law After Publication in the EU’s Official Journal.” https://www.whitecase.com/insight-alert/long-awaited-eu-ai-act-becomes-law-after-publication-eus-official-journal
  24. Covington, “The Impact of the New Product Liability Directive on Insurance Coverage,” January 2026. https://www.cov.com/en/news-and-insights/insights/2026/01/the-impact-of-the-new-product-liability-directive-on-insurance-coverage
  25. Bird & Bird, “AI Liability in Light of the New 2024 PLD: Expanded Liability, Challenging Defences,” 2026. https://www.twobirds.com/en/insights/2026/france/ai-liability-in-light-of-the-new-2024-pld-expanded-liability-challenging-defences-and-new-evidentiar
  26. Taylor Wessing, “Fines under the AI Act: A Bottomless Pit?” https://www.taylorwessing.com/en/interface/2021/ai-act/fines-under-the-ai-act—a-bottomless-pit
  27. Holistic AI, “Penalties of the EU AI Act: The High Cost of Non-Compliance.” https://www.holisticai.com/blog/penalties-of-the-eu-ai-act
  28. Akin Gump, “Colorado Postpones Implementation of Colorado AI Act, SB 24-205.” https://www.akingump.com/en/insights/ai-law-and-regulation-tracker/colorado-postpones-implementation-of-colorado-ai-act-sb-24-205
  29. King & Spalding, “New State AI Laws are Effective on January 1, 2026, But a New Executive Order Signals Disruption.” https://www.kslaw.com/news-and-insights/new-state-ai-laws-are-effective-on-january-1-2026-but-a-new-executive-order-signals-disruption
  30. The White House, Executive Order 14365, “Ensuring a National Policy Framework for Artificial Intelligence,” December 2025. https://www.whitehouse.gov/presidential-actions/2025/12/eliminating-state-law-obstruction-of-national-artificial-intelligence-policy/
  31. Deloitte US, “Unpacking the EU AI Act: The Future of AI Governance.” https://www.deloitte.com/us/en/services/consulting/articles/eu-ai-act-ai-governance.html
  32. Legal Nodes, “EU AI Act 2026 Updates: Compliance Requirements and Business Risks.” https://www.legalnodes.com/article/eu-ai-act-2026-updates-compliance-requirements-and-business-risks
  33. HÄRTING Rechtsanwälte, “Provider or Deployer? Decoding the Key Roles in the AI Act.” https://haerting.de/en/insights/provider-or-deployer-decoding-the-key-roles-in-the-ai-act/
  34. Mayer Brown, “Legal Grounds for Challenging the Overreach of European Regulations on US-Based Companies,” November 2025. https://www.mayerbrown.com/en/insights/publications/2025/11/legal-grounds-for-challenging-the-overreach-of-european-regulations-on-us-based-companies
  35. MDPI, Research Study on AI Debiasing and Regulatory Capital Efficiency Across Four Pan-European Insurers, 2025. https://www.mdpi.com/2079-9292/14/24/4881
  36. EDPS, “TechDispatch #2/2025: Human Oversight of Automated Decision-Making.” https://www.edps.europa.eu/data-protection/our-work/publications/techdispatch/2025-09-23-techdispatch-22025-human-oversight-automated-making_en
  37. Credo AI, “The Six Levels of AI Maturity: Where Does Your Organization Rank?” https://www.credo.ai/blog/the-six-levels-of-ai-maturity-where-does-your-organization-rank
  38. New Generative AI Insurance Exclusions: What Businesses Need to Know in 2026. https://phl-firm.com/generative-ai-insurance-exclusions-2026/
  39. Digital Bricks, “The Change to the EU AI Act That No One Is Talking About.” https://www.digitalbricks.ai/blog-posts/the-change-to-the-eu-ai-act-that-no-one-is-talking-about
  40. Sidley Austin, “Advisor to the CJEU Confirms GDPR Fines for Subsidiary Infringements Should Reflect Group Turnover.” https://datamatters.sidley.com/2024/10/04/advisor-to-the-cjeu-confirms-gdpr-fines-for-subsidiary-infringements-should-reflect-group-turnover/
