Trust & Responsible AI · 18 min read · September 5, 2024

The EU AI Act: A Strategic Guide for AI Leaders

A strategic framework for navigating the EU AI Act built from real compliance engagements. Introduces the Compliance Surface Area Model across three dimensions: risk classification, model type, and supply chain position.

Most coverage of the EU AI Act reads like a regulatory summary. Here's the strategic framework I use with clients — built from real compliance engagements across three jurisdictions, not from reading the legislation once.

Ajay Pundhir · AI Strategist & Speaker

Key Takeaways

  • EU AI Act penalties exceed GDPR's — up to €35M or 7% of global turnover
  • Compliance obligation is the product of risk tier, model type, and supply chain position
  • Extraterritorial reach means it applies regardless of company headquarters
  • Prohibited-practice bans have been in force since February 2025
  • Most organisations misclassify their risk tier by ignoring GPAI model provisions

Three weeks before the EU AI Act's prohibited practices ban took effect in February 2025, I got a call from a Series C healthtech company in Berlin. They'd built a patient triage system using an LLM fine-tuned on clinical notes. Solid product. Strong retention. One problem: their emotion recognition module — used to assess patient distress levels from voice patterns during intake calls — was about to become illegal under Article 5.

They had six weeks of engineering time to either remove the module or shut down in the EU. 'We didn't think this applied to us,' the CTO told me. 'We're healthcare, not surveillance.' That distinction doesn't exist in the Act. I hear some version of this conversation every month now.

The Stakes Are Already Real

€35M: maximum fine for deploying prohibited AI practices, or 7% of global turnover, whichever is higher
Source: EU AI Act, Article 99. Penalties apply from February 2025 for prohibited practices.

The EU AI Act is not a policy paper. It's enforceable law — published in the Official Journal in July 2024, with its first prohibitions already in force as of February 2025. The maximum penalty for deploying a banned AI practice is €35 million or 7% of worldwide annual turnover, whichever is higher. For context: that penalty ceiling exceeds GDPR's. And unlike GDPR, which took years before serious enforcement began, the European AI Office was established specifically to enforce AI-specific obligations from day one.
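
To make the ceiling concrete, here's the Article 99 arithmetic as a two-line sketch. The function name and the example turnover figure are mine, for illustration:

```python
def max_penalty_eur(worldwide_annual_turnover_eur: float) -> float:
    """Article 99 ceiling for prohibited-practice violations:
    EUR 35M or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * worldwide_annual_turnover_eur)

# A company with EUR 2B in turnover faces a ceiling of EUR 140M, not EUR 35M.
print(f"{max_penalty_eur(2e9):,.0f}")  # 140,000,000
```

The "whichever is higher" clause is the point: for any enterprise above €500M in turnover, the 7% figure is the binding number, not the €35M headline.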

I need to say this directly: if you deploy AI systems that serve EU citizens — regardless of where your company is headquartered — this regulation applies to you. Extraterritorial reach. No exemptions for startups. No grace period for systems already in production. The compliance clock is running.

  • €35M: maximum fine for prohibited AI practices
  • 500M+: people protected under the Act
  • August 2026: full high-risk compliance deadline
  • 7%: global turnover penalty ceiling

The Compliance Surface Area Model

Here's the framework I use with clients to cut through the Act's 180 pages. I call it the Compliance Surface Area Model. Most organisations try to understand the EU AI Act by reading it section-by-section. That's like trying to understand a city by reading its building codes. You need a map.

The model has three dimensions. Every AI system you operate can be located in this space, and its position tells you exactly what compliance obligations apply:

  1. Dimension 1 — Risk Classification: Where does your system fall in the four-tier hierarchy? Unacceptable (banned), High-Risk (heavily regulated), Limited Risk (transparency obligations), or Minimal Risk (no new obligations). This determines the floor of your compliance requirements.
  2. Dimension 2 — Model Type: Does your system use a General-Purpose AI model? If so, the GPAI provisions layer additional obligations on top of your risk classification — regardless of tier. A minimal-risk chatbot powered by a GPAI model with systemic risk still triggers GPAI compliance.
  3. Dimension 3 — Supply Chain Position: Are you the provider (developed the AI), the deployer (uses someone else's AI in your product), or the importer/distributor? Each role has different obligations under the same risk tier. This is where most organisations get confused — and where the most compliance gaps hide.

Your compliance obligation is the product of these three dimensions — not any single one. A high-risk system using a GPAI model, deployed by a third party, has a fundamentally different compliance surface than a high-risk system using a narrow model that you built in-house. The Act treats them differently. Your compliance programme must too.
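
If it helps to see the model as data rather than prose, here's a minimal Python sketch of how I have clients encode an inventory against the three dimensions. The enums, field names, and the obligation lookup are my illustrative simplifications — a triage aid, not a legal determination:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright under Article 5
    HIGH = "high"                  # heavily regulated
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no new obligations

class ModelType(Enum):
    NARROW = "narrow"
    GPAI = "gpai"
    GPAI_SYSTEMIC = "gpai_systemic"  # above 1e25 FLOPs, or designated

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    IMPORTER = "importer"
    DISTRIBUTOR = "distributor"

@dataclass
class AISystem:
    name: str
    tier: RiskTier
    model: ModelType
    role: Role

def compliance_surface(s: AISystem) -> list[str]:
    """Indicative obligation areas for one system; not an exhaustive legal list."""
    if s.tier is RiskTier.UNACCEPTABLE:
        return ["PROHIBITED under Article 5: remediate immediately"]
    obligations: list[str] = []
    if s.tier is RiskTier.HIGH:
        obligations += ["risk management system", "technical documentation",
                        "human oversight", "conformity assessment"]
    if s.tier is RiskTier.LIMITED:
        obligations.append("transparency disclosures")
    # GPAI obligations layer on top of the risk tier rather than replacing it
    if s.model is not ModelType.NARROW:
        obligations.append("GPAI documentation (yours, or assured from your vendor)")
    if s.model is ModelType.GPAI_SYSTEMIC:
        obligations += ["systemic-risk evaluation and mitigation",
                        "serious-incident reporting"]
    if s.role is Role.DEPLOYER:
        obligations.append("deployer duties: use per instructions, monitor, log")
    return obligations

# e.g. a high-risk system built on a GPAI model that you deploy but did not build:
print(compliance_surface(AISystem("intake-triage", RiskTier.HIGH,
                                  ModelType.GPAI, Role.DEPLOYER)))
```

The value isn't the code — it's that encoding the inventory forces every system to get an explicit answer on all three dimensions, which is exactly where misclassification hides.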

The Four Risk Tiers: What They Actually Mean

Every guide you've read about the EU AI Act lists the four risk tiers. What they don't tell you is how to think about them strategically. The tiers aren't just a classification exercise — they're a strategic constraint that shapes what you can build, how you build it, and what it costs to maintain.

  1. Unacceptable
  2. High Risk
  3. Limited Risk
  4. Minimal Risk

Risk Tier Analysis

Strategic implications beyond the regulatory text

Banned (Unacceptable Risk)

Eight categories of AI are outright banned. The ones that catch my clients off guard: emotion recognition in workplaces (yes, even for 'employee wellbeing' tools), social scoring systems (even internal 'trust scores' or gamified performance metrics), and untargeted facial image scraping. The law doesn't care about your intent. If your system's technical architecture falls within these definitions, it's prohibited. The strategic implication: audit every AI system for unintentional overlap with prohibited categories. I've found prohibited practices hiding in HR analytics suites, customer engagement platforms, and — in the Berlin case — healthcare intake tools. These weren't designed as surveillance tools. They became them.

GPAI: The Regulatory Layer Most People Miss

The Act introduces a separate regulatory track for General-Purpose AI models — the foundation models that power everything from chatbots to code generation. This is the first legislation globally to regulate AI at the model level, not just the application level. The implications are significant and widely misunderstood.

[Chart: the GPAI model landscape, plotted by deployment breadth (x-axis) against training compute in FLOPs (y-axis). Quadrants: Narrow Models, Standard GPAI, Frontier Research, Systemic Risk. Examples range from domain models and Mistral Large through Llama 3 70B to Gemini Ultra and GPT-4-class systems.]

Standard GPAI Obligations

Every GPAI model, regardless of size, must meet three baseline requirements: (1) technical documentation describing training methodology and capabilities, (2) a published summary of training data — including how copyright compliance was handled, and (3) a policy for respecting EU copyright law. That third requirement is already generating litigation.

Systemic Risk: The 10²⁵ FLOPs Threshold

Models trained with more than 10²⁵ floating-point operations are automatically classified as posing 'systemic risk.' The Commission can also designate models below this threshold based on capability assessments. Systemic-risk models face additional obligations: comprehensive model evaluations, systemic risk assessment and mitigation, serious incident reporting to the European AI Office, and high-level cybersecurity protections.
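
The threshold test itself is trivial to encode — the hard part is estimating training compute. A sketch (the GPT-4-scale figure is a public estimate, not a disclosed number):

```python
SYSTEMIC_RISK_FLOPS = 1e25  # Article 51 presumption threshold

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if a model is presumed to pose systemic risk by training compute.
    Note: the Commission can also designate models *below* the threshold,
    so False here does not settle the classification."""
    return training_flops > SYSTEMIC_RISK_FLOPS

# Public estimates put GPT-4-class training runs around 2e25 FLOPs.
print(presumed_systemic_risk(2e25))  # True
```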

Here's what this means strategically: if you build on top of a systemic-risk GPAI model (GPT-4, Gemini Ultra, Claude), your compliance obligations don't disappear because someone else trained the model. The Act creates a shared responsibility chain. The model provider handles GPAI obligations. You handle deployment obligations. But you need contractual assurance that the provider is compliant — and you need to document that assurance for your own conformity assessment.

The GPAI provisions create a compliance supply chain. If your model provider isn't compliant, you inherit their regulatory exposure. This is the AI equivalent of a vendor risk management failure — and it's the gap I find in 70% of deployments I audit.
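
In practice I have clients track vendor assurance as a living checklist. A minimal sketch — the field names are my own shorthand, not terms defined by the Act:

```python
# Evidence collected per GPAI vendor; None means no documented assurance yet.
gpai_vendor_evidence = {
    "technical_documentation": None,     # link to provider's published docs
    "training_data_summary": None,       # link to published summary
    "copyright_policy": None,            # link to provider's policy
    "systemic_risk_status": None,        # provider's stated classification
    "compliance_warranty_clause": None,  # contract section reference
    "incident_notification_sla": None,   # e.g. "72 hours"
}

def open_items(evidence: dict) -> list[str]:
    """Checklist fields still lacking documented evidence."""
    return [field for field, value in evidence.items() if value is None]

print(open_items(gpai_vendor_evidence))  # all six, on day one
```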

The Compliance Readiness Gap

The IAPP's 2025 AI Governance Report found that fewer than 20% of organisations subject to the EU AI Act have completed risk classification of their AI systems. Fewer than 15% have implemented the documentation frameworks the Act requires. The gap between what the law demands and what organisations have built is massive — and it's closing fast.

The pattern is consistent: risk classification and documentation are where most organisations have made progress — these are familiar exercises from GDPR. But human oversight mechanisms, GPAI compliance, and incident reporting? Those are AI-specific requirements with no GDPR equivalent. They require new capabilities, new roles, and new infrastructure. If you've built your compliance programme around the Minimum Viable Governance framework, you have a head start — the MVG's three tiers map directly to the Act's risk classification.

Where the Compliance Burden Falls

One of the most common questions I get from CTOs: 'Which team owns AI Act compliance?' The answer — all of them — is unhelpful. Here's the actual distribution of compliance burden across a typical organisation:

              Risk Mgmt   Documentation   Oversight   Testing
Engineering       9             8             5          10
Legal             8             9             7           4
Product           7             6             9           7
HR                6             5             8           3
Marketing         4             3             6           5

(Scores 1-10 indicate relative compliance burden per function.)

The burden matrix reveals what the org chart doesn't: engineering carries the heaviest load (risk management systems and conformity testing), but legal owns documentation, and product owns human oversight design. HR is more exposed than most organisations expect — the Act's employment provisions affect recruitment AI, performance management tools, and workforce analytics. Marketing's exposure is lower but real: transparency obligations for chatbots, content generation disclosures, and deepfake labelling.

The coordination challenge is the real compliance risk. I've watched organisations where engineering built robust testing frameworks but legal hadn't created the documentation templates to capture the results. Or where product designed human oversight mechanisms that HR couldn't operationalise. The Act doesn't care which department dropped the ball. Non-compliance is non-compliance. For the governance structure that coordinates across these functions, see the Governance Playbook.

The Implementation Clock

The Act rolls out in phases. Understanding this timeline isn't academic — it's operational. Each phase triggers specific obligations, and the penalties for each phase are already enforceable.

  • February 2025: prohibited practices ban
  • August 2025: GPAI rules apply
  • August 2026: full high-risk enforcement
  • 2027+: embedded products deadline

Detailed Compliance Milestones

Each deadline is enforceable with penalties

LIVE
February 2, 2025 — Prohibitions

The ban on unacceptable-risk AI practices took effect. Eight categories of AI are now illegal in the EU: social scoring, manipulative AI, untargeted facial scraping, emotion recognition in workplaces and schools, biometric categorisation by sensitive attributes, real-time remote biometric ID in public spaces (with narrow exceptions), predictive policing of individuals, and AI exploiting vulnerable groups. If you're still running any of these: stop. Today.

IMMINENT
August 2, 2025 — GPAI Rules

Obligations for General-Purpose AI models become applicable. Every GPAI model must have technical documentation, a training data summary, and a copyright compliance policy. Systemic-risk models face additional evaluation, mitigation, incident reporting, and cybersecurity requirements. The European AI Office oversees enforcement directly — not national authorities.

DEADLINE
August 2, 2026 — Full Enforcement

The complete set of high-risk AI system requirements becomes enforceable. Risk management, data governance, documentation, conformity assessment, human oversight, accuracy standards, cybersecurity — all mandatory. National competent authorities in each member state handle enforcement. Regulatory sandboxes must be operational in every EU country.

EXTENDED
August 2, 2027 — Embedded Products

Extended deadline for high-risk AI systems that are safety components of products regulated under existing EU sectoral legislation (medical devices, vehicles, aviation, etc.). These systems get an additional year because they must also comply with sector-specific conformity procedures.

The Business Case: Compliance as Strategic Leverage

Let me reframe this for the CFOs in the room. The EU AI Act is not just a cost. It's a market-shaping event that creates winners and losers.

The asymmetry is stark. Downside risk from non-compliance ranges from €7.5M (supplying misleading information to authorities) to €35M (prohibited practices). The investment required for robust compliance — my estimate across a dozen engagements — averages €1-2M for a mid-size enterprise. The upside? Organisations with demonstrable AI Act compliance are already commanding trust premiums in enterprise sales cycles, particularly in financial services, healthcare, and public sector procurement.

Compliance isn't the cost of doing business in the EU. It's the price of admission to the world's most valuable regulated market — and the trust premium you build travels with you to every other jurisdiction.

Ajay Pundhir

This isn't speculation. The same dynamic played out with GDPR. Organisations that invested early in data protection infrastructure — Salesforce, Microsoft, SAP — used their compliance as a competitive differentiator that outlasted the regulation's novelty. The OECD AI Principles and the G7 Hiroshima AI Process are converging on the same standards. EU compliance today positions you for global compliance tomorrow.

The 90-Day Compliance Sprint

Here's the phased approach I use with clients to build EU AI Act compliance infrastructure in 90 days. It won't make you fully compliant — that's a longer journey. But it will close the highest-risk gaps and give you a defensible position if enforcement comes early.

90-Day Sprint Architecture

📋 Days 1-14 · Inventory: Map every AI system. Classify by risk tier, model type, and supply chain position.

🎯 Days 15-30 · Triage: Identify prohibited practices (immediate action) and high-risk systems (priority compliance).

🏗️ Days 31-60 · Build: Documentation frameworks, risk management systems, human oversight mechanisms.

🧪 Days 61-90 · Test: Conformity assessment dry runs. GPAI vendor audit. Incident response tabletop.

Phase 1: Inventory and Classification (Days 1-14)

Start with what you don't know. Every engagement I've run has uncovered AI systems the compliance team didn't know existed — shadow AI adopted by business units, third-party tools with embedded AI features, legacy systems with ML components nobody documented. Your first task is a complete inventory, classified across all three dimensions of the Compliance Surface Area Model.

  1. System inventory: Every AI system, including third-party tools with AI features. Don't forget HR platforms, customer service tools, and marketing automation — these are common sources of undocumented AI.
  2. Risk classification: Map each system to one of the four risk tiers. Be conservative — if a system could be high-risk under certain use cases, classify it as high-risk (see the sketch after this list).
  3. GPAI assessment: For every system using a foundation model, document which model, who provides it, and whether the provider has published their GPAI compliance documentation.
  4. Supply chain mapping: For each system, document whether you are the provider, deployer, importer, or distributor. This determines your specific obligations.
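
The be-conservative rule in step 2 can be made mechanical: enumerate the plausible use cases for a system, classify each, and take the worst. A sketch (the tier ordering is my convention):

```python
from enum import IntEnum

class Tier(IntEnum):
    MINIMAL = 0
    LIMITED = 1
    HIGH = 2
    UNACCEPTABLE = 3

def conservative_tier(plausible_use_case_tiers: list[Tier]) -> Tier:
    """Classify a system by its worst plausible use case."""
    return max(plausible_use_case_tiers)

# A CV-screening assistant might be limited-risk as a drafting aid but
# high-risk if it ranks candidates (an Annex III employment use case).
print(conservative_tier([Tier.LIMITED, Tier.HIGH]).name)  # HIGH
```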

Phase 2: Triage and Immediate Action (Days 15-30)

Prohibited practices first. If anything in your inventory falls under Article 5's eight banned categories, remediate immediately. Not next quarter. Now. The penalties are already enforceable.

Then prioritise high-risk systems by exposure: systems with the most users, the most sensitive data, and the most direct impact on individuals. These are your first conformity assessment candidates. For each, begin building the required risk management documentation. The AI Use Case Canvas is useful here — it gives you a structured evaluation of each system's risk-reward profile.

Phase 3: Build Compliance Infrastructure (Days 31-60)

  1. Risk management system: A continuous, iterative process — not a one-time assessment. The Act requires ongoing identification, analysis, estimation, and evaluation of risks. Build this as a living system with quarterly review cycles (a register sketch follows this list).
  2. Documentation framework: Technical documentation, instructions for use, conformity declarations, quality management system documentation. Standardise templates now — you'll need them for every high-risk system.
  3. Human oversight design: Meaningful human control, not rubber-stamp review. The Act requires that humans can understand the AI's outputs, can decide not to use the system, and can override or reverse outputs. Design these mechanisms at the product level.
  4. Data governance: Training, validation, and testing data must meet quality criteria — representativeness, accuracy, completeness. If you've built strong data governance under GDPR, extend it. If not, the GDPR compliance guide has the foundation.
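
For item 1, the simplest way to keep the risk management system "living" is to give every risk a score and a review clock. A minimal register sketch — the fields and the 5×5 scoring are my conventions, not a template mandated by the Act:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class RiskEntry:
    system: str
    hazard: str          # e.g. "biased triage recommendations"
    severity: int        # 1 (negligible) .. 5 (critical)
    likelihood: int      # 1 (rare) .. 5 (frequent)
    mitigation: str
    last_reviewed: date = field(default_factory=date.today)

    @property
    def risk_score(self) -> int:
        return self.severity * self.likelihood

    def next_review(self, cycle_days: int = 90) -> date:
        # Quarterly cycle, per the living-system requirement above
        return self.last_reviewed + timedelta(days=cycle_days)
```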

Phase 4: Test and Validate (Days 61-90)

Run conformity assessment dry runs on your highest-risk systems. Document the results even when they fail — especially when they fail. Audit your GPAI vendors against the Act's transparency requirements. Run an incident response tabletop specifically for AI-related incidents (model failures, biased outputs, data leakage through inference). The output of this phase: a compliance status report for each AI system, a remediation plan for gaps, and an incident response playbook tested against realistic scenarios.

Don't aim for perfection in 90 days. Aim for defensibility. If the European AI Office or a national authority audits you in August 2025, you want to demonstrate good faith effort, documented progress, and a credible plan to close remaining gaps. That's the difference between a warning and a €35M fine.

The Global Convergence

The EU AI Act doesn't exist in a vacuum. The OECD AI Principles, the G7 Hiroshima AI Process Code of Conduct, and the NIST AI Risk Management Framework are converging on the same core requirements: risk management, transparency, human oversight, and accountability. The EU is just the first to make them legally binding.

Regulatory Convergence

How global frameworks align with EU AI Act requirements

Requirement              | EU AI Act                | G7 Code              | NIST AI RMF
-------------------------|--------------------------|----------------------|-------------------
Risk classification      | Mandatory 4-tier         | Voluntary risk-based | Framework (Govern)
Transparency             | Legal requirement        | Principle 4          | Map function
Human oversight          | Mandatory for high-risk  | Principle 6          | Govern function
Incident reporting       | Mandatory                | Voluntary            | Respond function
GPAI/Foundation models   | Regulated                | Addressed            | Not specific
Enforcement              | €35M / 7% turnover       | No penalties         | No penalties

The convergence pattern is clear: voluntary frameworks today become mandatory requirements tomorrow. Organisations that build to the EU AI Act standard aren't over-investing — they're future-proofing. The G7 Code of Conduct's eleven principles map almost perfectly to the Act's requirements. NIST's four functions (Govern, Map, Measure, Manage) provide the operational scaffolding to implement them.

If you operate globally, the 5-Pillar AI Readiness Assessment gives you a structured way to evaluate readiness across jurisdictions. And if you need to handle the GDPR layer underneath the AI Act, the AI and GDPR Compliance Guide maps the intersection in detail.

From Regulation to Strategy

The Berlin healthtech company? They removed the emotion recognition module, rebuilt patient distress assessment using structured questionnaire inputs instead of voice analysis, and launched a compliance programme using the Compliance Surface Area Model. The product is better now — patients prefer explicit assessment over being analysed without knowing it. Revenue is up 23% since the change. Compliance forced a product decision that the market rewarded.

That's the pattern I see in every successful EU AI Act engagement. The regulation doesn't just constrain — it clarifies. It forces you to document what your AI actually does, how it makes decisions, and what happens when it fails. Organisations that treat this as a strategic exercise, not a legal checkbox, come out stronger.

Your action item: complete Phase 1 of the compliance sprint this week. Inventory your AI systems, classify them by risk tier, and identify anything that might fall under Article 5's prohibitions. If you find prohibited practices — or if you're not sure — that's the advisory conversation. I've helped a dozen organisations navigate exactly this assessment, and the difference between proactive compliance and reactive scrambling is the difference between a competitive edge and a crisis.

The healthtech CTO in Berlin told me something in our last call that I keep coming back to: 'The Act didn't slow us down. It showed us where we were building on assumptions instead of evidence.' I've found that to be true in every engagement. The EU AI Act is the most comprehensive AI regulation in the world. It's also, paradoxically, one of the most useful strategic tools available to any organisation serious about building AI that lasts.


Ajay Pundhir
Senior AI strategist helping leaders make AI real across four continents. Forbes Technology Council member, IEEE Senior Member.
