Binding deadline
August
2026
High-risk AI system obligations
EU AI Act · Proprietary Platform

EU AI Act
Compliance Engine
Structured obligation mapping
across the full regulatory architecture.

A jurisdiction-aware compliance platform built for legal and regulatory teams operating under the EU AI Act — from risk classification and obligation mapping to GPAI model requirements and national implementation variances.

Platform Overview

Compliance is not a
checklist.
It is an architecture problem.

The EU AI Act introduces a tiered regulatory framework that applies differently depending on system function, deployment context, sector, and whether an organisation is a provider, deployer, importer, or distributor. Generic compliance tooling is not built for this heterogeneity.

The EU AI Act Compliance Engine is a structured obligation-mapping platform designed to produce traceable, auditable compliance outputs — not checkbox confirmations. The platform translates the Act's regulatory architecture into jurisdiction-aware workflows that map obligations to organisational roles, system risk categories, and applicable national implementation measures.

Built with awareness of the Code of Practice for GPAI models, the European AI Office's enforcement framework, and the evolving guidance from national competent authorities across EU member states, the platform is designed for legal, compliance, and regulatory affairs teams that need structured outputs defensible under regulatory scrutiny.

Key compliance deadline
August 2026

Full compliance obligations apply to high-risk AI systems under Annex III. Earlier phase-in deadlines are already in force for prohibited practices (February 2025), AI literacy obligations, and general-purpose AI model providers (August 2025).

€35M
Max fine for
prohibited practices
8+
Distinct obligation
categories for GPAI
27
Member state
implementations
8
High-risk areas
under Annex III
Risk Classification Architecture

Four tiers. Asymmetric obligations.
Significant compliance surface area.

The EU AI Act structures compliance obligations across a risk-based taxonomy that determines what requirements apply, to whom, and on what timeline. The platform maps your AI systems against this taxonomy and produces structured obligation profiles for each system and each organisational role.

Prohibited

Unacceptable Risk

AI practices that pose unacceptable risks to fundamental rights, safety, and the rule of law are banned outright. In force as of February 2025.

Examples
  • Subliminal manipulation systems
  • Real-time remote biometric ID in public spaces
  • Social scoring by public authorities
  • Exploitation of vulnerable groups
High Risk

High-Risk Systems

AI systems embedded in critical infrastructure, employment, education, essential services, law enforcement, and administration of justice. Full compliance obligations apply from August 2026.

Examples
  • Recruitment & HR management
  • Credit scoring & insurance
  • Border control & migration
  • Administration of justice
Limited Risk

Limited-Risk Systems

Specific transparency obligations apply — primarily disclosure requirements when a person interacts with an AI system, views AI-generated content, or is subject to AI-assisted decision-making.

Examples
  • Chatbots & virtual assistants
  • Deepfake content systems
  • Emotion recognition tools
  • Biometric categorisation
Minimal Risk

Minimal-Risk Systems

Most AI applications — spam filters, AI-enabled video games, recommendation systems — fall here. No mandatory compliance requirements under the Act, but voluntary codes of conduct are encouraged.

Examples
  • AI-enabled content filters
  • Recommendation engines
  • Inventory management AI
  • General-purpose productivity tools
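The four-tier taxonomy above can be pictured as a simple classification structure. The sketch below is purely illustrative — the `RiskTier` enum, the keyword sets, and the `classify` helper are invented for this example, and real classification under the Act is a legal analysis of system function and deployment context, not a keyword lookup:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk"   # banned outright (Chapter II)
    HIGH = "high risk"                 # full compliance obligations
    LIMITED = "limited risk"           # transparency obligations
    MINIMAL = "minimal risk"           # voluntary codes of conduct

# Illustrative keyword sets drawn from the examples above; not exhaustive.
ANNEX_III_USE_CASES = {"recruitment", "credit scoring", "border control",
                       "administration of justice"}
TRANSPARENCY_TRIGGERS = {"chatbot", "deepfake", "emotion recognition",
                         "biometric categorisation"}

def classify(use_case: str, prohibited_practice: bool = False) -> RiskTier:
    """Toy tier assignment keyed on the use-case examples in this section."""
    if prohibited_practice:
        return RiskTier.PROHIBITED
    if use_case in ANNEX_III_USE_CASES:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_TRIGGERS:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("recruitment").value)   # high risk
print(classify("spam filter").value)   # minimal risk
```

The asymmetry the heading describes shows up in what each tier triggers: the same enum value can fan out into very different obligation sets depending on the organisation's role.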
Compliance Framework

The Act does not impose a single set of requirements. It creates layered, role-specific obligations that differ by system function, deployment context, and organisational position in the AI value chain.

The platform maps obligations to the correct organisational actor — provider, deployer, importer, distributor — and produces structured compliance outputs keyed to each applicable requirement.

Outputs are designed to be defensible under regulatory scrutiny: traceable reasoning, jurisdiction-aware obligation identification, and structured documentation aligned to the Act's technical documentation and conformity assessment requirements.

01 — Provider obligations
Risk Management & Conformity Assessment

Technical documentation, conformity assessment procedures, CE marking, and registration in the EU database for high-risk AI systems. The platform maps each obligation to the applicable annex and generates structured documentation frameworks.

02 — Data governance
Training Data & Bias Management

High-risk AI systems must operate on training, validation, and testing datasets that meet specific quality criteria. The platform produces data governance documentation aligned to Article 10 requirements.

03 — Transparency
Disclosure & Human Oversight

Instructions for use, transparency obligations toward deployers, and human oversight design requirements. Mapped to the correct obligation tier and role across Articles 13, 14, and 50.

04 — Post-market monitoring
Incident Reporting & Surveillance

Continuous monitoring obligations, serious incident reporting to national authorities, and market surveillance cooperation. The platform generates structured monitoring frameworks keyed to system risk classification.

05 — Deployer obligations
Fundamental Rights Impact Assessment

Deployers of high-risk AI systems under Annex III must conduct FRIAs before deployment. The platform structures FRIA workflows and documents the analysis in a format aligned to supervisory expectations.
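The five obligation areas above can be indexed by organisational role. A hypothetical structural sketch — the dictionary layout and `obligations_for` helper are invented for illustration; the article references follow the section above, plus Article 27, which establishes the FRIA requirement for deployers:

```python
# Hypothetical role -> obligation index; illustrative, not exhaustive.
OBLIGATIONS = {
    "provider": {
        "risk_management_conformity": ["technical documentation",
                                       "conformity assessment",
                                       "CE marking",
                                       "EU database registration"],
        "data_governance": ["Article 10"],
        "transparency_oversight": ["Article 13", "Article 14", "Article 50"],
        "post_market": ["monitoring", "serious incident reporting"],
    },
    "deployer": {
        "fria": ["Article 27"],  # FRIA before deploying Annex III systems
    },
}

def obligations_for(role: str) -> list[str]:
    """List the obligation categories applicable to a given actor."""
    return sorted(OBLIGATIONS.get(role, {}).keys())

print(obligations_for("provider"))
print(obligations_for("deployer"))
```

A real obligation map would also cover importers and distributors, and would key each entry to system risk classification and jurisdiction; the point here is only that role, not system alone, determines the applicable set.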

General-Purpose AI Models

GPAI model obligations introduce a distinct compliance architecture for frontier AI providers.

General-purpose AI models — and separately, GPAI models with systemic risk — are subject to a dedicated obligation regime under Chapter V of the Act. The systemic-risk thresholds, the Code of Practice, and the European AI Office's enforcement role create compliance requirements that do not map neatly onto the high-risk system framework.

The Code of Practice for GPAI models is the primary instrument through which providers can demonstrate compliance. Adherence is formally voluntary, but the statutory obligations the Code operationalises are not — in practice, it functions as the enforcement benchmark.

The platform maps GPAI obligations to the Code of Practice structure, tracking the iterative drafting process and translating the AI Office's evolving guidance into structured compliance workflows for model providers and downstream deployers.

Technical documentation
All GPAI model providers must prepare and maintain technical documentation covering training methodology, capabilities, and limitations before placing the model on the EU market.

Copyright compliance
Providers must implement policies to comply with EU copyright law — including the text and data mining exceptions under the DSM Directive — and must publish summaries of the training data used.

Downstream transparency
Information and documentation must be made available to downstream providers who integrate GPAI models into their own systems, enabling their compliance with applicable obligations.

Systemic risk: adversarial testing
GPAI models with systemic risk (trained on compute exceeding 10²⁵ FLOPs) must undergo model evaluations, adversarial testing, and incident reporting to the AI Office.

Systemic risk: cybersecurity
Adequate cybersecurity protection measures are mandatory for systemic-risk models, with obligations to report serious incidents and to cooperate with AI Office investigations.

Code of Practice
Adherence to the GPAI Code of Practice creates a rebuttable presumption of compliance. The platform tracks the Code's implementation and maps its provisions to underlying statutory obligations.
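The systemic-risk presumption reduces to a single compute comparison against the 10²⁵ FLOP threshold in Article 51. A minimal sketch — the function and constant names are invented for illustration:

```python
# Article 51: a GPAI model is presumed to have high-impact capabilities
# (and hence systemic risk) when cumulative training compute exceeds
# 10^25 floating-point operations.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(cumulative_training_flops: float) -> bool:
    """Illustrative threshold check. The presumption is rebuttable, and the
    Commission may also designate models on other criteria (Annex XIII)."""
    return cumulative_training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(2.1e25))  # True
print(presumed_systemic_risk(9.0e24))  # False
```

The threshold is deliberately simple to apply; the harder compliance question is estimating cumulative training compute in the first place, which the Act leaves to provider documentation.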
Compliance Timeline

The Act entered into force in August 2024. Obligations apply on a phased schedule.

The EU AI Act's implementation is structured across a multi-year phase-in timeline. Not all obligations apply on the same date. The platform tracks applicable deadlines by system type and organisational role.

August 2024
Entry into force

The EU AI Act entered into force across all EU member states. The phased application clock began, running from six months (prohibited practices) to 36 months (Annex I high-risk systems). Establishment of governance and enforcement bodies commenced.

In force
February 2025
Prohibited practices in force — AI literacy obligations apply

Chapter II prohibitions on unacceptable-risk AI practices became binding. All providers and deployers became subject to AI literacy obligations for staff who operate or oversee AI systems.

Imminent
August 2025
GPAI model obligations — Code of Practice

Full obligations for general-purpose AI model providers apply. The Code of Practice for GPAI models, developed under AI Office supervision, is the primary compliance instrument for providers.

Primary deadline
August 2026
High-risk AI systems — full compliance required

Full compliance obligations for high-risk AI systems under Annex III. Conformity assessments, technical documentation, registration in the EU AI database, and post-market monitoring all apply.

August 2027
Annex I high-risk systems (product safety legislation)

Extended compliance timeline for AI systems already subject to EU product safety legislation under Annex I — including machinery, medical devices, and vehicles — to allow alignment with existing conformity regimes.

Designed For

The platform is built for organisations operating across the EU AI Act's compliance perimeter.

The Act creates distinct obligations for providers, deployers, importers, and distributors. The platform is structured to address each organisational role — not to produce a single output for all actors simultaneously.

Engagements are accepted on a mandate basis. The platform is available under licensing arrangements for in-house legal and compliance teams, and as part of broader advisory mandates where the firm provides ongoing regulatory counsel.

Technology companies
AI system developers and deployers building products for EU markets — whether established in the EU or placing AI systems on the EU market from third countries. Obligation mapping for both provider and deployer roles.

GPAI model providers
Foundation model and large language model developers subject to GPAI obligations under Chapter V — including Code of Practice compliance, technical documentation, and systemic risk assessment where applicable.

Enterprise deployers
Organisations deploying third-party AI systems in high-risk contexts — including HR, credit, healthcare, and public services — who bear deployer obligations independent of the original system provider.

In-house legal & compliance
Legal, regulatory affairs, and compliance teams who need structured outputs for internal governance, board reporting, regulator correspondence, and audit-ready documentation.

Financial & insurance sector
Financial institutions and insurers deploying AI in credit scoring, underwriting, fraud detection, and customer assessment — sectors explicitly named under Annex III high-risk classification.
Platform Access

Early access and
licensing arrangements
are available now.

The EU AI Act Compliance Engine is available under licensing arrangements for legal and compliance teams, and as part of broader advisory mandates. To discuss access, explore a licensing arrangement, or enquire about the platform's applicability to a specific compliance challenge, contact the firm directly.

contact@andresizquierdo.com
Platform EU AI Act Compliance Engine
Firm Izquierdo Advisory — AI Governance & Intellectual Property
Location 4300 Nebraska Ave NW
Washington, DC 20016
Coverage European Union · United States · Latin America
The platform is one of three proprietary compliance systems developed by Izquierdo Advisory. The US Deepfake Navigation System and the Music Metadata Identification & Attribution Tool address adjacent regulatory domains. View all platforms →