A jurisdiction-aware compliance platform built for legal and regulatory teams operating under the EU AI Act — from risk classification and obligation mapping to GPAI model requirements and national implementation variances.
The EU AI Act introduces a tiered regulatory framework that applies differently depending on system function, deployment context, sector, and whether an organisation is a provider, deployer, importer, or distributor. Generic compliance tooling is not built for this heterogeneity.
The EU AI Act Compliance Engine is a structured obligation-mapping platform designed to produce traceable, auditable compliance outputs — not checkbox confirmations. The platform translates the Act's regulatory architecture into jurisdiction-aware workflows that map obligations to organisational roles, system risk categories, and applicable national implementation measures.
Built with awareness of the Code of Practice for GPAI models, the European AI Office's enforcement framework, and the evolving guidance from national competent authorities across EU member states, the platform is designed for legal, compliance, and regulatory affairs teams that need structured outputs defensible under regulatory scrutiny.
Full compliance obligations apply to high-risk AI systems under Annex III and to general-purpose AI model providers. Earlier phase-in deadlines are already in force for prohibited practices (February 2025) and AI literacy obligations.
The EU AI Act structures compliance obligations across a risk-based taxonomy that determines what requirements apply, to whom, and on what timeline. The platform maps your AI systems against this taxonomy and produces structured obligation profiles for each system and each organisational role.
AI practices that pose unacceptable risks to fundamental rights, safety, and the rule of law are banned outright. In force as of February 2025.
AI systems embedded in critical infrastructure, employment, education, essential services, law enforcement, and administration of justice. Full compliance obligations apply from August 2026.
Specific transparency obligations apply — primarily disclosure requirements when a person interacts with an AI system, views AI-generated content, or is subject to AI-assisted decision-making.
Most AI applications — spam filters, AI-enabled video games, recommendation systems — fall here. No mandatory compliance requirements under the Act, but voluntary codes of conduct are encouraged.
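The tiered taxonomy above can be pictured as a simple classification step. The sketch below is purely illustrative — the class names, fields, and rules are hypothetical, not the platform's actual model, and a real risk classification is a legal analysis rather than a lookup:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    PROHIBITED = "prohibited"      # Chapter II practices, banned outright
    HIGH_RISK = "high-risk"        # Annex I / Annex III systems
    LIMITED_RISK = "limited-risk"  # transparency obligations only
    MINIMAL_RISK = "minimal-risk"  # voluntary codes of conduct

@dataclass
class AISystem:
    name: str
    annex_iii_category: Optional[str]  # e.g. "employment", or None
    interacts_with_persons: bool

def classify(system: AISystem) -> RiskTier:
    """Toy tier assignment mirroring the taxonomy described above."""
    if system.annex_iii_category is not None:
        return RiskTier.HIGH_RISK
    if system.interacts_with_persons:
        return RiskTier.LIMITED_RISK
    return RiskTier.MINIMAL_RISK
```

A CV-screening tool (an Annex III employment use case) would land in the high-risk tier, while a spam filter falls through to minimal risk.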
The Act does not impose a single set of requirements. It creates layered, role-specific obligations that differ by system function, deployment context, and organisational position in the AI value chain.
The platform maps obligations to the correct organisational actor — provider, deployer, importer, distributor — and produces structured compliance outputs keyed to each applicable requirement.
Outputs are designed to be defensible under regulatory scrutiny: traceable reasoning, jurisdiction-aware obligation identification, and structured documentation aligned to the Act's technical-documentation and conformity-assessment requirements.
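The role-specific layering can be sketched as a mapping from organisational actor to obligation clusters. This is an illustrative, hypothetical data structure (not the platform's schema); the article numbers follow the final text of Regulation (EU) 2024/1689:

```python
# Hypothetical role-to-obligation map; article references per the final Act.
ROLE_OBLIGATIONS = {
    "provider": [
        "Art. 9 risk management",
        "Art. 10 data governance",
        "Art. 11 technical documentation",
        "Art. 43 conformity assessment",
    ],
    "deployer": [
        "Art. 26 deployer duties",
        "Art. 27 fundamental rights impact assessment (Annex III systems)",
    ],
    "importer": ["Art. 23 importer verification duties"],
    "distributor": ["Art. 24 distributor verification duties"],
}

def obligations_for(role: str) -> list:
    """Return the obligation cluster for one actor in the AI value chain."""
    if role not in ROLE_OBLIGATIONS:
        raise ValueError(f"unknown role: {role!r}")
    return ROLE_OBLIGATIONS[role]
```

The point of the structure is that one system can generate several distinct obligation profiles — one per role an organisation occupies.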
Technical documentation, conformity assessment procedures, CE marking, and registration in the EU database for AI. The platform maps each obligation to the applicable annex and generates structured documentation frameworks.
High-risk AI systems must operate on training, validation, and testing datasets that meet specific quality criteria. The platform produces data governance documentation aligned to Article 10 requirements.
Instructions for use, transparency obligations toward deployers, and human oversight design requirements. Mapped to the correct obligation tier and role across Articles 13, 14, and 50.
Continuous monitoring obligations, serious incident reporting to national authorities, and market surveillance cooperation. The platform generates structured monitoring frameworks keyed to system risk classification.
Deployers of high-risk AI systems under Annex III must conduct fundamental rights impact assessments (FRIAs) before deployment. The platform structures FRIA workflows and documents the analysis in a format aligned to supervisory expectations.
General-purpose AI models — and separately, GPAI models with systemic risk — are subject to a dedicated obligation regime under Chapter V of the Act. The thresholds, the Code of Practice, and the European AI Office's enforcement role create compliance requirements that do not map neatly to the high-risk system framework.
The Code of Practice for GPAI models is the primary instrument through which providers will demonstrate compliance. Adherence is formally voluntary, but providers that do not follow it must demonstrate compliance through alternative adequate means; in practice, the Code functions as the enforcement framework.
The platform maps GPAI obligations to the Code of Practice structure, tracking the iterative drafting process and translating the AI Office's evolving guidance into structured compliance workflows for model providers and downstream deployers.
The EU AI Act's implementation is structured across a multi-year phase-in timeline. Not all obligations apply on the same date. The platform tracks applicable deadlines by system type and organisational role.
The EU AI Act entered into force across all EU member states in August 2024, starting the 36-month clock to full implementation. Establishment of governance and enforcement bodies commenced.
Chapter II prohibitions on unacceptable-risk AI practices became binding, and all providers and deployers became subject to AI literacy obligations for staff who operate or oversee AI systems.
In force: full obligations for general-purpose AI model providers apply as of August 2025. The Code of Practice for GPAI models, developed under AI Office supervision, is the primary compliance instrument for providers.
Imminent: full compliance obligations for high-risk AI systems under Annexes I and III. Conformity assessments, technical documentation, registration in the EU AI database, and post-market monitoring all apply.
Extended deadline: compliance timeline for AI systems already subject to EU product safety legislation under Annex I — including machinery, medical devices, and vehicles — extended to allow alignment with existing conformity regimes.
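The phase-in schedule above lends itself to a simple deadline lookup. The sketch below is a hypothetical illustration (not the platform's tracker); the dates are the key application dates of Regulation (EU) 2024/1689:

```python
from datetime import date

# Key phase-in dates from the Act's implementation timeline.
PHASE_IN = {
    "prohibited_practices": date(2025, 2, 2),
    "gpai_provider_obligations": date(2025, 8, 2),
    "high_risk_annex_iii": date(2026, 8, 2),
    "high_risk_annex_i": date(2027, 8, 2),
}

def in_force_by(cutoff: date) -> list:
    """Obligation clusters whose application date falls on or before cutoff,
    in chronological order."""
    return [name for name, d in sorted(PHASE_IN.items(), key=lambda kv: kv[1])
            if d <= cutoff]
```

For example, a cutoff at the end of 2025 returns only the prohibited-practice ban and the GPAI provider obligations; the high-risk regimes follow in 2026 and 2027.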
The Act creates distinct obligations for providers, deployers, importers, and distributors. The platform is structured to address each organisational role — not to produce a single output for all actors simultaneously.
Engagements are accepted on a mandate basis. The platform is available under licensing arrangements for in-house legal and compliance teams, and as part of broader advisory mandates where the firm provides ongoing regulatory counsel.
To discuss access, explore a licensing arrangement, or enquire about the platform's applicability to a specific compliance challenge, contact us directly.
contact@andresizquierdo.com