What AI Can Do on These Platforms
Our AI systems are designed specifically for AI governance and intellectual property compliance. They can:
- Map your AI systems against EU AI Act risk classification criteria across all four tiers
- Generate structured obligation checklists for GPAI model providers, high-risk system operators, and deployers
- Identify applicable US state-level synthetic media statutes based on your jurisdiction, industry, and use case
- Produce music metadata and rights attribution documentation frameworks based on your catalogue and deployment context
- Summarise recent regulatory developments, legislative changes, and enforcement actions
- Generate draft compliance documentation, internal policy templates, and audit-readiness checklists
These outputs are designed to be substantive, current, and professionally structured. They draw on a curated regulatory knowledge base maintained by Izquierdo Advisory.
What AI Cannot Do — Critical Limitations
AI-generated outputs must not be relied upon, without qualified attorney review, for: Regulatory filings · Court submissions · Board certifications · Formal compliance declarations · Investor due diligence representations · Contract warranties regarding legal compliance
LLM systems have known limitations that are particularly consequential in legal and regulatory contexts:
- Hallucination: AI models can generate plausible-sounding but factually incorrect legal citations, statute references, or regulatory requirements
- Knowledge cutoff: Training data has a cutoff date. Regulatory changes, new enforcement decisions, or legislative updates after that date may not be reflected
- Jurisdiction gaps: Local implementation of framework regulations (such as EU member state transpositions) may vary in ways not captured by the model
- No professional judgment: AI cannot exercise the professional judgment, contextual interpretation, or strategic advice that characterises qualified legal counsel
- No privilege: AI-generated documents do not attract legal professional privilege. Communications with the platform are not confidential in the same way as communications with an attorney
- No accountability: AI outputs cannot be signed, certified, or submitted in place of attorney-certified work product
AI Output vs. Attorney Review — Comparison
| Capability | AI Platform Output | AI Platform + Attorney Review Service |
|---|---|---|
| Regulatory mapping & analysis | ✓ Included | ✓ Included + verified |
| Obligation checklist generation | ✓ Included | ✓ Reviewed & certified |
| Human attorney review | ✗ Not included | ✓ Qualified EU IP attorney |
| Legal professional privilege | ✗ Not applicable | ✓ May apply (jurisdiction-specific) |
| Suitable for regulatory submission | ✗ Not recommended | ✓ Yes, with attorney certification |
| Attorney certification / sign-off | ✗ Not included | ✓ Included |
| Jurisdiction-specific local law advice | ~ Indicative only | ✓ Confirmed by qualified counsel |
| Creates attorney-client relationship | ✗ No | ✓ Yes, for scope of review |
EU AI Act Compliance — Specific Notice
The EU AI Act (Regulation (EU) 2024/1689) imposes binding legal obligations on providers, operators, and deployers of AI systems. The Izquierdo Advisory EU AI Act Compliance Engine is itself an AI system used in a professional compliance context.
In accordance with the spirit of Article 50 of the EU AI Act (transparency obligations for AI systems interacting with natural persons), Izquierdo Advisory discloses that the outputs of its compliance platforms are AI-generated. Users are informed that they are interacting with an AI system and not a human legal professional.
Platform outputs used in preparing conformity assessments, technical documentation, or regulatory declarations required under the EU AI Act should be reviewed and certified by a qualified EU-admitted attorney before submission. The Attorney Review Service is designed for exactly this purpose.
How We Maintain Quality
Izquierdo Advisory takes the accuracy and reliability of AI outputs seriously. Our quality framework includes:
- Curated regulatory knowledge bases maintained by Andrés Izquierdo and updated on a rolling basis as regulations change
- Structured prompt architectures designed to minimise hallucination and constrain outputs to verified regulatory sources
- Output validation processes against primary regulatory texts before deployment
- Clear limitations and confidence indicators built into platform outputs
- Human attorney review available as an add-on for all platform outputs
These measures reduce — but do not eliminate — the risk of error. We remain committed to transparency about the nature and limitations of our technology.
Questions & Contact
If you have questions about the AI systems powering our platforms, the basis for any specific output, or how to engage the Attorney Review Service, please contact: info@andresizquierdo.com