
EU AI Act Self-Assessment for AI Agents

Is your AI agent fleet ready for August 2026? Answer each question below to assess your organization's readiness across identity controls, logging, human oversight, and incident response.

1. Agent Classification & Registration

Annex III
Identity Pillar

Do you know which of your AI agents qualify as high-risk under the EU AI Act?

The Act classifies agents by risk tier — high-risk agents face the strictest requirements.

Have you documented each agent's purpose, scope, and operational boundaries?

Clear documentation of what each agent does and where it operates is foundational to compliance.

Do you maintain a current inventory or registry of all production agents?

A living registry ensures no agent is deployed without oversight.

Is there a named human operator responsible for each agent?

The Act requires a natural or legal person accountable for each high-risk system.

2. Identity & Traceability

Article 13 — Transparency
Identity Pillar

Does every agent authenticate with its own unique cryptographic identity (not shared API keys)?

Shared keys make it impossible to attribute actions to a specific agent.

Are agent credentials scoped to only the specific capabilities each agent needs?

Least-privilege scoping limits blast radius if an agent is compromised.
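As a sketch of least-privilege scoping, a credential can carry an explicit capability allowlist that is checked before every action. The agent names and capability strings below are illustrative, not a real API:

```python
# Minimal sketch: capability-scoped agent credentials.
# Each agent is granted an explicit allowlist; anything
# outside that list is denied by default.

AGENT_SCOPES = {
    "invoice-bot": {"invoices:read", "invoices:create"},
    "support-bot": {"tickets:read", "tickets:reply"},
}

def is_authorized(agent_id: str, capability: str) -> bool:
    """Deny unless the capability is explicitly granted to this agent."""
    return capability in AGENT_SCOPES.get(agent_id, set())
```

An unknown agent or an unlisted capability falls through to a denial, which keeps the default posture restrictive.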

Is key rotation and lifecycle management automated for your agents?

Manual rotation leads to stale credentials and gaps in coverage.

Can you trace any agent request back to a specific registered agent identity?

End-to-end traceability is critical for transparency and accountability.
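To illustrate per-agent attribution, here is a minimal request-signing sketch using a per-agent secret: every request carries a signature that can only have been produced by one registered identity. In production you would typically use asymmetric keys or workload identity (e.g. mTLS certificates) rather than shared-secret HMAC, but the attribution idea is the same:

```python
import hashlib
import hmac

# Illustrative key store: one secret per registered agent identity.
AGENT_KEYS = {"agent-42": b"per-agent-secret"}

def sign_request(agent_id: str, payload: bytes) -> str:
    """Sign a request payload with the agent's own key."""
    return hmac.new(AGENT_KEYS[agent_id], payload, hashlib.sha256).hexdigest()

def verify_request(agent_id: str, payload: bytes, signature: str) -> bool:
    """Check that this exact payload was signed by this exact agent."""
    expected = sign_request(agent_id, payload)
    return hmac.compare_digest(expected, signature)
```

Because keys are never shared between agents, a valid signature pins the request to a single identity in the registry.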

3. Risk Management

Article 9
Policy Pillar

Have you conducted a risk assessment for each high-risk agent?

Article 9 mandates a documented risk management process for high-risk AI systems.

Are known limitations and failure modes documented for each agent?

Users and operators must be informed of what the system cannot do reliably.

Do your systems fail closed (deny on error) rather than fail open?

Fail-open defaults can allow uncontrolled behavior during outages.
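A fail-closed default can be enforced with a small wrapper that converts any policy-evaluation error into a denial. This is a minimal sketch, not a full policy engine:

```python
import functools

def fail_closed(check):
    """Wrap an authorization check so any error results in denial."""
    @functools.wraps(check)
    def wrapper(*args, **kwargs):
        try:
            return bool(check(*args, **kwargs))
        except Exception:
            return False  # deny on error; never allow by default
    return wrapper

@fail_closed
def policy_allows(agent_id, resource):
    # Simulated outage: the policy service is unreachable.
    raise TimeoutError("policy service unreachable")

@fail_closed
def policy_always_allows(agent_id, resource):
    return True
```

During the simulated outage, `policy_allows` returns a denial instead of raising, so downstream code cannot accidentally treat an error as permission.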

Are residual risks monitored and reviewed on a regular schedule?

Risk management is continuous — not a one-time checkbox exercise.

4. Technical Documentation & Logging

Article 12 — Record-Keeping
Compliance Pillar

Are all agent decisions logged automatically without manual intervention?

Automatic logging ensures no decision goes unrecorded.

Are your audit logs tamper-evident (using cryptographic hash chains or equivalent)?

Tamper-evident logs prove records haven't been altered after the fact.
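A hash chain is one common way to make logs tamper-evident: each record's hash covers the previous record's hash, so editing any earlier entry breaks every link after it. A minimal sketch:

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry

def append_entry(log: list, entry: dict) -> list:
    """Append an entry whose hash covers the previous entry's hash."""
    prev = log[-1]["hash"] if log else GENESIS
    payload = json.dumps(entry, sort_keys=True)
    digest = hashlib.sha256((prev + payload).encode()).hexdigest()
    log.append({"entry": entry, "prev": prev, "hash": digest})
    return log

def verify_chain(log: list) -> bool:
    """Recompute every link; any altered entry breaks the chain."""
    prev = GENESIS
    for rec in log:
        payload = json.dumps(rec["entry"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True
```

Production systems usually add signing and external anchoring on top of the chain, but the verification principle is the same.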

Is personally identifiable information (PII) automatically sanitized from audit records?

GDPR and the AI Act both require protecting personal data in logs.
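As a sketch of automatic sanitization, log messages can be passed through a redaction step before they are written. The patterns below cover only emails and phone numbers and are illustrative; production redaction needs much broader coverage:

```python
import re

# Illustrative PII patterns only; real deployments need a fuller set.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def sanitize(message: str) -> str:
    """Replace detected PII with stable placeholders before logging."""
    message = EMAIL.sub("[EMAIL]", message)
    message = PHONE.sub("[PHONE]", message)
    return message
```

Running every record through a step like this keeps raw identifiers out of the retained audit trail while preserving the shape of the event.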

Are logs retained for at least the minimum required period (the Act specifies at least six months for high-risk systems)?

Retention periods vary by jurisdiction and risk classification.

Can you export audit evidence for regulators or auditors on demand?

Regulators may request evidence at any time — export should be fast and reliable.

5. Human Oversight

Article 14
Policy Pillar

Do human-in-the-loop controls exist for high-risk agent decisions?

Human oversight is non-negotiable for high-risk systems under the Act.

Can you pause or deactivate any agent immediately if needed?

Kill-switch capability is an explicit requirement for high-risk AI.

Is policy enforcement handled at the infrastructure level so agents cannot bypass rules?

Agent-side enforcement can be circumvented; infrastructure-level controls are far harder to bypass.

Are override and intervention procedures documented and accessible?

Operators must know how to intervene — and the procedures must be tested.

6. Accuracy, Robustness & Cybersecurity

Article 15
Identity Pillar

Does your system validate inputs to prevent injection and manipulation attacks?

Prompt injection and data manipulation are top threats to agent systems.

Are rate limiting and circuit breakers in place to protect against abuse?

Without throttling, compromised agents can cause cascading failures.
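Throttling can be as simple as a per-agent token bucket: calls drain tokens, tokens refill over time, and calls are refused once the bucket is empty. A minimal sketch:

```python
import time

class TokenBucket:
    """Per-agent throttle: refuse calls once the bucket runs dry."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Pairing a bucket like this with a circuit breaker (stop calling a failing dependency entirely) contains a misbehaving agent before its failures cascade.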

Are security headers and encryption enforced on all agent communications?

All agent-to-agent and agent-to-service traffic should be encrypted in transit.

Do you conduct regular security audits of your agent infrastructure?

Periodic audits catch configuration drift and emerging vulnerabilities.

7. Post-Market Monitoring & Incident Response

Articles 72 & 73
Forensics Pillar

Can your system detect unusual or anomalous agent behavior automatically?

Anomaly detection is the first line of defense for post-deployment monitoring.
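One simple form of automatic detection is a statistical baseline: flag any metric (requests per minute, error rate, spend) that drifts several standard deviations from its history. A minimal z-score sketch, assuming you already collect per-agent metrics:

```python
import statistics

def is_anomalous(history: list, value: float, threshold: float = 3.0) -> bool:
    """Flag a value more than `threshold` std devs from the baseline."""
    if len(history) < 2:
        return False  # not enough baseline data to judge
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return value != mean  # flat baseline: any change is notable
    return abs(value - mean) / stdev > threshold
```

Real deployments layer richer detectors on top (seasonality, sequence models), but even a threshold like this catches gross deviations such as a runaway request loop.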

Does your incident response plan include AI agent-specific scenarios?

Generic IR plans often miss agent-specific failure modes like hallucination loops.

Can you forensically replay and reconstruct any agent incident after the fact?

Post-incident reconstruction is essential for root cause analysis and evidence.

Can you report serious incidents to authorities with tamper-proof evidence within 72 hours?

The Act requires timely incident reporting with supporting documentation.

Disclaimer: This self-assessment is provided as an informational resource to help organizations assess readiness for the EU AI Act. It does not constitute legal advice and is not a substitute for professional compliance counsel. The EU AI Act requirements may evolve as implementing measures and guidance are finalized. AI Identity helps meet these requirements but does not guarantee regulatory compliance.

Start securing your agents today

AI Identity automates agent identity, policy enforcement, compliance logging, and forensic capabilities — helping your team meet EU AI Act requirements with less manual effort.