European Union Artificial Intelligence Act
A comprehensive, risk-based regulatory framework for AI systems, establishing harmonized rules for the development, placing on the market, and use of AI systems in the EU.
Scope
Applies to all AI systems placed on the market or put into service in the EU, or whose output is used in the EU, regardless of where the provider is established.
Key Compliance Requirements
1. Risk Classification
AI systems must be classified into one of four risk categories:
- Unacceptable Risk: Prohibited outright (e.g., social scoring, manipulative techniques)
- High Risk: Permitted subject to strict requirements for critical applications (e.g., recruitment, credit scoring, critical infrastructure)
- Limited Risk: Transparency obligations (e.g., chatbots)
- Minimal Risk: No specific obligations beyond generally applicable law
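The four-tier scheme above can be sketched in code. This is an illustrative Python model only: the tier names and example use cases come from the list above, but the lookup-table classifier is a hypothetical simplification, not the Act's legal test, which turns on the Act's annexes and requires case-by-case assessment.

```python
from enum import Enum


class RiskTier(Enum):
    """The AI Act's four risk tiers, from most to least restrictive."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright (e.g., social scoring)
    HIGH = "high"                  # permitted under strict requirements
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"            # no AI-specific obligations


# Hypothetical mapping from example use cases to tiers; the real
# classification follows the Act's text, not a lookup table.
_EXAMPLE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "recruitment_screening": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}


def classify_use_case(use_case: str) -> RiskTier:
    """Return the risk tier for a known example use case.

    Unknown use cases raise rather than default to a tier, since a
    real determination would require legal assessment.
    """
    try:
        return _EXAMPLE_TIERS[use_case]
    except KeyError:
        raise ValueError(f"unknown use case {use_case!r}: requires assessment")
```

The point of raising on unknown inputs is that under a risk-based regime, "unclassified" is not a safe default.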
2. High-Risk System Requirements
Mandatory requirements for high-risk AI systems:
- Risk management system implementation
- Data governance and quality standards
- Technical documentation and record keeping
- Transparency and information provision to users
- Human oversight measures
- Accuracy, robustness, and cybersecurity
3. Transparency Obligations
Disclosure requirements for AI interactions:
- Users must be informed when interacting with AI
- AI-generated content must be clearly labeled
- Deep fakes must be disclosed
- Emotion recognition and biometric systems require notification
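In practice, the disclosure duties above often surface as labels attached to AI output at the point of delivery. A minimal sketch follows; the envelope schema and field names are invented for illustration, since the Act mandates the disclosure itself, not any particular wire format:

```python
import json
from datetime import datetime, timezone


def label_ai_output(content: str, *, is_deepfake: bool = False) -> str:
    """Wrap AI-generated content in a JSON envelope disclosing its origin.

    Hypothetical schema: the fields mirror the transparency duties
    (inform the user they face AI output; flag deep fakes explicitly).
    """
    envelope = {
        "content": content,
        "ai_generated": True,       # user must be informed this is AI output
        "deepfake": is_deepfake,    # deep fakes must be explicitly disclosed
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(envelope)
```

Attaching the label at generation time, rather than downstream, keeps the disclosure bound to the content wherever it travels.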
4. Governance and Compliance
Organizational and oversight requirements:
- Conformity assessment procedures
- CE marking for high-risk systems
- Post-market monitoring and incident reporting
- Quality management system
- Registration in EU database for high-risk systems
5. Documentation and Auditing
Record-keeping and audit-trail requirements:
- Technical documentation maintenance
- Automatically generated logs
- Instructions for use documentation
- Regular system evaluations
- Compliance monitoring reports
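The automatically generated logs listed above are typically kept as append-only structured records. A sketch of one way to do this (the JSON-lines layout and record schema are assumptions, not anything the Act prescribes):

```python
import json
from datetime import datetime, timezone
from pathlib import Path


def append_audit_record(log_path: Path, event: str, details: dict) -> None:
    """Append one timestamped record to a JSON-lines audit log.

    Each record is a self-contained JSON object on its own line, and
    the file is only ever appended to, which suits record-keeping and
    audit-trail duties. The schema is an illustrative assumption.
    """
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "details": details,
    }
    with log_path.open("a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")
```

An append-only format makes later compliance monitoring straightforward: evaluators can replay the log line by line without parsing a mutable store.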