Knowledge Base
EU AI Act and Recruitment
What the regulation means for employers using AI in recruitment.
Why Recruitment Is High-Risk
The EU AI Act (Regulation 2024/1689) classifies AI systems used in recruitment as high-risk under Article 6, Annex III, point 4(a). This covers any AI system used to screen, filter, evaluate, or rank candidates for employment.
High-risk classification triggers mandatory requirements for transparency, human oversight, data governance, accuracy, and robustness. Penalties are tiered under Article 99: engaging in a prohibited practice can draw fines of up to €35 million or 7% of global annual turnover, while deploying a non-compliant high-risk AI system can draw fines of up to €15 million or 3%.
Prohibited Practices in Recruitment (Article 5)
The following AI practices are banned outright in the EU, including in recruitment contexts:
- Emotion recognition in the workplace: AI systems that infer emotions from facial expressions, voice patterns, or body language during interviews or evaluations.
- Social scoring: AI systems that evaluate individuals based on social behavior or personality traits to determine employment suitability.
- Biometric categorization: AI systems that categorize individuals based on biometric data to infer race, political opinions, trade union membership, religious beliefs, or sexual orientation.
- Subliminal manipulation: AI techniques that manipulate a person's decision-making without their awareness.
Mandatory Requirements for High-Risk AI
Organizations using AI in recruitment must implement:
- Risk management system: Identify, analyze, and mitigate risks throughout the AI system's lifecycle.
- Data governance: Training and evaluation data must be relevant, representative, and free from errors. Processing must comply with GDPR.
- Technical documentation: Detailed documentation of the AI system's purpose, design, development, and deployment.
- Record-keeping: Automatic logging of system operations for traceability and auditability.
- Transparency: Clear information to deployers about how the AI system works, its capabilities, and its limitations.
- Human oversight: Mechanisms to allow human reviewers to understand, monitor, and override AI outputs.
- Accuracy and robustness: AI systems must perform consistently and be resilient to errors and adversarial inputs.
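The record-keeping requirement above can be sketched as an append-only audit log. The Act mandates automatic logging (Article 12) but prescribes no format, so the function name and record fields below are purely illustrative:

```python
import json
import time
import uuid


def log_ai_decision(log_file, candidate_id, model_output, model_version):
    """Append one immutable audit record per AI evaluation.

    Hypothetical schema: each record gets a unique event ID and a
    UTC timestamp so evaluations can be traced and audited later.
    """
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "candidate_id": candidate_id,
        "model_version": model_version,
        "output": model_output,
    }
    # One JSON object per line (JSONL): append-only, easy to replay.
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

An append-only JSONL file is only one possible backend; the point is that every evaluation produces a timestamped, machine-readable record without any action by the reviewer.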
Key Deadlines
| Date | Milestone |
|---|---|
| February 2, 2025 | Prohibited practices take effect |
| August 2, 2025 | General-purpose AI rules apply |
| August 2, 2026 | High-risk AI requirements take effect (including recruitment AI) |
How Omniteam Addresses the EU AI Act Requirements
- Reproducible AI outputs: Deterministic parameters (temperature 0, fixed seed) ensure the same input always produces the same evaluation.
- Full audit trail: Every evaluation, score change, and user action is logged with timestamps and metadata.
- Bias detection: Automated scanning across 8 protected categories (age, gender, ethnicity, religion, family status, appearance, disability, sexual orientation).
- Human-in-the-loop: AI evaluations are shown alongside human ratings, never replacing them. Full override capability.
- No prohibited practices: No emotion recognition, no personality profiling, no social scoring, no biometric categorization.
- Transparency: Every AI score links to a specific transcript quote with an explanation of the reasoning.
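The human-in-the-loop and transparency points above can be sketched as a small data model; the class and field names are illustrative, not Omniteam's actual implementation:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class AiEvaluation:
    score: int        # AI-assigned score
    quote: str        # transcript excerpt the score is grounded in
    rationale: str    # explanation of the reasoning


@dataclass
class Review:
    ai: AiEvaluation
    human_score: Optional[int] = None  # reviewer's own rating, if given

    def final_score(self) -> int:
        # Human-in-the-loop: the reviewer's rating, when present,
        # always overrides the AI output.
        if self.human_score is not None:
            return self.human_score
        return self.ai.score
```

The key design choice is that the AI output is stored next to, never in place of, the human judgment, and the final score is always resolvable to a human decision when one exists.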
References
- Regulation (EU) 2024/1689 of the European Parliament and of the Council — Artificial Intelligence Act.
- European Commission. (2024). "AI Act: High-Risk AI Systems — Annex III."