Knowledge Base

The EU AI Act and Recruitment

What the regulation means for employers using AI in recruitment.

Why recruitment is high-risk

The EU AI Act (Regulation 2024/1689) classifies AI systems used in recruitment as high-risk under Article 6, Annex III, point 4(a). This covers any AI system used to screen, filter, evaluate, or rank candidates for employment.

High-risk classification triggers mandatory requirements for transparency, human oversight, data governance, accuracy, and robustness. Under Article 99, engaging in prohibited AI practices carries fines of up to €35 million or 7% of global annual turnover; non-compliance with the high-risk requirements carries fines of up to €15 million or 3%.

Prohibited practices in recruitment (Article 5)

The following AI practices are banned outright in the EU, including in recruitment contexts:

  • Emotion recognition in the workplace: AI systems that infer emotions from facial expressions, voice patterns, or body language during interviews or evaluations.
  • Social scoring: AI systems that evaluate individuals based on social behavior or personality traits to determine employment suitability.
  • Biometric categorization: AI systems that categorize individuals based on biometric data to infer race, political opinions, trade union membership, religious beliefs, or sexual orientation.
  • Subliminal manipulation: AI techniques that manipulate a person's decision-making without their awareness.

Mandatory requirements for high-risk AI

Organizations using AI in recruitment must implement:

  1. Risk management system: Identify, analyze, and mitigate risks throughout the AI system's lifecycle.
  2. Data governance: Training and evaluation data must be relevant, representative, and free from errors. Processing must comply with GDPR.
  3. Technical documentation: Detailed documentation of the AI system's purpose, design, development, and deployment.
  4. Record-keeping: Automatic logging of system operations for traceability and auditability.
  5. Transparency: Clear information to deployers about how the AI system works, its capabilities, and its limitations.
  6. Human oversight: Mechanisms to allow human reviewers to understand, monitor, and override AI outputs.
  7. Accuracy and robustness: AI systems must perform consistently and be resilient to errors and adversarial inputs.
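
Requirement 4 (record-keeping) is the most directly code-shaped of these. A minimal sketch, assuming an append-only JSON Lines file as the log store (the function and field names here are illustrative, not a mandated schema):

```python
import json
from datetime import datetime, timezone

def log_event(path, event_type, payload):
    """Append one structured record to an append-only JSON Lines audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when it happened
        "event": event_type,                                  # e.g. "ai_evaluation"
        **payload,                                            # event-specific fields
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

def read_log(path):
    """Return all records oldest-first, e.g. for a traceability audit."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]
```

Appending rather than updating in place keeps the history immutable; a production system would typically also protect the log against tampering, for example with hash chaining or write-once storage.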

Key deadlines

  • February 2, 2025: Prohibited practices take effect
  • August 2, 2025: General-purpose AI rules apply
  • August 2, 2026: High-risk AI requirements take effect (including recruitment AI)

How Omniteam meets the EU AI Act's requirements

  • Reproducible AI outputs: Deterministic parameters (temperature 0, fixed seed) ensure the same input always produces the same evaluation.
  • Full audit trail: Every evaluation, score change, and user action is logged with timestamps and metadata.
  • Bias detection: Automated scanning across 8 protected categories (age, gender, ethnicity, religion, family status, appearance, disability, sexual orientation).
  • Human-in-the-loop: AI evaluations are shown alongside human ratings, never replacing them. Full override capability.
  • No prohibited practices: No emotion recognition, no personality profiling, no social scoring, no biometric categorization.
  • Transparency: Every AI score links to a specific transcript quote with an explanation of the reasoning.
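
The human-in-the-loop and transparency points above reduce to one data-model rule: an AI score is a suggestion attached to its evidence, and a human rating, when present, always wins. A minimal sketch (the class and field names are illustrative, not Omniteam's actual schema):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Evaluation:
    ai_score: int                      # the model's suggested rating
    evidence_quote: str                # transcript quote backing the score
    reasoning: str                     # explanation shown to the reviewer
    human_score: Optional[int] = None  # set only when a reviewer overrides

    @property
    def final_score(self) -> int:
        # Human oversight: a reviewer's rating always overrides the AI's.
        return self.human_score if self.human_score is not None else self.ai_score
```

Keeping the AI score, its evidence, and the human override together in one record also feeds the audit trail: nothing is overwritten when a reviewer disagrees.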

References

  • Regulation (EU) 2024/1689 of the European Parliament and of the Council — Artificial Intelligence Act.
  • European Commission. (2024). "AI Act: High-Risk AI Systems — Annex III."