
EU AI Act et Recrutement : Ce que les Employeurs Doivent Savoir

February 2026 · 11 min read

Key Takeaways

  • AI systems used in recruitment are classified as high-risk under the EU AI Act (Annex III, Category 4)
  • Emotion recognition in hiring is now illegal (Article 5, effective Feb 2025)
  • High-risk requirements (transparency, human oversight, bias monitoring) take full effect 2 August 2026
  • Structured hiring with documented competencies and auditable AI scoring is inherently aligned with compliance

On 1 August 2024, Regulation (EU) 2024/1689 — the EU Artificial Intelligence Act — entered into force, making the European Union the first major jurisdiction to enact comprehensive legislation governing artificial intelligence. For organizations that use AI in hiring, the implications are significant and immediate: AI systems used in recruitment and employment are explicitly classified as high-risk, subject to binding requirements on transparency, data governance, human oversight, and bias monitoring.

This article provides a practical guide to what the Act requires, what it prohibits, and what employers must do to comply — whether they operate in the EU, hire EU-based candidates, or simply want to future-proof their hiring processes against the regulatory direction that the rest of the world is likely to follow.

Why Recruitment AI Is Classified as High-Risk

Annex III of the AI Act lists specific use cases that qualify as high-risk. Category 4 covers “Employment, workers management and access to self-employment” and includes:

  • AI systems used to place job advertisements, screen or filter applications, and evaluate candidates in recruitment processes
  • AI systems used for making decisions affecting terms of work-related relationships, including promotion, termination, and task allocation
  • AI systems used for monitoring and evaluating the performance and behavior of workers

The rationale is straightforward: hiring decisions have a profound impact on individuals’ livelihoods and life trajectories. Errors — whether caused by bias, opacity, or poor data governance — produce harms that are difficult to reverse and disproportionately affect vulnerable populations.

Article 5: Prohibited Practices in Recruitment

Before addressing the high-risk requirements, employers should understand what the Act prohibits entirely. Article 5 bans AI practices considered an “unacceptable risk,” several of which are directly relevant to hiring:

  • Social scoring (Article 5(1)(c)) — AI systems that evaluate individuals based on social behavior or personality characteristics in ways that lead to detrimental treatment unrelated to the context. In recruitment, this prohibits using AI to infer personality traits from social media activity, writing style, or non-work behavior and using those inferences to filter candidates.
  • Emotion recognition in the workplace (Article 5(1)(f)) — AI systems that infer emotions of employees or candidates in workplace contexts are prohibited, except for medical or safety purposes. This means that AI tools claiming to assess “enthusiasm,” “confidence,” or “cultural fit” through facial expression analysis, voice tone analysis, or body language scoring are illegal under the EU AI Act.
  • Biometric categorization by protected characteristics (Article 5(1)(g)) — AI systems that categorize individuals by race, political opinion, trade union membership, religious beliefs, sex life, or sexual orientation are prohibited.

Already in Effect

These prohibitions took effect on 2 February 2025. Organizations still using emotion-detection or personality-inference tools in hiring should discontinue them immediately.

Requirements for High-Risk Recruitment AI

For AI systems that are not prohibited but fall into the high-risk category (which includes virtually all AI-assisted hiring tools), Articles 8 through 15 (Chapter III, Section 2 of the Act) impose a set of mandatory requirements. Here is what they mean in practice:

1. Risk Management System (Article 9)

Organizations must establish, implement, and maintain a risk management system throughout the AI system’s lifecycle. This includes:

  • Identifying and analyzing known and reasonably foreseeable risks
  • Estimating risks based on the intended purpose and conditions of reasonably foreseeable misuse
  • Implementing risk mitigation measures and testing their effectiveness

For hiring AI, this means documenting what biases the system might produce, how they are detected, and what corrective measures are in place.
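
As a rough illustration of what that documentation can look like in practice, here is a minimal risk-register sketch. The data model, field names, and sample entry are assumptions made for illustration, not anything Article 9 prescribes:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical risk-register entry for an AI screening tool. Field names are
# illustrative; Article 9 prescribes the substance of risk management, not a schema.
@dataclass
class RiskEntry:
    risk_id: str
    description: str             # known or reasonably foreseeable risk
    affected_groups: list[str]   # who could be disadvantaged
    likelihood: str              # e.g. "low" / "medium" / "high"
    severity: str
    mitigation: str              # corrective measure in place
    effectiveness_test: str      # how the mitigation is verified
    last_reviewed: date

register = [
    RiskEntry(
        risk_id="R-001",
        description="Screening model may rank candidates with employment gaps lower",
        affected_groups=["caregivers", "candidates with long-term illness"],
        likelihood="medium",
        severity="high",
        mitigation="Gap-duration features removed from model inputs",
        effectiveness_test="Quarterly comparison of shortlist rates before and after the change",
        last_reviewed=date(2026, 1, 15),
    ),
]
```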

2. Data Governance (Article 10)

Training, validation, and testing datasets must meet specific quality criteria:

  • Data must be relevant, representative, and as free of errors as possible
  • Datasets must consider the specific geographical, contextual, behavioral, or functional setting in which the AI will be used
  • Where processing of special categories of personal data (Article 9(1) GDPR) is strictly necessary for bias detection and correction, it may be permitted under specific safeguards

In practice, this means that AI hiring tools must be trained on data that reflects the actual candidate population, and that organizations must be able to demonstrate that their AI does not systematically disadvantage protected groups.
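
One way to support that demonstration, sketched below under the assumption that screening outcomes are logged per group, is to compare selection rates across groups and flag large disparities. The 0.8 threshold is borrowed from the US "four-fifths rule" purely as an illustrative trigger; the AI Act does not mandate any particular metric:

```python
from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group_label, advanced_to_next_stage) pairs from one screening stage."""
    totals, advanced = Counter(), Counter()
    for group, passed in outcomes:
        totals[group] += 1
        if passed:
            advanced[group] += 1
    return {group: advanced[group] / totals[group] for group in totals}

def adverse_impact_flags(rates: dict[str, float], threshold: float = 0.8) -> dict[str, float]:
    """Flag groups whose selection rate is below `threshold` times the highest group's rate."""
    best = max(rates.values())
    return {group: rate / best for group, rate in rates.items()
            if best and rate / best < threshold}

rates = selection_rates([
    ("group_a", True), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", False),
])
print(adverse_impact_flags(rates))   # {'group_a': 0.5} -> flagged for review
```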

3. Technical Documentation (Article 11)

Before a high-risk AI system is placed on the market or put into service, technical documentation must be drawn up and kept up to date. This documentation must demonstrate compliance with all requirements and provide authorities and downstream users with the information needed to assess compliance.

4. Transparency and Information to Users (Article 13)

High-risk AI systems must be designed and developed to ensure that their operation is sufficiently transparent to enable deployers to interpret and use the output appropriately. Deployers (in this case, employers and hiring managers) must receive clear information about:

  • The capabilities and limitations of the AI system
  • The degree of accuracy, robustness, and cybersecurity the system achieves
  • Any known or foreseeable circumstances that may lead to risks
  • The human oversight measures needed

For hiring, this means that AI scoring must be explainable. An AI that produces a candidate score without any indication of how it was derived fails this requirement.
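
A minimal sketch of what an evidence-linked score can look like; the record structure and field names are assumptions for illustration, not a format required by Article 13:

```python
# Hypothetical evidence-linked evaluation record. The structure is illustrative;
# Article 13 requires interpretable output, not this particular format.
evaluation = {
    "candidate_id": "c-1042",
    "competencies": [
        {"name": "Stakeholder communication", "score": 4, "weight": 0.5,
         "evidence": "Described de-escalating conflicting requirements between two teams (interview Q3)."},
        {"name": "Data analysis", "score": 3, "weight": 0.5,
         "evidence": "Walked through a cohort-retention analysis and named its limitations (interview Q5)."},
    ],
}

# The overall score is derived transparently from the documented parts, so a
# reviewer can see exactly how it was produced.
overall = sum(c["score"] * c["weight"] for c in evaluation["competencies"])
print(f"Overall: {overall:.1f}")  # Overall: 3.5
```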

5. Human Oversight (Article 14)

This is one of the most consequential requirements for recruitment AI. High-risk systems must be designed to allow effective oversight by natural persons, including:

  • The ability to fully understand the capabilities and limitations of the AI system
  • The ability to correctly interpret the AI system’s output
  • The ability to decide not to use the AI system’s output or to override it
  • The ability to intervene or interrupt the system’s operation

In hiring terms: a human must always be able to override an AI recommendation, and must have sufficient information to do so meaningfully. “Human in the loop” cannot be a formality; it must be substantive.
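
The sketch below shows one way to make the override substantive, assuming a workflow in which the AI output is recorded but only a named human reviewer can set the final decision, and any departure from the recommendation requires a written justification. The structure and names are illustrative, not prescribed by Article 14:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScreeningRecommendation:
    candidate_id: str
    ai_recommendation: str   # e.g. "advance" or "reject"
    ai_rationale: str        # the information the reviewer needs to interpret it

@dataclass
class HumanDecision:
    reviewer: str
    decision: str            # the only field that drives the hiring outcome
    overrode_ai: bool
    justification: Optional[str] = None

def decide(rec: ScreeningRecommendation, reviewer: str, decision: str,
           justification: Optional[str] = None) -> HumanDecision:
    overrode = decision != rec.ai_recommendation
    if overrode and not justification:
        raise ValueError("Departing from the AI recommendation requires a written justification.")
    return HumanDecision(reviewer, decision, overrode, justification)

rec = ScreeningRecommendation("c-1042", "reject", "Low keyword match on required skills")
final = decide(rec, reviewer="j.doe", decision="advance",
               justification="Equivalent experience in portfolio; keyword match was misleading.")
```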

6. Accuracy, Robustness, and Cybersecurity (Article 15)

High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity. For hiring AI, this includes resilience against attempts to manipulate inputs (e.g., resume keyword stuffing designed to game AI screening) and protection of candidate data.
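
As a purely illustrative heuristic, not a technique named in the Act, a screening pipeline might flag documents whose text is dominated by a handful of repeated terms, one crude signal of keyword stuffing, and route them to a human reviewer rather than scoring them automatically:

```python
import re
from collections import Counter

def repeated_term_share(text: str, top_n: int = 5) -> float:
    """Share of the document taken up by its top_n most repeated terms.
    A crude, purely illustrative signal of possible keyword stuffing."""
    words = re.findall(r"[a-z]{3,}", text.lower())
    if not words:
        return 0.0
    counts = Counter(words)
    return sum(count for _, count in counts.most_common(top_n)) / len(words)

# A resume in which five terms make up most of the text scores close to 1.0.
print(repeated_term_share("python python python sql sql cloud cloud cloud python led a team"))
```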

Candidate Rights Under the AI Act and GDPR

The AI Act operates alongside the General Data Protection Regulation (GDPR), which already provides candidates with significant rights:

  • Right to information about automated decision-making (GDPR Articles 13–15 and 22) — Candidates subject to automated decision-making have the right to obtain meaningful information about the logic involved.
  • Right to contest (GDPR Article 22) — Candidates can challenge automated decisions and request human review.
  • Right to non-discrimination — Both the AI Act and GDPR prohibit discriminatory automated processing.

The AI Act adds a new layer: Article 86 guarantees a right to explanation for individual decisions made by high-risk AI systems that produce legal effects or similarly significant effects on individuals. A hiring rejection based on AI-assisted screening unambiguously falls into this category.
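
Below is a sketch, reusing the same assumed evidence-linked record structure as the scoring example above, of how a stored evaluation could be rendered as the kind of explanation a rejected candidate is entitled to request. Article 86 requires a clear and meaningful explanation of the AI system's role in the decision; the record format and output here are assumptions:

```python
def explain_decision(evaluation: dict, outcome: str) -> str:
    """Render a stored, evidence-linked evaluation as a candidate-facing explanation.
    The record format is an assumption, not something Article 86 prescribes."""
    lines = [f"Outcome: {outcome}", "Competencies assessed:"]
    for c in evaluation["competencies"]:
        lines.append(
            f"- {c['name']}: scored {c['score']} (weight {c['weight']}). Evidence: {c['evidence']}"
        )
    return "\n".join(lines)

sample = {"competencies": [
    {"name": "Stakeholder communication", "score": 4, "weight": 0.5,
     "evidence": "Described de-escalating conflicting requirements between two teams."},
    {"name": "Data analysis", "score": 3, "weight": 0.5,
     "evidence": "Walked through a cohort-retention analysis and named its limitations."},
]}
print(explain_decision(sample, outcome="Not advanced to the final round"))
```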

Timeline: When Must Employers Comply?

Date               Milestone
1 August 2024      AI Act enters into force
2 February 2025    Prohibited practices (Article 5) take effect: emotion detection and social scoring banned
2 August 2025      Requirements for general-purpose AI models take effect
2 August 2026      High-risk AI system requirements take full effect, including all recruitment AI

Organizations have until 2 August 2026 to bring their recruitment AI into compliance with the high-risk requirements. Given the scope of the obligations — documentation, risk management, bias testing, human oversight mechanisms — starting now is not premature.

What Employers Should Do Now

  1. Audit your current AI tools. Identify every AI system used in your hiring process: resume screening, candidate scoring, chatbot pre-screening, interview evaluation, video analysis. Determine which fall into the high-risk category (most will).
  2. Eliminate prohibited practices. If any tool infers emotions, personality traits, or protected characteristics, discontinue it immediately. These practices are already illegal.
  3. Demand transparency from vendors. Ask your AI hiring vendors for technical documentation, bias audit results, and compliance roadmaps. If they cannot provide these, they are unlikely to be compliant by August 2026.
  4. Implement human oversight. Ensure that every AI-generated recommendation in your hiring process can be reviewed, overridden, and explained by a human decision-maker. Automate structure, not decisions.
  5. Document everything. Build an audit trail: what AI was used, what data it was trained on, what decisions it influenced, and what human review was applied. This documentation is not optional — it is a legal requirement (a minimal audit-record sketch follows this list).
  6. Adopt structured hiring. Structured hiring with documented competencies, standardized rubrics, and evidence-linked scoring is inherently aligned with the AI Act’s requirements. The Act does not prohibit AI in hiring — it requires that AI in hiring be transparent, fair, and auditable. Structured processes are the foundation for meeting those requirements.
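
A minimal sketch of such an audit record, assuming one JSON entry is appended to a log per AI-influenced step; the field names are illustrative, not mandated by the Act:

```python
import json
from datetime import datetime, timezone

# Hypothetical audit-trail entry: one record per AI-influenced step in the hiring
# process. Field names are illustrative, not mandated by the Act.
def audit_record(candidate_id: str, tool: str, tool_version: str,
                 ai_output: dict, human_reviewer: str, human_decision: str) -> str:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "candidate_id": candidate_id,
        "ai_tool": tool,
        "ai_tool_version": tool_version,
        "ai_output": ai_output,            # what the system recommended and why
        "human_reviewer": human_reviewer,  # who reviewed the recommendation
        "human_decision": human_decision,  # what was actually decided
    }
    return json.dumps(record)              # append to an immutable log store

print(audit_record("c-1042", "cv-screening-model", "2.3.1",
                   {"recommendation": "advance", "rationale": "Meets all must-have criteria"},
                   human_reviewer="j.doe", human_decision="advance"))
```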

How Structured Hiring Aligns with the EU AI Act

Why Structured Hiring Supports Compliance

Organizations that implement structured, evidence-based hiring are well-positioned for compliance because the methodology itself addresses the Act’s core requirements:

  • Transparency — Every evaluation is based on documented competencies with clear behavioral indicators. AI scoring is evidence-linked, not opaque.
  • Human oversight — Human interviewers conduct the evaluation; AI provides structure, consistency, and documentation. The human is substantively in the loop.
  • Bias monitoring — Standardized evaluation criteria reduce the influence of irrelevant demographic factors. Structured frameworks reduce both bias and noise.
  • Auditability — Every score, every piece of evidence, and every decision point is documented and reviewable.
  • Candidate rights — Candidates can receive a clear explanation of how they were evaluated — which competencies were assessed, what evidence was cited, and how scores were aggregated.

The Bottom Line

The EU AI Act does not prohibit AI in recruitment. It requires that AI in recruitment be transparent, fair, documented, and subject to meaningful human oversight. Organizations that have relied on opaque AI tools — or worse, on unstructured processes with no documentation at all — face the most significant compliance burden.

The organizations best prepared are those that have already adopted structured, evidence-based hiring: clear competencies, standardized evaluation, documented evidence, and human decision-makers supported (not replaced) by AI. The regulatory trajectory is unmistakable — and it aligns precisely with what the research has shown works best.

References

  • European Parliament and Council of the European Union. (2024). Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Official Journal of the European Union, L series.
  • European Parliament and Council of the European Union. (2016). Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data (General Data Protection Regulation). Official Journal of the European Union, L 119.
  • European Commission. (2021). Proposal for a Regulation laying down harmonised rules on artificial intelligence. COM(2021) 206 final.
  • Veale, M., & Borgesius, F. Z. (2021). Demystifying the Draft EU Artificial Intelligence Act. Computer Law & Security Review, 41, 105573.
  • Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2020). Mitigating bias in algorithmic hiring. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 469–481.