Knowledge base
How AI bias detection in hiring works
Automated monitoring for discrimination in AI-driven recruitment.
What is bias detection in AI hiring?
Bias detection in AI hiring refers to automated systems that monitor AI evaluation outputs for patterns of discrimination against protected groups. The goal is to identify when an AI system's scoring may be influenced by characteristics that are legally irrelevant to job performance.
This is distinct from bias prevention (designing systems that minimize bias from the start) and bias correction (adjusting outputs after bias is found). Detection focuses on monitoring and flagging — the human reviewer decides what action to take.
The 8 protected categories
Comprehensive bias detection should cover all categories protected under EU anti-discrimination law and the EU AI Act:
- Age — References to candidate age, generation, or years since graduation that could influence scoring.
- Gender — Language or patterns that favor or penalize candidates based on gender identity or expression.
- Ethnicity — Indicators of racial or ethnic bias in evaluation language or score distribution.
- Religion — References to religious practices, holidays, or cultural markers that should not affect evaluation.
- Family status — Penalization based on parental status, marital status, pregnancy, or caregiving responsibilities.
- Appearance — Weight, height, attractiveness, or dress-related factors that are irrelevant to job performance.
- Disability — Bias against candidates with disclosed or perceived physical or cognitive disabilities.
- Sexual orientation — Discrimination based on sexual orientation or related personal information.
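The eight categories above can be modeled as a closed enumeration so that every detection flag is tied to exactly one category. This is an illustrative sketch, not a prescribed schema; the names are assumptions for the example.

```python
from enum import Enum

class ProtectedCategory(Enum):
    """The 8 protected categories a detection system should cover."""
    AGE = "age"
    GENDER = "gender"
    ETHNICITY = "ethnicity"
    RELIGION = "religion"
    FAMILY_STATUS = "family status"
    APPEARANCE = "appearance"
    DISABILITY = "disability"
    SEXUAL_ORIENTATION = "sexual orientation"

# An enum makes it impossible to record a flag against an
# unknown or misspelled category.
assert len(ProtectedCategory) == 8
```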
How automated detection works
A typical bias detection pipeline in AI hiring includes:
- Term scanning: Evaluation text is checked against a list of protected terms and phrases across all 8 categories. Each term is classified by severity (critical, high, medium).
- Pattern analysis: Score distributions are monitored across demographic groups to detect adverse impact — when a protected group receives significantly lower scores than the majority group.
- Flagging: When a potential bias indicator is detected, the evaluation is flagged for human review with a description of the concern and the relevant evidence.
- Human review: A human reviewer examines the flagged evaluation and decides whether the flag represents genuine bias or a false positive.
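The term-scanning and flagging steps above can be sketched in a few lines of Python. The term list and severity labels here are hypothetical placeholders; a production system would maintain much larger, expert-reviewed lists per category and combine this with statistical pattern analysis.

```python
from dataclasses import dataclass

# Hypothetical term list for illustration only. Real systems use
# reviewed, regularly updated term lists for all 8 categories.
PROTECTED_TERMS = {
    "age": {"recent graduate": "high", "young team": "medium"},
    "family status": {"pregnant": "critical", "childcare": "high"},
}

@dataclass
class Flag:
    """A potential bias indicator routed to human review."""
    category: str
    term: str
    severity: str

def scan_evaluation(text: str) -> list[Flag]:
    """Check evaluation text against protected terms; flag, never rewrite."""
    lowered = text.lower()
    flags = []
    for category, terms in PROTECTED_TERMS.items():
        for term, severity in terms.items():
            if term in lowered:
                flags.append(Flag(category, term, severity))
    return flags

flags = scan_evaluation(
    "Strong candidate, but as a recent graduate she may lack depth."
)
# → one flag: category "age", term "recent graduate", severity "high"
```

Note that the scanner only produces flags; consistent with the human-review principle below, it never alters or suppresses the evaluation text itself.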
Why human review is essential
Automated bias detection can identify potential issues, but it cannot determine intent or context. A reference to "family" in an evaluation might be a bias indicator — or it might be the candidate discussing their career motivation. Only a human reviewer can make this distinction.
This is why responsible AI hiring systems flag evaluations for human review rather than silently changing or suppressing AI outputs. Automatic correction would hide the problem rather than addressing it, and could itself introduce new forms of bias.
Adverse impact and the four-fifths rule
The four-fifths rule (or 80% rule) from the US Uniform Guidelines on Employee Selection Procedures provides a practical threshold: if the selection rate for a protected group is less than 80% of the rate for the group with the highest selection rate, adverse impact may exist.
While this originated in US employment law, the principle of monitoring for disparate outcomes across groups is increasingly expected under EU regulations as well. AI systems in hiring should track these metrics continuously, not just at point-in-time audits.
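The four-fifths check reduces to simple arithmetic: divide each group's selection rate by the highest group's rate and flag anything below 0.8. A minimal sketch, with hypothetical group names and rates:

```python
def adverse_impact_ratios(selection_rates: dict[str, float]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate."""
    top = max(selection_rates.values())
    return {group: rate / top for group, rate in selection_rates.items()}

# Hypothetical rates: group_a selected at 50%, group_b at 35%.
rates = {"group_a": 0.50, "group_b": 0.35}
ratios = adverse_impact_ratios(rates)

# Four-fifths threshold: ratios below 0.8 indicate possible adverse impact.
flagged = [g for g, r in ratios.items() if r < 0.8]
# group_b: 0.35 / 0.50 = 0.70 < 0.80 → flagged for review
```

Because the rule is a screening threshold rather than proof of discrimination, a flagged ratio should trigger the same human-review step described above, not an automatic conclusion.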
Regulatory context
Under the EU AI Act, AI systems used in recruitment are classified as high-risk and must implement risk management that includes bias monitoring. Article 9 requires providers to establish a risk management system across the system's lifecycle, Article 10 requires data governance measures that include examination for possible biases, and Article 15 requires appropriate levels of accuracy and robustness.