
10 Recruitment Metrics Every Hiring Team Should Track

February 2026 · 8 min read
Key Takeaways: Tracking the right hiring metrics transforms recruitment from a subjective process into a data-driven function. These 10 metrics — from pipeline efficiency to evaluation quality — give you visibility into what's working and what needs fixing.

Why Hiring Metrics Matter

Most hiring teams operate on intuition: "I think we're hiring well" or "the process feels slow." Without metrics, you can't distinguish between a process that's actually effective and one that just feels familiar.

Data-driven hiring teams make better decisions because they can identify bottlenecks, measure quality, and demonstrate ROI. Here are the 10 metrics that matter most.

Pipeline Metrics

1. Time-to-Hire

Definition: The number of days from when a candidate enters your pipeline to when they accept an offer.

Why it matters: Long hiring cycles lose top candidates. LinkedIn research shows that top candidates are off the market within 10 days. The average time-to-hire across industries is 36 days (SHRM, 2023).

What to track: Overall average, per-role breakdown, and stage-by-stage duration to identify where candidates get stuck.
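The stage-by-stage breakdown can be computed directly from the dates a candidate entered each stage. A minimal sketch, assuming a simple dict of stage-entry dates per candidate (the stage names here are illustrative, not a fixed schema):

```python
from datetime import date

def time_to_hire(stage_dates):
    """Days from pipeline entry to offer acceptance."""
    return (stage_dates["offer_accepted"] - stage_dates["applied"]).days

def stage_durations(stage_dates):
    """Days spent between consecutive stages, to spot where candidates get stuck."""
    ordered = sorted(stage_dates.items(), key=lambda kv: kv[1])
    return {
        f"{a}->{b}": (d2 - d1).days
        for (a, d1), (b, d2) in zip(ordered, ordered[1:])
    }

# Hypothetical candidate record
candidate = {
    "applied": date(2026, 1, 5),
    "screened": date(2026, 1, 8),
    "interviewed": date(2026, 1, 19),
    "offer_accepted": date(2026, 1, 26),
}
```

Here `time_to_hire(candidate)` is 21 days, and `stage_durations` reveals that the screened-to-interviewed gap (11 days) is the bottleneck.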

2. Time-to-First-Interview

Definition: Days from application received to first interview scheduled.

Why it matters: This is often the biggest bottleneck. Candidates who wait more than one week for a first interview are 2× more likely to drop out. AI screening can reduce this from days to hours.

3. Hiring Funnel Conversion

Definition: Conversion rates between each pipeline stage — applied → screened → interviewed → offered → hired.

Why it matters: Low conversion at a specific stage signals a problem. If 80% of candidates pass screening but only 10% pass interviews, your screening criteria may be too loose — or your interview bar may be miscalibrated.
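Funnel conversion is just the ratio between consecutive stage counts. A minimal sketch, assuming you have ordered (stage, count) pairs:

```python
def funnel_conversion(counts):
    """Stage-to-stage conversion rates from ordered (stage, count) pairs."""
    return {
        f"{a}->{b}": round(n2 / n1, 3)
        for (a, n1), (b, n2) in zip(counts, counts[1:])
    }

# Illustrative pipeline snapshot
pipeline = [("applied", 400), ("screened", 120), ("interviewed", 40),
            ("offered", 10), ("hired", 8)]
```

In this example, 30% of applicants pass screening but only 25% of interviewed candidates receive offers, pointing at the interview stage as the one to examine first.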

Quality Metrics

4. Quality of Hire

Definition: A composite score measuring new hire performance, typically combining hiring manager satisfaction, 90-day performance review, and retention at 12 months.

Why it matters: This is the ultimate measure of hiring effectiveness. If your quality of hire improves after implementing structured hiring, your process is working.

Challenge: Quality of hire is a lagging indicator — you don't know the result for months. Use evaluation scores and interviewer calibration as leading indicators.
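The composite described above can be expressed as a weighted sum. A minimal sketch; the weights and the 0-100 normalization are illustrative assumptions, not a standard:

```python
def quality_of_hire(manager_satisfaction, review_score, retained_12mo,
                    weights=(0.4, 0.4, 0.2)):
    """Composite quality-of-hire score on a 0-100 scale.

    manager_satisfaction and review_score are assumed normalized to 0-100;
    retained_12mo is a boolean. Weights are an illustrative choice."""
    w_sat, w_rev, w_ret = weights
    return (w_sat * manager_satisfaction
            + w_rev * review_score
            + w_ret * (100 if retained_12mo else 0))
```

For example, a hire with 80/100 manager satisfaction, a 90/100 review, and 12-month retention scores 88 under these weights.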

5. Interview Score Distribution

Definition: The distribution of candidate scores across your evaluation rubric (1–5 scale).

Why it matters: A healthy distribution shows differentiation. If 90% of candidates score 3.5–4.5, your rubric isn't differentiating — behavioral anchors may need tightening. If scores cluster at extremes, interviewers may not be using the middle of the scale.
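The "90% in a narrow band" pattern is easy to check automatically. A minimal sketch; the band and threshold are the article's example values, not fixed standards:

```python
def clustered(scores, low=3.5, high=4.5, threshold=0.9):
    """True if `threshold` or more of the scores fall in [low, high] --
    the narrow-band pattern that signals a rubric isn't differentiating."""
    inside = sum(low <= s <= high for s in scores)
    return inside / len(scores) >= threshold
```

Running this over a quarter's interview scores flags rubrics whose behavioral anchors need tightening.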

6. Offer Acceptance Rate

Definition: Percentage of offers extended that are accepted.

Why it matters: A low acceptance rate (below 80%) suggests problems with compensation, candidate experience, or process speed. Track reasons for declines to identify patterns.

Consistency Metrics

7. Interviewer Calibration

Definition: The degree to which different interviewers give similar scores to the same candidate or comparable candidates.

Why it matters: High variance between interviewers means your hiring outcomes depend on who conducts the interview rather than how well the candidate performed. Evidence-linked evaluation helps because disagreements can be resolved by examining the evidence.
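One simple way to quantify calibration is the average within-candidate spread of panel scores. A minimal sketch, assuming each candidate was scored by multiple interviewers:

```python
from statistics import mean, pstdev

def calibration_gap(panel_scores):
    """Average within-candidate standard deviation across interviewers.

    panel_scores: {candidate_id: [score from each interviewer]}.
    Lower is better; near zero means interviewers agree."""
    return mean(pstdev(scores) for scores in panel_scores.values())
```

A gap trending downward after rubric training is a sign that calibration is improving.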

8. AI vs. Human Score Correlation

Definition: How closely AI evaluation scores align with human interviewer ratings.

Why it matters: This isn't about whether AI is "right" — it's about identifying where human reviewers may be inconsistent. A consistent offset (interviewer always 0.5 points higher) is fine. Low correlation suggests the interviewer may not be using the scoring rubric.
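The two signals described here, a consistent offset versus low correlation, are both easy to compute. A minimal sketch over paired human and AI scores:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

def mean_offset(human, ai):
    """Average human-minus-AI difference; a steady offset is benign."""
    return sum(h - a for h, a in zip(human, ai)) / len(human)
```

An interviewer whose scores track AI scores perfectly but sit 0.5 points higher shows correlation 1.0 with offset 0.5: calibrated, just stricter. Correlation near zero is the case worth investigating.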

Compliance & Fairness Metrics

9. Adverse Impact Ratio

Definition: The selection rate for each protected group compared to the group with the highest selection rate. The four-fifths rule (80% threshold) is the standard benchmark.

Why it matters: Under the EU AI Act, high-risk AI systems in recruitment must monitor for bias. Tracking adverse impact continuously — not just at annual audits — is both a legal requirement and good practice.
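The four-fifths check itself is a short calculation: each group's selection rate divided by the highest group's rate. A minimal sketch with illustrative group labels:

```python
def adverse_impact(selected, applied):
    """Adverse impact ratio per group.

    selected/applied: {group: count}. Returns each group's selection rate
    divided by the highest group rate; values below 0.8 fail the
    four-fifths rule."""
    rates = {g: selected[g] / applied[g] for g in applied}
    top = max(rates.values())
    return {g: round(r / top, 3) for g, r in rates.items()}

# Illustrative counts: group B's ratio of 0.75 falls below the 0.8 threshold
applied = {"group_a": 100, "group_b": 80}
selected = {"group_a": 40, "group_b": 24}
```

Running this continuously over rolling selection data, rather than once a year, is what turns the annual audit into ongoing monitoring.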

10. Bias Flag Rate

Definition: The percentage of AI evaluations flagged for potential bias indicators across protected categories.

Why it matters: A high flag rate may indicate problems with the AI model, the job description, or the interview questions. A zero flag rate may indicate the detection system isn't sensitive enough. Track trends over time rather than absolute numbers.

Building a Hiring Dashboard

You don't need all 10 metrics from day one. Start with three:

  1. Time-to-hire — the efficiency baseline
  2. Interview score distribution — the quality indicator
  3. Hiring funnel conversion — the bottleneck finder

Add consistency and compliance metrics as your team and process mature. The goal isn't to track everything — it's to track what drives better decisions.

References

  • SHRM. (2023). "Average Time to Fill and Time to Hire Benchmarks."
  • LinkedIn Talent Solutions. (2023). "Global Talent Trends Report."
  • Schmidt, F.L., & Hunter, J.E. (1998). "The validity and utility of selection methods in personnel psychology." Psychological Bulletin.

Further Reading

The evidence layer for recruitment.

Ready to put structured hiring in place?

Start your free trial and see the difference AI-powered hiring makes.