Is Your Recruitment AI High-Risk Under the EU AI Act?

If your recruitment agency uses AI to screen CVs or rank candidates, you are operating a high-risk AI system under the EU AI Act. Here’s what that means and what you need to do before 2 August 2026.

By Clausely Team

Why recruitment AI falls into the high-risk category

If your recruitment agency, HR team, or talent acquisition function uses AI tools to screen CVs, rank candidates, or generate shortlists — you are almost certainly operating a high-risk AI system under the EU AI Act.

This is not a matter of interpretation or grey area. Annex III of Regulation (EU) 2024/1689 explicitly lists as high-risk: AI systems intended to be used for recruitment or selection of natural persons, notably for advertising vacancies, screening or filtering applications, and evaluating candidates in the course of interviews or tests.

If that describes how you use ChatGPT, Claude, Microsoft Copilot, or any other AI tool in your recruitment workflow, you face a significantly more demanding set of compliance obligations than other businesses — and the enforcement deadline is 2 August 2026.

What makes recruitment AI high-risk?

The EU AI Act classifies AI systems as high-risk where their outputs could have significant effects on people’s lives — particularly in contexts where individuals are in a vulnerable position or have limited ability to challenge decisions made about them.

Recruitment sits squarely in this category. A candidate who is screened out by an AI system and never reaches a human recruiter may never know why. They cannot challenge a ranking they cannot see. They cannot correct a CV that was misread by a language model. The economic and personal consequences of being excluded from employment are significant and, in many cases, irreversible.

The Act therefore imposes heightened obligations on anyone who deploys AI in this context — obligations designed to ensure transparency, human oversight, and the ability to challenge AI-assisted decisions.

What are the specific obligations for high-risk AI deployers in recruitment?

If you use AI in recruitment and have any EU-facing activity — including candidates based in EU member states, client employers in the EU, or EU offices — you are subject to the following obligations as a deployer of a high-risk AI system.

  • Fundamental Rights Impact Assessment (Article 27). Article 27 formally requires this written assessment only of certain deployers (bodies governed by public law, private entities providing public services, and deployers of certain credit and insurance systems), but for recruitment AI it is widely treated as essential practice before deployment. The assessment should cover the risk of discrimination, privacy violations, lack of transparency, unlawful automated decision-making, and the availability of an effective remedy for affected candidates.
  • AI Literacy (Article 4). Staff who operate or rely upon AI-assisted screening and shortlisting must be trained to a sufficient level of AI literacy — understanding the capabilities, limitations, and risks of the systems they use, including the risk of bias.
  • Human Oversight (Article 26). You must implement and document meaningful human oversight of AI-generated outputs. This means qualified recruiters must review AI-generated shortlists before they are acted upon, and no candidate may be rejected solely on the basis of AI output. Oversight must be genuine — rubber-stamping AI recommendations without independent review does not satisfy this requirement.
  • Transparency to Candidates (Article 50). Candidates must be informed that AI tools are used in the recruitment process. This disclosure must be clear and must be provided before or at the point of application.
  • Incident Response. You must have a procedure for detecting and responding to AI incidents — including outputs that appear to be biased, inaccurate, or discriminatory, and data breaches involving candidate personal data.
  • Vendor Risk Management. You must maintain documentation of the AI tools you use, their risk classifications, your data processing agreements, and the controls you have in place to manage the risks they present.

The discrimination risk is real

AI tools used in recruitment carry a well-documented risk of producing biased outputs. Large language models trained on historical data may reflect historical patterns of discrimination. Proxy variables — such as educational institution, postcode, or language style — may correlate with protected characteristics in ways that are invisible to the recruiter but systematic in their effect.

Under the Equality Act 2010, indirect discrimination — a provision, criterion, or practice that puts persons with a protected characteristic at a particular disadvantage — is unlawful regardless of intent. An AI system that systematically disadvantages candidates from particular backgrounds, even without any deliberate design to do so, may expose your business to discrimination claims.

The EU AI Act’s human oversight requirements are designed in part to catch exactly this kind of bias before it results in discriminatory outcomes. Regular audits of AI screening outputs, looking for patterns of under- or overrepresentation of particular demographic groups, are widely regarded as a minimum expectation for responsible deployment, even though the Act itself does not prescribe an audit frequency.
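The audit described above can be sketched in a few lines of Python. This is a hypothetical illustration, not regulatory guidance: it computes each group’s AI shortlisting rate and flags any group whose rate falls below four-fifths of the best-performing group’s rate, a threshold borrowed from the US EEOC’s “four-fifths rule” for adverse impact:

```python
from collections import defaultdict

def adverse_impact_check(outcomes, threshold=0.8):
    """Flag groups whose AI shortlisting rate falls below `threshold`
    times the best-performing group's rate.

    `outcomes` is a list of (group, shortlisted) pairs, where
    `shortlisted` is True if the AI passed the candidate through.
    """
    totals = defaultdict(int)
    passed = defaultdict(int)
    for group, shortlisted in outcomes:
        totals[group] += 1
        if shortlisted:
            passed[group] += 1

    rates = {g: passed[g] / totals[g] for g in totals}
    best = max(rates.values())
    # Impact ratio: each group's rate relative to the best group's.
    return {g: rate / best for g, rate in rates.items()
            if rate / best < threshold}

# Example: group B is shortlisted at half the rate of group A.
sample = [("A", True)] * 40 + [("A", False)] * 60 \
       + [("B", True)] * 20 + [("B", False)] * 80
flagged = adverse_impact_check(sample)
print(flagged)  # group B has an impact ratio of 0.5
```

A flagged group is not proof of discrimination, but it is exactly the signal that should trigger the human review and incident response procedures described above.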

What happens if you get it wrong?

Non-compliance with the EU AI Act’s high-risk provisions can result in fines of up to €15 million or 3% of global annual turnover, whichever is higher (for SMEs, whichever is lower). For a recruitment agency with £2 million annual revenue, even the SME-capped exposure of 3% of turnover is £60,000 per infringement.

Beyond regulatory fines, the practical consequences include discrimination claims from affected candidates, regulatory investigations triggered by candidate complaints, reputational damage affecting client relationships, and loss of contracts with clients who require supplier AI governance compliance.

What documentation do you need?

To comply with the EU AI Act as a high-risk AI deployer in recruitment, you need:

  1. AI Acceptable Use Policy — governing which tools are approved and how they must be used
  2. AI Literacy Policy — ensuring recruiters understand the capabilities, limitations, and risks of the AI systems they use
  3. Article 50 Transparency Disclosures — candidate-facing disclosure of AI use
  4. Human Oversight SOP — documented procedures for recruiter review of AI outputs
  5. AI Incident Response Procedure — for detecting and responding to AI errors, bias events, and data incidents
  6. Vendor AI Risk Register — covering ChatGPT, Claude, Copilot, and any other tools used
  7. Fundamental Rights Impact Assessment — specific to your recruitment AI use case
  8. AI Risk Management Plan — identifying, assessing, and mitigating risks across the workflow
  9. Conformity Self-Assessment Checklist — article-by-article review of your compliance position

That is nine documents. Produced by a law firm, this work typically costs between £4,000 and £8,000 and takes several weeks.

Clausely’s High-Risk Ready Pack generates all nine documents, tailored to your specific recruitment business — your AI tools, your EU exposure, your oversight arrangements, and your accountable person — in under 10 minutes.

Recommended next step

Start the High-Risk Ready intake.

Recruitment AI is one of the clearest high-risk use cases under Annex III, and the High-Risk Ready Pack addresses it directly, with the FRIA, risk management plan, and conformity documentation built in.

Get your High-Risk Ready Pack for £2,499

This article was written with AI assistance and reviewed for accuracy against current UK and EU regulatory guidance. It does not constitute legal advice. If you require specific legal guidance, please consult a qualified solicitor.