Every major applicant tracking system vendor is pitching AI-driven resume screening, interview scoring, or candidate ranking right now. For HR teams in Europe, that sales pitch now comes with a compliance question that most vendors haven't fully answered yet: does your tool qualify as high-risk AI under the EU AI Act?
The answer, in most cases, is yes.
Annex III of the EU AI Act lists the categories of AI systems that are automatically classified as high-risk. Number four on that list is explicit: AI systems used for recruitment and selection of natural persons, including for advertising vacancies, screening or filtering applications, and evaluating candidates in the course of interviews or tests. This is not a gray area. Recruitment AI is high-risk AI.
The implications are significant. High-risk AI systems are subject to requirements that go well beyond a standard product purchase. Under Article 9, providers must implement a risk management system throughout the system's lifecycle. Article 10 requires training, validation, and testing data to meet quality criteria and to be examined for possible biases. Article 12 requires automatic logging, Article 13 mandates transparency toward deployers, and Article 14 requires human oversight to be built into how the system is used.
For HR teams, this is not just a procurement question. It is an operational and legal responsibility that sits squarely on your desk.
One of the more important distinctions in the EU AI Act is between providers (those who build and sell the AI system) and deployers (those who use it in a professional context). Most HR teams using off-the-shelf recruitment AI are deployers.
Deployers are not off the hook under the EU AI Act. Article 26 sets out specific obligations for deployers of high-risk AI. You must assign oversight to staff with adequate training and competence, use the system in line with its instructions, monitor it for risks during use, and report serious incidents to your national market surveillance authority. Article 27 adds that certain deployers must also carry out a fundamental rights impact assessment before putting the system into use.
This changes how HR professionals should approach vendor conversations. The question is no longer just what can this tool do. It is what documentation do you provide so we can fulfill our obligations as deployers.
The EU AI Act requirements on training data in Article 10 exist because the problem of biased AI hiring algorithms is documented and ongoing. Amazon famously scrapped its AI recruitment tool in 2018 after discovering it systematically penalized resumes that included the word "women's." Similar issues have been documented with tools that use voice analysis, facial recognition, or language models trained on historical hiring data.
The challenge for HR teams is that bias in AI hiring tools is often invisible at the individual level. A system may appear to work perfectly case by case while systematically disadvantaging certain demographic groups at scale. This is precisely why the EU AI Act requires ongoing monitoring, data quality controls, and human oversight rather than a one-time compliance check.
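To make the scale effect concrete: aggregate selection rates by demographic group can reveal a disparity that no single decision would. The sketch below is a rough illustration, not a legal test under the AI Act. It computes per-group pass rates on hypothetical screening data and flags any group whose rate falls below 80% of the best-performing group's, a threshold borrowed from the US "four-fifths" rule of thumb.

```python
from collections import defaultdict

# Hypothetical screening outcomes as (group, passed_ai_screen) pairs.
# Group labels and results are illustrative, not real data.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

def selection_rates(outcomes):
    """Share of candidates in each group who passed the AI screen."""
    passed, total = defaultdict(int), defaultdict(int)
    for group, ok in outcomes:
        total[group] += 1
        passed[group] += ok
    return {g: passed[g] / total[g] for g in total}

def impact_ratios(rates):
    """Each group's selection rate relative to the best-performing group."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

rates = selection_rates(outcomes)
for group, ratio in impact_ratios(rates).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rates[group]:.2f} ratio={ratio:.2f} {flag}")
```

A check like this is a monitoring signal, not a verdict: a flagged ratio tells you where to look, while the Act's obligations around data quality and human oversight determine what you do next.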
If you are using a recruitment AI tool that ranks candidates, scores interviews, or filters resumes, you need to understand on what basis it makes those assessments. Vendors who cannot explain this in plain language are a liability, not an asset.
The EU AI Act entered into force in August 2024. The provisions on high-risk AI systems in Annex III begin applying in August 2026. That sounds like there is time, but the preparation required is substantial and does not happen overnight.
A fundamental rights impact assessment takes time to conduct properly. Updating vendor contracts to include the documentation and audit rights required by Article 26 involves legal review. Training HR staff on the obligations and safeguards around AI systems is a process, not a workshop. If your organization has not started this work, August 2026 is closer than it looks.
Organizations using high-risk recruitment AI without the required safeguards face enforcement actions from national authorities, fines that under Article 99 can reach EUR 15 million or 3% of worldwide annual turnover for breaches of the high-risk obligations, and the reputational damage that comes with a public finding of non-compliance.
The EU AI Act does not prohibit using AI for recruitment. It regulates how AI is used and what safeguards must be in place. A compliant HR automation setup has several characteristics.
There is documented human oversight. Every AI-assisted hiring decision has a named human who reviewed the output and made the final call. That person has received training on what the AI can and cannot reliably do.
There is a technical log. Article 12 requires high-risk AI systems to keep logs sufficient to trace decisions after the fact. If a candidate challenges a rejection, you need to be able to explain what role the AI played.
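The Act does not prescribe the exact shape of such a record. The sketch below is one illustrative schema (all field names and example values are assumptions, not requirements) for logging an AI-assisted decision so that the AI's role and the named human reviewer can be reconstructed if a candidate challenges the outcome.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class HiringDecisionRecord:
    """One traceable, AI-assisted hiring decision (illustrative schema)."""
    candidate_id: str    # internal reference, not the candidate's name
    vacancy_id: str
    ai_system: str       # tool name and version used
    ai_output: str       # what the AI recommended (e.g. a score or rank)
    human_reviewer: str  # the named person who made the final call
    final_decision: str  # e.g. "advance" or "reject"
    overrode_ai: bool    # did the reviewer depart from the AI output?
    rationale: str       # reviewer's reasoning, in plain language
    timestamp: str       # UTC, ISO 8601

record = HiringDecisionRecord(
    candidate_id="cand-0042",
    vacancy_id="vac-2026-07",
    ai_system="ExampleScreener v2.1",  # hypothetical vendor tool
    ai_output="score 34/100, ranked 118 of 140",
    human_reviewer="j.doe",
    final_decision="reject",
    overrode_ai=False,
    rationale="Missing required certification; AI score consistent with manual review.",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# Append-only JSON lines make individual decisions easy to reconstruct later.
print(json.dumps(asdict(record)))
```

Storing the reviewer and the override flag alongside the AI output is what turns a technical log into evidence of human oversight, not just telemetry.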
There is a vendor agreement that covers your deployer obligations. The vendor has provided a conformity declaration under Article 47 and documentation covering intended use, training data sources, known limitations, and how the system was validated.
There is a process for handling complaints. Candidates have rights under both GDPR and the AI Act when decisions are made about them using automated tools. Your HR process needs to accommodate those rights.
Recruitment AI sits at the intersection of employment law, data protection law, and now AI regulation. GDPR Article 22 already restricts solely automated decisions that produce legal or similarly significant effects for individuals. The EU AI Act adds a new layer on top. These frameworks do not always align neatly.
This is not a situation where HR can handle compliance in isolation or where legal can handle it without understanding how the tools actually work. It requires a sustained conversation between the people who use the tools day to day and the people who understand the regulatory obligations.
The organizations that will navigate this well are the ones starting those conversations now, before August 2026, before the enforcement actions begin, and before a candidate files a complaint that exposes gaps in your process.
AI hiring tools can make recruitment faster and, if built and deployed correctly, more consistent. But faster and more consistent are not the same as lawful. The EU AI Act is asking HR teams to hold both standards at the same time.