Governance & Compliance

EU AI Act: HR AI Is High-Risk from August 2026

HR AI is high-risk under EU AI Act Annex III. Six mandatory obligations, deadlines, and how the Decision Layer meets each requirement.

Bert Gogolin
CEO & Founder · 6 min read

The EU AI Act places almost every AI system used in HR processes into the high-risk category. From August 2026, these systems must meet strict requirements for risk management, transparency, and human oversight.

At a Glance - HR AI Is High-Risk Under the EU AI Act

  • AI systems for recruiting, performance reviews, promotions, and terminations are classified as high-risk under Annex III, No. 4 of the EU AI Act.
  • Prohibited AI practices (social scoring, manipulation) have been in force since August 2025. Full high-risk obligations apply from August 2026 (the Digital Omnibus may defer this deadline to December 2027).
  • Six mandatory requirements: risk management, data governance, record-keeping, transparency, human oversight, and accuracy/robustness.
  • The Decision Layer maps each requirement architecturally: Confidence Routing for risk, versioned rule sets for data governance, audit trail for records, Human-in-the-Loop for oversight.
  • Penalties reach up to 35 million euros or 7% of global annual turnover for the most severe violations.

According to a PwC survey (2024), only 24% of enterprises using AI in HR processes have begun formal EU AI Act compliance preparation, despite the August 2026 deadline for high-risk systems.

EU AI Act Article | Requirement | Decision Layer Implementation
Art. 9 | Risk Management System | Confidence Routing with configurable thresholds
Art. 10 | Data Governance | Versioned rule sets with validity dates
Art. 12 | Record-Keeping | Immutable audit trail per decision
Art. 13 | Transparency | Auditor Portal with full decision path
Art. 14 | Human Oversight | Enforced Human-in-the-Loop for defined types
Art. 15 | Accuracy and Robustness | Bias monitoring and model-agnostic design

The Classification: HR AI is High-Risk

The EU AI Act classifies AI systems used in employment, worker management, and access to self-employment as high-risk (Annex III, No. 4). This specifically covers:

  • AI systems for recruiting and candidate selection.
  • AI systems that influence promotion, termination, task assignment, or performance monitoring.
  • AI systems that affect working conditions - including salary adjustments, classifications, and shift planning.

In short: almost every AI agent that prepares, supports, or makes decisions in HR processes falls under the high-risk category.

The Deadlines

Since August 2025, prohibitions on unacceptable AI practices have been in force - including social scoring and manipulative techniques. From August 2026 (the Digital Omnibus may defer this deadline to December 2027), the full obligations for high-risk AI systems take effect. The transition period is not generous: companies that are not building governance structures today will not be compliant in August 2026.

What Is Specifically Required - and How the Decision Layer Fulfils It

The following requirements apply to every operator of a high-risk AI system in the HR domain:

Article 9 - Risk Management System: The EU AI Act requires a continuous risk management system that identifies, assesses, and mitigates risks. In the Decision Layer, this is implemented through Confidence Routing: every agent decision is automatically evaluated by confidence and risk category. High risk or low confidence leads to escalation to a human. Thresholds are configurable and documented.
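As a minimal sketch of how such routing could look - the threshold value, category names, and function names here are illustrative assumptions, not the product's actual API:

```python
from enum import Enum

class Route(Enum):
    AUTONOMOUS = "autonomous"       # agent may decide on its own
    HUMAN_REVIEW = "human_review"   # escalate to a human reviewer

# Illustrative values: in practice, thresholds are configured per
# decision type and documented as part of the Art. 9 risk management system.
CONFIDENCE_THRESHOLD = 0.85
HIGH_RISK_CATEGORIES = {"termination", "promotion", "salary_adjustment"}

def route_decision(risk_category: str, confidence: float) -> Route:
    """Escalate high-risk categories or low-confidence decisions to a human."""
    if risk_category in HIGH_RISK_CATEGORIES or confidence < CONFIDENCE_THRESHOLD:
        return Route.HUMAN_REVIEW
    return Route.AUTONOMOUS
```

The point is that escalation is a deterministic function of risk and confidence, so the routing behaviour itself can be documented and audited.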

Article 10 - Data Governance: Versioned rule sets in the Decision Layer make the data basis of every decision traceable. Collective agreements, works council agreements (Betriebsvereinbarungen), and compliance rules carry versions, validity dates, and scopes. During an audit, you can trace exactly which version of which rule set applied at the time of the decision.

Article 12 - Record-Keeping Obligations: The audit trail in the Decision Layer generates a complete, immutable data record for every decision: input, model, rule set, confidence, routing decision, result, timestamp. Automatically, not compiled after the fact.
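One common way to make such records tamper-evident is to chain each record's hash to its predecessor; the sketch below assumes this approach and uses illustrative field names - the article does not specify how immutability is implemented internally:

```python
from dataclasses import dataclass, asdict
import hashlib
import json

@dataclass(frozen=True)  # frozen: fields cannot be mutated after creation
class AuditRecord:
    decision_input: str
    model: str
    rule_set: str
    confidence: float
    routing: str
    result: str
    timestamp: str

def record_hash(record: AuditRecord, prev_hash: str) -> str:
    """Chain each record to its predecessor so any tampering is detectable."""
    payload = json.dumps(asdict(record), sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()
```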

Article 13 - Transparency: Every agent decision is traceable in the Auditor Portal. Works councils, data protection officers, and auditors can view the decision path. No black box.

Article 14 - Human Oversight: Human-in-the-Loop is an architectural principle in the Decision Layer, not an optional setting. For defined decision types - discrimination potential, co-determination topics, value thresholds - the architecture enforces human review. An agent cannot bypass this review.
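The "cannot bypass" property means the check sits in the execution path rather than in agent configuration. A minimal sketch, with hypothetical decision-type names:

```python
# Decision types that always require human sign-off (illustrative set).
HUMAN_REVIEW_REQUIRED = {"discrimination_potential", "co_determination", "high_value"}

class HumanApprovalRequired(Exception):
    """Raised when a decision needs human sign-off before it can be finalized."""

def finalize(decision_type: str, human_approved: bool = False) -> str:
    # The gate lives in the finalization path itself, not in the agent's
    # prompt or settings, so no agent behaviour can route around it.
    if decision_type in HUMAN_REVIEW_REQUIRED and not human_approved:
        raise HumanApprovalRequired(decision_type)
    return "finalized"
```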

Article 15 - Accuracy, Robustness, and Cybersecurity: Bias monitoring systematically checks for discriminatory patterns. Confidence thresholds ensure that the agent only decides autonomously with sufficient certainty. Model-agnostic design enables switching the language model without changing the governance logic.
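One standard check for discriminatory patterns is the four-fifths rule: flag adverse impact when any group's selection rate falls below 80% of the most-favoured group's rate. The sketch below shows that check in general form; the article does not state which specific metrics the Decision Layer's bias monitoring uses:

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def passes_four_fifths(outcomes: dict[str, tuple[int, int]]) -> bool:
    """True if every group's selection rate is at least 80% of the best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return all(rate >= 0.8 * best for rate in rates.values())
```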


What This Means for HR Departments

Companies that use or plan to use AI in HR processes today must build governance structures by August 2026. This specifically means:

  • Documented decision logic for every AI-supported HR process.
  • Technically enforced Human-in-the-Loop mechanisms for decisions with personnel impact.
  • Audit-proof trails that document how every decision was made.
  • Bias monitoring that detects and reports discriminatory patterns.

In Germany, the requirements of the Works Constitution Act (Betriebsverfassungsgesetz) add to this: works councils have a co-determination right for technical facilities that monitor the behaviour or performance of employees (§ 87(1) No. 6 BetrVG). AI agents in HR processes fall under this category.

The Decision Layer addresses both requirement blocks - EU AI Act and German co-determination law (Mitbestimmungsrecht) - in one architecture.

Schedule a call - We’ll show you which of your HR processes fall under the high-risk category and how the Decision Layer meets the requirements.

Bert Gogolin, CEO & Founder, Gosign



Frequently Asked Questions

Do HR AI systems fall under the EU AI Act?

Yes. AI systems used in employment, worker management, and access to self-employment fall under Annex III No. 4 of the EU AI Act and are classified as high-risk AI systems.

When do the high-risk obligations take effect?

The obligations for high-risk AI systems under the EU AI Act apply from August 2026 (the Digital Omnibus may defer this deadline to December 2027). Prohibited AI practices have been in effect since August 2025.

What happens if a company fails to meet the obligations?

Fines of up to 35 million euros or 7% of global annual turnover. Additionally, reputational risks and potential liability claims.

How does the Decision Layer meet the AI Act requirements?

The Decision Layer maps the requirements architecturally: risk management through Confidence Routing, data governance through versioned rule sets, transparency through audit trail, human oversight through Human-in-the-Loop.
