AI bias in hiring is a documented problem. In 2014, Amazon built an AI tool to screen CVs. The tool taught itself to penalise CVs containing the word "women's" (as in "women's chess club captain") and to downgrade graduates of all-women's colleges. The project was eventually scrapped.

Amazon is one of the most technically sophisticated companies on earth. The problem wasn't incompetence. The problem is structural: AI learns from data, and data reflects the world as it was — not as it should be.

The legal reality

The Equality Act 2010 applies to algorithmic decisions exactly as it applies to human ones. If an AI tool produces outcomes that discriminate against people with protected characteristics — intentionally or not — your business is exposed. The mechanism of the discrimination doesn't change the liability.

How bias gets into AI systems

The mechanism is straightforward, which makes it all the more difficult to eliminate. An AI hiring tool trained on historical data doesn't just learn what good candidates look like — it learns what good candidates looked like in the past. If past hiring decisions were influenced by unconscious bias, the AI learns those biases. If certain groups were systematically underrepresented in senior roles, the AI learns to treat that underrepresentation as normal.

The AI isn't "trying" to discriminate. It's doing exactly what it was designed to do: find patterns. The patterns it finds are the patterns that exist in the training data. And historical hiring, lending, and customer service data contains historical biases — often severe ones — that the AI then amplifies at the scale and speed that only automation can achieve.
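The mechanism described above can be sketched in a few lines of code. The toy model below never sees group membership at all; it only scores candidates by the historical hire rate of people sharing an irrelevant proxy feature (a hobby keyword, say) that happens to correlate with group. All of the data and names here are invented for illustration; this is a minimal sketch of proxy bias, not any real screening system.

```python
import random

random.seed(0)

# Toy historical dataset (illustrative, not real data): each candidate has a
# proxy feature correlated with group membership and a hire/reject label
# that reflects past human bias (group B hired less often when equally able).
def make_history(n=10_000):
    data = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        # Proxy feature: correlated with group, irrelevant to the job.
        proxy = (group == "A") if random.random() < 0.9 else (group == "B")
        qualified = random.random() < 0.5
        hire_rate = 0.8 if group == "A" else 0.4   # the historical bias
        hired = qualified and random.random() < hire_rate
        data.append({"group": group, "proxy": proxy, "hired": hired})
    return data

history = make_history()

def hire_rate(rows):
    return sum(r["hired"] for r in rows) / len(rows)

# "Train" a naive pattern-matcher: score each candidate by the historical
# hire rate of everyone sharing their proxy feature. Group is never an input.
rate_with_proxy = hire_rate([r for r in history if r["proxy"]])
rate_without_proxy = hire_rate([r for r in history if not r["proxy"]])

def model_score(candidate):
    return rate_with_proxy if candidate["proxy"] else rate_without_proxy

# The model reproduces the historical disparity through the proxy feature.
scores_a = [model_score(r) for r in history if r["group"] == "A"]
scores_b = [model_score(r) for r in history if r["group"] == "B"]
print(f"mean score, group A: {sum(scores_a) / len(scores_a):.2f}")
print(f"mean score, group B: {sum(scores_b) / len(scores_b):.2f}")
```

Note that removing the protected characteristic from the inputs does nothing here: the bias travels through the correlated proxy, which is exactly why "we don't collect gender" is not a defence.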

This is what makes AI bias so dangerous for businesses. It's not obviously visible. It doesn't announce itself. The AI makes thousands of decisions — all looking technically correct, all following the same pattern — and the discriminatory effect only becomes apparent in aggregate, often only after someone raises a formal complaint.

The UK legal exposure

The Equality Act 2010 prohibits discrimination on the basis of protected characteristics: age, race, sex, disability, religion or belief, sexual orientation, gender reassignment, pregnancy and maternity, and marriage or civil partnership. It applies to employment decisions — hiring, promotion, redundancy — and also to service provision, lending, and customer treatment.

The Act doesn't contain an exemption for decisions made by algorithms. An AI tool that produces discriminatory outcomes creates the same legal exposure as a manager who makes the same discriminatory decision manually — in some ways more, because the AI applies that pattern consistently and at scale. Additionally, the EU AI Act explicitly classifies AI systems used in recruitment and employment decisions as high-risk, requiring specific governance, audit trails, and human oversight — a framework UK businesses selling into the EU or working with EU partners will need to take seriously.

Four AI bias exposure areas

  • Hiring & HR: CV screening, promotion decisions, performance scoring, redundancy selection
  • Lending & credit: automated credit decisions that reflect historically discriminatory lending patterns
  • Customer profiling: differential pricing, service quality, or eligibility criteria based on biased models
  • Protected characteristics: age, race, gender, disability, all protected under the Equality Act 2010

"The algorithm decided" makes it worse

There's a tempting assumption that automating a decision removes human accountability. The ICO has been explicit: it does not. Under UK GDPR Article 22, individuals have rights in relation to solely automated decisions that significantly affect them, including the right to a human review and the right to an explanation. If your AI makes a consequential decision — rejecting a job application, declining a loan, classifying a customer — and there is no human review process, no explanation, and no appeals route, you may be in breach of multiple overlapping legal obligations.
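The safeguard Article 22 points towards can be sketched as a routing rule: a consequential AI decision is never final on its own, but is logged with its explanation and held for human review, with an appeals route for the individual. The sketch below is a hypothetical structure (the names `Decision` and `ReviewQueue` are invented), not ICO-endorsed tooling, but it shows the shape of a defensible pipeline.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Decision:
    subject_id: str
    outcome: str          # e.g. "reject", "decline", "flag"
    model_rationale: str  # explanation surfaced to the individual
    consequential: bool   # does it significantly affect the individual?
    human_reviewed: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class ReviewQueue:
    """Holds consequential AI decisions for human sign-off, keeps an audit log."""

    def __init__(self):
        self.pending: list[Decision] = []
        self.audit_log: list[Decision] = []

    def submit(self, decision: Decision) -> str:
        # Every decision is logged, creating the audit trail regulators expect.
        self.audit_log.append(decision)
        # Consequential decisions are never final without a human in the loop.
        if decision.consequential and not decision.human_reviewed:
            self.pending.append(decision)
            return "pending_human_review"
        return "final"

queue = ReviewQueue()
status = queue.submit(Decision(
    subject_id="cand-042",
    outcome="reject",
    model_rationale="CV score below screening threshold",
    consequential=True,
))
print(status)  # pending_human_review: no AI-only final decision
```

The design choice that matters is the default: the system must fail towards human review, so that an unreviewed consequential decision can never silently become final.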

Worse, "the algorithm decided" as a defence in a discrimination claim actually concentrates accountability on the business rather than distributing it. It demonstrates that a system was in place — your system — that produced discriminatory outcomes, without adequate human oversight to catch them. That's harder to defend, not easier.


Algorithmic bias isn't always obvious. It can persist for months before a pattern becomes visible — often only after a discrimination complaint.

Managing AI bias risk: Detect, Assess, Defend
Detect
  • Bias testing on AI decisions: statistical analysis of outcomes by group
  • Outcome disparity analysis: compare pass rates across protected characteristics
  • Complaint pattern review: are complaints clustering around any group?

Assess
  • Which decisions are AI-assisted? Map every consequential AI touchpoint
  • Which protected characteristics are affected? Identify which groups could be disadvantaged
  • Does the EU AI Act high-risk classification apply? HR and employment AI is explicitly high-risk

Defend
  • Human oversight for consequential decisions: no AI-only final decision on protected groups
  • Bias audits: regular independent fairness testing
  • Equality impact assessment for AI: before deployment and at each update
  • Right to human review processes: documented appeals route for affected individuals
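The "outcome disparity analysis" step above can be as simple as comparing each group's pass rate to the best-performing group's. The sketch below uses the US "four-fifths rule" threshold as one common heuristic; UK equality law has no fixed numeric test, so treat a low ratio as a monitoring signal that warrants investigation, not a legal conclusion. The counts are made up for illustration.

```python
def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants from a group who passed the screening stage."""
    return selected / total

def disparity_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Ratio of each group's selection rate to the highest group's rate."""
    rates = {g: selection_rate(s, t) for g, (s, t) in outcomes.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# Hypothetical screening outcomes: (passed, applied) per group.
outcomes = {"group_a": (120, 300), "group_b": (45, 250)}

ratios = disparity_ratios(outcomes)
for group, ratio in sorted(ratios.items()):
    flag = "below 0.8, investigate" if ratio < 0.8 else "ok"
    print(f"{group}: ratio {ratio:.2f} ({flag})")
```

Here group_a passes at 40% and group_b at 18%, giving group_b a ratio of 0.45, well under the 0.8 heuristic. Running a check like this on a schedule, rather than waiting for a complaint, is what turns the Detect column into practice.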

How BBS helps with this

  • EU AI Act Compliance Assessment — We classify your AI tools against the EU AI Act risk framework, identifying which systems meet the high-risk threshold and what governance obligations that creates for your business.
  • AI Governance & Policy Drafting — We establish human oversight mechanisms and documented audit trail requirements for any AI-assisted hiring or decision-making process, giving you a defensible position before a complaint arises.
  • AI Security Gap Assessment — We evaluate your AI systems for fairness testing gaps and bias exposure, including outcome disparity analysis across relevant protected characteristics.
  • Staff Awareness Training — We train managers who use AI in HR processes on their Equality Act obligations, what algorithmic bias looks like in practice, and when human judgement must override an AI recommendation.