Fudia Smartt

Employment law partner at Spencer West

AI is finding its way into many aspects of employment and HR practices. Fudia Smartt, employment partner at Spencer West, explores the potential legal pitfalls.

Although AI has been around for decades, it is only in the last couple of years that its use has become commonplace, including across all aspects of the employment life cycle.

For instance, the use of AI in recruitment has risen exponentially, from assessing applicants’ facial and vocal expressions to conducting interviews via chatbots and communicating with candidates through the use of textbots. But recruitment is not the only area: AI is being deployed in other HR decisions such as redundancies, performance dismissals, promotions and reward.

It is also being used for employee-engagement purposes, from collecting data on employee dissatisfaction to responding to HR-related queries. Given its ability to process large amounts of data quickly, AI can significantly reduce employers' recruitment costs.

However, at present the UK lacks any legislative framework on AI usage, which gives rise to potential legal issues.


Discrimination

Through the power of machine learning, AI can also learn to exhibit biases, which are likely to fall foul of the Equality Act 2010. The Act prohibits discrimination based on protected characteristics such as sex, race, age and disability.

For instance, in 2018 Amazon scrapped its secret automated recruitment system, which it had used to evaluate applicant suitability. This was because, through assessing previous successful applicants, the AI tool had become biased against women.

This highlights the importance of employers scrutinising the outcomes from using AI to guard against biases creeping in, particularly where the AI is learning from previous institutional decisions. AI is learning from us, after all, and we all recognise that our decision-making can be flawed and biased.

Employers’ failure to monitor the use of AI for its discriminatory impact could result in potentially expensive claims, since the Equality Act 2010 makes no distinction between human and AI-assisted decision-making when it comes to liability.

Data Protection

The UK General Data Protection Regulation (UK GDPR) imposes restrictions on the collection and processing of personal data.

Under Article 22 of the UK GDPR, individuals have the right not to be subject to a decision based solely on automated processing (ie decisions made without any human involvement).

This includes profiling to analyse or predict aspects of a person’s performance at work, reliability or behaviour, where the decision produces “legal effects” on (or similarly significantly affects) the individual. For these purposes, that includes decisions on whether someone is recruited, receives a bonus or is promoted.

Employers need to tread carefully if they intend to use automated decision-making in their HR toolkit: they must be transparent with staff about its existence (and why it is being used) and explain the likely consequences of its use for individuals.

Such information needs to be set out in an employer’s privacy notice. Further, employers need to provide privacy safeguards, which include giving staff members the right to obtain “human intervention”, to express their point of view and to contest the decision.

Common law

The use of AI could potentially run counter to certain key legal concepts that have been developed over centuries. For instance, under English common law, certain terms are implied into every employment contract, such as the duty of mutual trust and confidence.

Where employers rely solely on AI to make certain decisions, such as whether to award a bonus, it becomes harder for an employer to show that it has acted reasonably. It also calls into question the intended personal nature of the employment relationship (ie the expectation of personal service) now that employees are increasingly expected to work with AI rather than with other people.

Where next?

There is no putting the genie back in the bottle when it comes to the use of AI. It is here to stay.

However, employers cannot simply abdicate responsibility for key employment-related decisions through using AI. Like any good tool, it needs to be used in conjunction with (as opposed to replacing) human decision-making.

We are still awaiting clarity from the government on how it intends to govern the use of AI. In March 2023, the Department for Science, Innovation and Technology (DSIT) published a white paper aimed at providing a framework for AI development and use.

The approach is guided by five “non-statutory” principles: (i) safety; (ii) transparency and explainability; (iii) fairness; (iv) accountability; and (v) contestability. Instead of creating a single regulator, the government has said it will support existing regulators, such as the Equality and Human Rights Commission and the ICO, to each develop a tailored approach to AI development and use within their respective sectors. We will have to wait until 2024 to see how this is expected to work in practice.

In the meantime, the TUC has also commissioned a comprehensive report, which has suggested changes to employment law, such as amending the Employment Rights Act 1996 to give workers the right not to be subjected to detrimental treatment because of inaccurate data.

While we await greater legal clarity with bated breath, employers would do well to proceed with caution.
