Campaigners, trade unions and MPs are calling for stricter oversight of the use of artificial intelligence in the workplace, amid growing concerns about its effect on staff rights.

The Trades Union Congress (TUC) is holding a half-day conference on Tuesday to highlight the challenges of ensuring that workers are treated fairly, as what it calls “management by algorithm” becomes increasingly prevalent.

“Making work more rewarding, making it more satisfying, and crucially making it safer and fairer: these are all the possibilities that AI offers us,” said Mary Towers, an employment lawyer who runs a TUC project on AI at work.

“But what we’re saying is, we’re at a really important juncture, where the technology is developing so rapidly, and what we have to ask ourselves is, what direction do we want that to take, and how can we ensure that everyone’s voice is heard?”

The TUC has highlighted the growing use of employee surveillance. The Royal Mail chief executive, Simon Thompson, recently conceded that some postal workers’ movements were being minutely tracked using handheld devices, with the data then used for purposes including performance management. However, speaking to MPs in February, Thompson blamed rogue managers for breaching the company’s policy.

Striking staff at Amazon’s Coventry warehouse have described a tough regime of ever-changing targets that they believe are set by AI. Amazon says these performance goals are “regularly evaluated and built on benchmarks based on actual attainable employee performance history”.

An operations manager who had worked at several retail distribution centres told academics compiling a recent piece of TUC research: “At some point, warehouses will be expecting the efficiency of robots from humans.”

Matt Buckley, the chair of United Tech and Allied Workers, a branch of the Communication Workers Union focusing on the sector, said his members had highlighted worries about being monitored at work.

“There’s really no regulation at all around employee surveillance as a concept at the moment; it’s really just up to companies,” he said. “Really, what we need is not a series of new laws, it’s a new body that can be flexible and iterative, and responsive to workers’ needs.”

But campaigners say some of the most alarming cases are those where judgments about workers’ behaviour are effectively made by algorithms, with little or no human oversight – including so-called “robo-firings”.

A group of UK-based Uber drivers recently won a case against the platform at the court of appeal in Amsterdam, forcing it to reveal details about how decisions had been made about them.

The company is considering whether to appeal against the ruling at the Dutch supreme court. A spokesperson said: “Uber maintains the position that these decisions were based on human review and not on automated decision-making.”

Cases such as this have relied on the EU’s General Data Protection Regulation (GDPR), which campaigners warn the UK government is poised to weaken in forthcoming legislation.

They argue that the data protection and digital information bill, due to have its second reading in the House of Commons on Monday, will make it easier for firms to turn down workers’ requests for data held about them, and loosen the requirement to have a human involved in decision-making.

Cansu Safak, of the campaign group Worker Info Exchange, which supported the Uber case, said: “We’re essentially trying to bridge the gaps in employment law by using the GDPR. The reason we’re using the GDPR is because these workers have no other recourse. They have no other avenues of redress.”

Adam Cantwell-Corn, of Connected by Data, which calls for more public involvement in the way AI is implemented, said: “Most people’s experience of GDPR is annoying pop-ups, but if we understand it in the context of increasing datafication and artificial intelligence in the workplace in particular, it’s got really important provisions that the bill is weakening.”

Labour’s deputy leader, Angela Rayner, who has the future of work in her portfolio, said: “The powerful potential of data analysis and artificial intelligence is already transforming our economy. Rights at work must keep pace with these changes so that risks can be managed and harm prevented, while benefits are felt by workers.

“Labour will update employment rights and protections so they are fit for the modern economy.”

Separately, the UK government published a white paper on AI last month that set out a series of principles for the use of the technology, including the need for fairness, transparency and “explainability”.

It suggested that existing regulators, including the Health and Safety Executive and the Equality and Human Rights Commission, could take on the responsibility of ensuring that these principles were followed.

But Cantwell-Corn dismissed this approach as “basically just a bunch of intentions with no firepower behind it”.

Even some Conservatives agree. The former cabinet minister David Davis, who has a long history of defending civil liberties, said: “The conventional regulatory approach will fail – because it will be civil servants thinking they know what’s going on, when they don’t.”

He called for a “rapid royal commission” on the best way of overseeing the technology, with the key principle being “if you use an AI, you are responsible for the consequences”.

The TUC is calling for a right to explainability – so that workers are able to understand how technology is being used to make decisions about them – and a statutory duty for employers to consult before new AI is introduced.
