AI disempowers logistics workers while intensifying their work
Conversations on the algorithmic management of work largely revolve around unproven claims about productivity gains or job losses – less attention is paid to how AI and automation negatively affect low-paid workers
Marc Francis spent the past few years of his life governed by a broken route planner. Time after time, the driver, who until recently delivered for Parcelforce, was handed ever harder or outright unfeasible routes on his shifts.
And when the system broke, failed to bring up an address or produced a route that meant he missed his short delivery window, any failure to deliver was docked from his own pay as an independent contractor. Other errors – he claimed the system was “riddled with faults” – meant the automated pay system would wrongly dock his wages.
Francis, who is now a leading plaintiff in an Uber-style case against Parcelforce’s classification of its drivers as self-employed, told Computer Weekly in February that the company’s use of automation had, if anything, made the job deliver “the worst exploitation in my life”.
His experience is the norm of modern work for many workers. Much of the conversation around algorithmic management, automation and AI in the workplace focuses on job replacement or efficiency – from think pieces prognosticating on whether AI or automation will replace people’s jobs in future to the potential for AI to hike productivity.
But for all those future-focused pieces, there is markedly less discussion of the actual impact this existing technology is already having on the ground, largely on low-paid workers.
Automating disempowerment
In June, a report from the Global Partnership on Artificial Intelligence (GPAI) tried to fill that gap. By interviewing Amazon managers and frontline workers, the researchers established that the company’s cutting-edge use of AI and automation in the workplace had already had huge effects on the workforce. In particular, it had undermined pay and conditions by setting undeliverable targets for staff, harvested data from workers without their full knowledge or consent, and made the human workforce more expendable.
“At companies like Amazon, systems generate data that is specifically used to set people’s performance targets, typically gathered from things like workers’ scanners as they move around the warehouse,” explains Martha Dark, founder and director of Foxglove, a legal non-profit that specialises in representing workers in disputes with tech giants. “The targets that they’re then given incentivise extremely high work rates that are often frankly impossible to meet, or cause serious harm if people try to reach them.”
Adrienne Williams, a former Amazon driver in the US and now a research fellow at the Distributed AI Research Institute, said: “When I was an Amazon driver, we would tell new people to slow down because they were going to screw themselves in a week or two.
“If I was able to deliver, let’s say, 300 packages in eight hours on Tuesday, then the expectation was that I was going to deliver 310 on Wednesday, and then 315 on Thursday. And there’s no shut off valve to say, ‘This is the max’.”
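That ratchet, where yesterday’s output quietly becomes today’s floor, is simple to sketch in code. The snippet below is purely illustrative – the function, the 2% ratchet rate and the numbers are assumptions made for the example, not details of Amazon’s actual target-setting software – but it captures the one-way mechanism Williams describes: targets only ever rise.

```python
# Illustrative only: a hypothetical ratcheting target-setter of the kind
# Williams describes. The names and numbers here are assumptions, not
# details of any real system.

def next_target(previous_target: int, delivered: int,
                ratchet: float = 1.02) -> int:
    """Return tomorrow's package target.

    Hit or beat the target and the bar rises by a fixed percentage of
    what was actually delivered; miss it and the bar stays put. There
    is no cap - no 'shut-off valve' saying 'this is the max'.
    """
    if delivered >= previous_target:
        return int(delivered * ratchet)  # beat it? raise the bar
    return previous_target               # missed it? the bar never falls

# A driver who delivers 300 packages on Tuesday watches the target creep
# upwards every day they manage to keep pace:
target = 300
for day, delivered in [("Wed", 306), ("Thu", 312), ("Fri", 310)]:
    target = next_target(target, delivered)
    print(day, "target:", target)  # Wed: 312, Thu: 318, Fri: 318 (held)
```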
But the GPAI report went further than analysing the material changes to work, looking into the less obvious changes to the workplace. Beyond the effect on pay and conditions, the use of AI in the workplace – particularly the scale of the procedures forced onto workers and of the surveillance used – had removed agency from the workforce.
The presumed objectivity that comes with automated decision-making also makes it close to impossible for workers to challenge decisions that go against them, something Computer Weekly has covered in the past.
When she worked at Amazon, Williams and her colleagues had no way to challenge or rectify clear errors in the route schedules, demands or GPS systems used by the company. Sometimes, clearly unsafe truck routes would go unrectified despite countless complaints, as there was no clear way to challenge the automated system or reach anyone senior enough to change it.
Craig Gent, a researcher, writer and editor based in Leeds, and the author of Cyberboss: The rise of algorithmic management and the new struggle for control at work, says: “One of the things that companies that make algorithmic technologies like to sell them on is the objectivity of the data and the decisions that they produce, which is supposed to strip out any sort of politics or contestation in favour of a nice, streamlined, data-driven decision.
“In actuality, what happens is that it bestows an enormous amount of power on algorithms and their decisions, in a way that is completely disempowering to workers and that affects managers, who are no wiser to the inner workings of these systems than workers are.”
Dealing with AI harms
You don’t have to look far to find a score of scandals around automated decision-making, from Uber drivers being auto-fired by “racist” facial-identification software to software designed to spot benefit fraud wrongly cutting people off from their only financial lifeline. In each case, the affected individual’s ability to contest the decision is severely limited, if not non-existent.
This applies even to technology that is comparatively simple and more flagrantly broken – such as the Horizon software used by the Post Office, which led to a nearly 20-year battle by affected sub-postmasters to convince those in power that they had been mistreated.
“Proper checks aren’t being done to ensure the software works and doesn’t cause harm – instead, it’s being rolled out far too soon, without proper concern for worker health and safety,” says Dark.
One US report found that workplace injury rates at certain Amazon fulfilment centres were three times the national average for warehouses.
But despite that, according to several of those Computer Weekly spoke to, the last UK government’s attempts to get to grips with this technology, particularly AI, have largely focused on industrial self-regulation.
From improving the transparency and explainability of black-box algorithms to worker monitoring statements – which would let staff and observers know what surveillance and productivity software is being used, what data is being collected and whether it relies on automated decision-making – there are policies that could help ameliorate the impact.
It’s not just government that has struggled to get to grips with the technology. Williams points out that trade unions have made little headway on shaping the actual ways in which AI and algorithms are used in these workplaces.
She notes that while unions in the US have fought against the use of in-cab cameras to monitor delivery drivers, there has been little to no discussion of the outward-facing cameras on trucks. Amazon, for example, uses internal and external cameras from Netradyne to monitor its drivers and film their routes. Hyundai also recently invested in Netradyne. The data it collects from its huge range of dash-cams could be used in future autonomous-driving and driver-assist systems by creating better digital maps for those systems to use.
“I call them zombie trainers – someone who is training your AI systems who doesn’t know they’re doing it,” says Williams. “You’re doing a second, hidden job, and you don’t realise it.”
Flattening human labour
That potential use of drivers as data harvesters hints at the fact that the future of much of this low-paid labour could lie in the basic work of maintaining and training AI models.
“Labour costs are much lower in the Philippines and in India,” explains Carl-Benedikt Frey, an associate professor of AI and work at the Oxford Internet Institute. “So, if generative AI reduces productivity differentials between people, it will give [companies] the opportunity to tap into cheap labour in other places.”
It’s a process that has already begun for “clickworkers”, a nickname for the huge swathe of low-paid workers – from content moderators and autonomous vehicle trainers to microworkers who spend their time answering surveys – who train the algorithms and produce the data that powers AI systems. Those workers frequently earn less than the minimum wage – some big firms pay an average of $2 an hour – and are sometimes paid in gift cards rather than cash.
And that underlines something mentioned time and again by those Computer Weekly spoke to for this article: the impact of AI and automation on workplaces is less about replacing human labour than about changing and flattening it – turning workers themselves into robots rather than replacing them with robots, as one interviewee put it.
“It’s about optimising work,” Gent says. “And that, from the employers’ perspective, means reducing the uncertainty that workers, by virtue of being human, introduce into what might otherwise be a finely calibrated calculus of making money.”
Computer Weekly contacted Parcelforce for comment but received no on-the-record reply.
In response to the GPAI report, an Amazon spokesperson told Computer Weekly that the company aims to create both the safest and most technologically advanced workplaces on Earth, and that any technology it designs is intended to create a better work environment and augment employees’ capabilities, rather than replace them.
“In our fulfilment and logistics operations, we use software and hardware to automate the most difficult and repetitive tasks, reducing mental and physical stress for employees – which means we have 50% fewer injuries than other retail and logistics businesses in the UK,” they said.
“The use of state-of-the-art robotics has cut down on walking time in our fulfilment centres and increased operational efficiency, while also creating a need for more skilled jobs, such as engineers to operate and maintain the advancements.
“We listen to, and regularly act on feedback and suggestions from our employees, and our open-door policy encourages them to bring their comments, questions and concerns either directly or anonymously.”
On the use of technology to coordinate driver routes and deliveries, the spokesperson added that several factors shape drivers’ on-road experience, with the aim of making their work feel achievable and rewarding.
“Amazon continues to invest in route design and technology that accurately accounts for the complexities drivers face on the road, like the type of delivery location, walking distance, getting in and out of the vehicle, the size and weight of packages, and environmental factors like weather,” they said. “Driver feedback is at the heart of our continuous improvement mindset as we build safe, simple and sustainable routes.”
Read more about AI and automation
- AI interview: Krystal Kauffman, lead organiser, Turkopticon: Remote Mechanical Turk workers are responsible for training artificial intelligence algorithms and completing other data-related business processes – we hear about the workplace issues they face.
- TUC publishes legislative proposal to protect workers from AI: Proposed bill for regulating artificial intelligence in the UK seeks to translate well-meaning principles and values into concrete rights and obligations that protect workers from systems that make ‘high-risk’ decisions about them.
- Creative workers say livelihoods threatened by generative AI: Computer Weekly speaks with various creative workers about the impact generative artificial intelligence systems are having on their work and livelihoods.