AI bosses are creating a new problem for gig workers

Publicly released: Australia; NSW; VIC

Macquarie University research shows efforts to make AI management more transparent may be increasing mental strain for gig workers.

News release

From: Macquarie University

For millions of gig workers driving for companies such as Uber Eats, DoorDash and Deliveroo, there is no human manager to call, no supervisor to appeal to and no office to walk into. Decisions about pay, performance, penalties and access to work are made by algorithms.

Increasingly, those algorithms are trying to explain themselves. This push towards ‘explainable AI’ is often promoted as a way to improve fairness and trust. But new Macquarie University research suggests explaining too much can backfire.

A large experimental study involving more than 1,100 gig workers examined how different types of AI explanations affect workers’ acceptance of algorithmic decisions and their relationship with platforms. The research found transparency helps up to a point, but piling on layers of explanation can overwhelm workers, reduce trust and damage management relationships.

“We often assume transparency is a universal remedy for AI scepticism,” says Associate Professor Miles Yang from Macquarie Business School. “But when explanations are layered indiscriminately, you aren’t empowering workers. You’re increasing their cognitive burden.”

Gig workers operate under constant time pressure, often juggling multiple apps and income streams, with little or no access to human managers when something goes wrong. In this context, AI systems don’t just support management – they are management.

The study looked at common explanation styles used by algorithmic systems. Some explanations are local, offering detailed, case-specific information such as exactly how late a delivery was. Others are counterfactual, describing hypothetical alternatives, such as what would have happened if a worker had taken a different action.

Individually, both types of explanation can be useful. The problem arises when platforms combine both at once.

“When workers are asked to analyse detailed performance data while simultaneously processing ‘what-if’ scenarios, the mental effort outweighs the benefit,” says study co-author Associate Professor Candy Ying Lu. “Instead of feeling informed, workers feel overwhelmed.”

The research shows acceptance of AI decisions plays a central role in shaping trust and perceptions of fairness. But acceptance is driven by whether explanations are cognitively manageable, not by the volume of information provided.

The findings have implications for Australia’s ongoing debates about gig work regulation and algorithmic management. While recent reforms focus on transparency and accountability, the research highlights a blind spot: AI systems can meet transparency requirements and still make work harder.

“If AI is going to act as a boss, it needs to communicate like a good one,” Associate Professor Lu says. “Clear, concise explanations matter more than raw data dumps.”

The researchers say explainable AI remains important, particularly where income and job security are affected, but explanation design must reflect how people actually process information under pressure.

Multimedia

Dr Miles Yang and Dr Candy Lu
Journal/conference: Journal of Management Studies
Research: Paper
Organisation/s: Macquarie University, Monash University
Funder: This work was supported by the UKRI Economic and Social Research Council [grant number ES/Z504713/1] as part of the ESRC Centre for Digital Futures at Work and also received funding from Macquarie University’s Research Acceleration Support Scheme [grant number 344619096].