News release
Warning about rapid AI changes in the workplace
New research from Flinders University highlights the need for artificial intelligence (AI) systems to complement – not impede – worker safety and welfare in workplaces.
While Australia has taken commendable first steps towards responsible governance of AI, its current regulatory apparatus lacks the legally binding and workplace-specific mechanisms necessary to mitigate emerging risks, according to Future of Work expert Associate Professor Andreas Cebulla.
“The goal is not to eliminate AI's role, but to co-produce a workplace that reflects operational accountability,” he says.
In a new article in the Journal of Industrial Relations, Associate Professor Cebulla says that while early assessments of AI have focused on job automation and productivity gains, a growing body of evidence points to AI affecting workplace relationships, worker autonomy and psychosocial well-being.
The rapid take-off of AI technologies in Australian workplaces has included data entry automation, document processing, fraud detection and Generative AI tools.
“While promising operational efficiency, these innovations also introduce risks of algorithmic management, the erosion of tacit knowledge, digital incivility and the devaluation of human labour. Current governance frameworks fail to sufficiently address these relational harms,” he warns.
“Bridging this gap requires a shift in how AI is conceptualised, not just as a technical tool or economic input, but as a social actor with the power to shape working relationships, identities and hierarchies.
Drawing on national and international data, the latest research identifies AI-related risks that affect workplace dynamics and employee agency. It also identifies the integration of AI-related risks into Work Health and Safety (WHS) regulations as a key gap in Australia's policy response.
“We propose a framework for managing risks grounded in job crafting, participatory oversight and expanded WHS definitions. In doing so, we position the worker not as a passive recipient of AI impacts but as a co-designer of workplace transformation,” he says.
“The framework treats the workforce as co-designers, not end-users, of AI integration. Its mechanisms also build on existing industrial relations infrastructure, including union representation and safety committees.
“When job crafting is legitimised and supported, it enables workers to transform potential threats into sources of meaning and resilience.”
Where AI tools are optimised for organisational goals (efficiency, compliance), job crafting optimises for worker values (dignity, purpose, agency), the article says.
Associate Professor Cebulla says the framework addresses the core insight that AI's effects on work are “deeply social, often subtle, and frequently overlooked in both policy design and organisational strategy”.
“AI tools do not merely automate, they reconfigure. They change how decisions are made, who holds authority, how performance is interpreted, and what kinds of labour are seen as legitimate.
“As such, they must be governed not only through audits and algorithms but through social institutions, norms and participatory mechanisms that foreground the human experience of work.”
The new article, ‘AI and workplace relations: A WHS framework for managing relational risks in workplaces’ (2025) by Andreas Cebulla, has been published in the Journal of Industrial Relations. DOI: 10.1177/00221856251392987.
Associate Professor Andreas Cebulla is an affiliate of the Flinders Factory of the Future. He researches the future of work and technology, including the ethical use of artificial intelligence in workplaces, and the social impacts of automation and new technologies.