Calls for global guidelines for safer AI use in medicine

Publicly released:
Australia; International; VIC; QLD; SA

A review led by Adelaide University researchers has found there's a lack of clear guidelines around the early testing of AI tools in health clinics, during a process known as silent trials. The global scoping review looked at this early phase of testing and revealed huge variations in the way the trials are being conducted and the measures used to assess the effectiveness of the tools.

News release

From: Adelaide University

A world-first review led by Adelaide University researchers has found there's a lack of clear guidelines around the early testing of AI tools in health clinics, during a process known as silent trials.

The global scoping review looked at this early phase of testing and revealed huge variations in the way the trials are being conducted and the measures used to assess the effectiveness of the tools.

“This lack of guidance around silent trials is concerning as AI models can be unpredictable and difficult to use in real-world settings if they haven’t been tested thoroughly,” said corresponding author Lana Tikhomirov, a PhD candidate from Adelaide University’s Australian Institute for Machine Learning.

“Some of the trials in our review focused on AI metrics that weren’t clinically useful, while others looked at the bare minimum with no details on how the model performed in a clinical setting.

“If these AI tools are rolled out without comprehensive testing and things go wrong, it could expose both patients and clinicians to harmful advice.”

Silent trials are when AI models are tested in their intended clinical setting, but the results don't influence patient care because they aren't shown to the clinical team at the time of treatment.

Currently there are no formal guidelines on how to conduct these trials, which researchers say are critical to ensure an AI tool will be useful and beneficial in a local setting.

“Silent trials are a low-risk way to test technology without compromising patient outcomes,” said co-author Associate Professor Melissa McCradden, who is the Deputy Director of Adelaide University’s Australian Institute for Machine Learning, AI Director at the Women’s and Children’s Health Network and Hospital Research Foundation Fellow in Paediatric AI Ethics.

“We know that many AI models fail when they’re introduced into real-world settings and an AI tool that works in one hospital may not work in another.

“Conducting comprehensive silent trials that adhere to a clear set of international guidelines is critical if we want to successfully take AI tools from bench to bedside.”

The scoping review has been published in Nature Health and is part of a larger study looking at silent phase evaluations for healthcare AI.

Project CANAIRI – Collaboration for Translational Artificial Intelligence Trials – is focusing on developing guidance for silent trials to ensure health settings in which AI is intended to be used are ready and able to do so in a beneficial way.

“Ultimately, we would like to see silent trials become a mandatory part of the process of adopting AI tools in medicine,” said Associate Professor McCradden, who is the lead of Project CANAIRI.

Journal/conference: Nature Health
Research: Paper
Organisation/s: Adelaide University, The University of Queensland, RMIT University, The University of Melbourne
Funder: Open access funding provided by Adelaide University.