1 in 5 UK GPs could be using AI in their practice despite lack of guidance or clear work policies

Publicly released:
International
Photo by Choong Deng Xiang on Unsplash

A fifth of family doctors (GPs) in the UK seem to have readily incorporated AI into their clinical practice, despite a lack of any formal guidance or clear work policies on the use of these tools, according to international researchers. The team surveyed over 1000 UK GPs on their use of AI chatbots such as ChatGPT, Bing AI, Google’s Bard, or “other”, and asked what they used these tools for. Of those who reported using AI, more than a quarter used the tools to generate documentation, a similar proportion said they used them to generate a differential diagnosis, and a quarter said they used them to suggest treatment options. While this kind of survey may not be representative of all UK doctors, the team says the results show doctors and medical trainees need to be fully informed about the pros and cons of AI, particularly given the risks of inaccuracies, biases, hallucinations, and the potential to compromise patient privacy.

Media release

From: BMJ Group

BMJ HEALTH & CARE INFORMATICS

Externally peer reviewed? Yes
Evidence type: Observational; survey data
Subjects: People

Fifth of GPs using AI despite lack of guidance or clear work policies, UK survey suggests

Doctors and medical trainees need to be fully informed about pros and cons of these tools

A fifth of family doctors (GPs) seem to have readily incorporated AI into their clinical practice, despite a lack of any formal guidance or clear work policies on the use of these tools, suggest the findings of an online UK-wide snapshot survey, published in the open access journal BMJ Health & Care Informatics.

Doctors and medical trainees need to be fully informed about the pros and cons of AI, especially because of the inherent risks of inaccuracies (‘hallucinations’), algorithmic biases, and the potential to compromise patient privacy, conclude the researchers.

Following the launch of ChatGPT at the end of 2022, interest in large language model-powered chatbots has soared, and attention has increasingly focused on the clinical potential of these tools, say the researchers.

To gauge current use of chatbots to assist with any aspect of clinical practice in the UK, in February 2024 the researchers distributed an online survey to a randomly chosen sample of GPs registered with the clinician marketing service Doctors.net.uk. The survey had a predetermined sample size of 1000.

The doctors were asked if they had ever used any of the following in any aspect of their clinical practice: ChatGPT; Bing AI; Google’s Bard; or ‘Other’. And they were subsequently asked what they used these tools for.

Some 1006 GPs completed the survey: just over half the responses came from men (531; 53%), and a similar proportion of respondents (544; 54%) were aged 46 or older.

One in five (205; 20%) respondents reported using generative AI tools in their clinical practice. Of these, more than 1 in 4 (29%; 47) reported using these tools to generate documentation after patient appointments and a similar proportion (28%; 45) said they used them to suggest a differential diagnosis. One in four (25%; 40) said they used the tools to suggest treatment options.

The researchers acknowledge that the survey respondents may not be representative of all UK GPs, and that those who responded may have been particularly interested in AI—for good or bad—potentially introducing a level of bias into the findings.

Further research is needed to find out more about how doctors are using generative AI and how best to implement these tools safely and securely into clinical practice, they add.

“These findings signal that GPs may derive value from these tools, particularly with administrative tasks and to support clinical reasoning. However, we caution that these tools have limitations since they can embed subtle errors and biases,” they say.

And they point out: “[These tools] may also risk harm and undermine patient privacy since it is not clear how the internet companies behind generative AI use the information they gather.

“While these chatbots are increasingly the target of regulatory efforts, it remains unclear how the legislation will intersect in a practical way with these tools in clinical practice.”

And they conclude: “The medical community will need to find ways to both educate physicians and trainees about the potential benefits of these tools in summarising information but also the risks in terms of hallucinations [perception of non-existent patterns or objects], algorithmic biases, and the potential to compromise patient privacy.”

Attachments

Note: Not all attachments are visible to the general public. Research URLs will go live after the embargo ends.

Research: BMJ Group, web page. The URL will go live after the embargo lifts.
Journal/conference: BMJ Health & Care Informatics
Research: Paper
Organisation/s: Uppsala University, Sweden
Funder: The Research Council on Health, Working Life and Welfare
Media Contact/s
Contact details are only visible to registered journalists.