Primary Care | Sep 18, 2024

Doctors and medical trainees in UK must be fully informed about pros and cons of AI tools

One GP in five has already incorporated AI into their clinical practice – despite the absence of any formal guidance or clear work policies on the use of these tools. That is the finding of an online UK-wide snapshot survey, published in the open access journal BMJ Health & Care Informatics on 17 September.

Doctors and medical trainees need to be fully informed about the pros and cons of AI, especially because of the inherent risks of inaccuracies (‘hallucinations’), algorithmic biases, and the potential to compromise patient privacy, Charlotte Blease and her fellow researchers conclude.

Following the launch of ChatGPT at the end of 2022, interest in large language model-powered chatbots has soared, and attention has increasingly focused on the clinical potential of these tools, say the researchers.

To gauge current use of chatbots to assist with any aspect of clinical practice in the UK, in February 2024 the researchers distributed an online survey to a randomly chosen sample of GPs registered with the clinician marketing service Doctors.net.uk. The survey had a predetermined sample size of 1,000.

One GP in five is using AI despite lack of guidance or clear work policies, suggests survey


The doctors were asked if they had ever used any of the following in any aspect of their clinical practice: ChatGPT; Bing AI; Google’s Bard; or ‘Other’. And they were subsequently asked what they used these tools for.

In all, 1,006 GPs completed the survey: just over half of the respondents were men (531; 53%) and a similar proportion (544; 54%) were aged 46 or older.

One respondent in five (205; 20%) reported using generative AI tools in their clinical practice. Of these, more than one in four (47; 29%) reported using the tools to generate documentation after patient appointments, a similar proportion (45; 28%) said they used them to suggest a differential diagnosis, and one in four (40; 25%) said they used the tools to suggest treatment options.

The researchers acknowledge that the survey respondents may not be representative of all UK GPs, and that those who responded may have been particularly interested in AI – for good or bad – potentially introducing a level of bias into the findings. 


Need for further research

Further research is needed to find out more about how doctors are using generative AI and how best to implement these tools safely and securely into clinical practice, Dr Blease and colleagues note.

‘These findings signal that GPs may derive value from these tools, particularly with administrative tasks and to support clinical reasoning. However, we caution that these tools have limitations since they can embed subtle errors and biases,’ they state.

The authors point out: ‘[These tools] may also risk harm and undermine patient privacy since it is not clear how the internet companies behind generative AI use the information they gather. 

‘While these chatbots are increasingly the target of regulatory efforts, it remains unclear how the legislation will intersect in a practical way with these tools in clinical practice.’

Dr Blease and colleagues conclude: ‘The medical community will need to find ways to both educate physicians and trainees about the potential benefits of these tools in summarising information but also the risks in terms of hallucinations [perception of non-existent patterns or objects], algorithmic biases, and the potential to compromise patient privacy.’

Charlotte Blease is an associate professor in the Participatory eHealth & Health Data Research Group, Department of Women's and Children's Health, at Uppsala Universitet, Sweden. She is also based at the Department of Digital Psychiatry, Beth Israel Deaconess Medical Center, Harvard Medical School, in Boston, Massachusetts.

To access the full version of the article – titled ‘Generative artificial intelligence in primary care: an online survey of UK general practitioners’ (doi: 10.1136/bmjhci-2024-101102) – click here.

Author: Ian A McMillan