Clinicians could be 'blinded' by AI technology when making assessments, HEE report warns
Physiotherapists and other clinicians who are expected to weave artificial intelligence (AI) technology into their practice should be given more specialist training than their counterparts who do not have the same expectations placed on them.
That is one of the key messages contained in a report published yesterday (25 October) by Health Education England (HEE) and the NHS AI Lab. The report, titled Developing healthcare workers’ confidence in AI (Part 2), calls for training in AI to be made available to all health and care staff.
AI technologies are already helping clinicians in a number of NHS trials, and could aid attempts to manage diseases such as cancer in the future. The timely report sets out a series of recommendations for education and training providers to enable them to plan, resource, develop and deliver new AI training packages for health and care staff.
The report breaks down the training requirements for AI into five ‘archetypes’, each encompassing a number of varied roles currently being undertaken in the NHS.
The archetypes are:
shapers (who set the direction for AI policy and governance at a national level)
drivers (who champion and lead AI development and deployment at a regional or local level)
creators (who create AI technologies for use in healthcare settings)
embedders (who implement, evaluate and monitor AI technologies deployed in healthcare settings)
users (who use AI technologies in healthcare settings)
Individuals in each archetype will have different knowledge and skills requirements, and will need an education package tailored to their roles. For instance, a driver (who champions and leads AI development and deployment at a regional or local level) would have different educational needs to a creator (who creates AI technologies for use in healthcare settings).
Eric Topol – whose 2019 review advised ministers on how to make the NHS a world leader in using digital technologies to benefit patients – said: ‘This collaborative research from HEE and the NHS AI Lab represents a significant step forward in developing confidence in AI in the healthcare workforce.’
Dr Topol, a cardiologist, geneticist, and digital medicine researcher, added: ‘It is a model for other countries to adopt as we move forward with implementing AI in medical practice.’ To download a copy of Dr Topol’s review, visit: https://topol.hee.nhs.uk/the-topol-review/
An earlier report, produced by the same team, found that most clinicians were unfamiliar with AI technologies. Without appropriate training and support, patients would not equally share in the benefits offered by AI as it is deployed in the NHS in the years ahead, it stressed.
Brhmie Balaram, head of AI research and ethics at the NHS AI Lab, said: ‘For the NHS to wholly embrace new AI technologies so they are adopted equitably across the country, it is vital that we ensure all our staff receive appropriate training in AI.
‘This project is only one in a series at the NHS AI Lab to help ensure the workforce and local NHS organisations are ready for the further spread of AI technologies that have been found to be safe, ethical and effective.’
Risk of ‘de-skilling’
The report acknowledges that there is a risk the clinical workforce could become de-skilled if it comes to rely on AI technologies to carry out tasks that clinicians previously undertook.
‘Deskilling could affect non-experts, who may defer to AI when completing tasks outside of their area of expertise, as well as experts, who may be unable to maintain and enhance their own clinical judgement skills and confidence if they come to depend on AI technologies.
‘The risk of deskilling should be considered when AI technologies are deployed in healthcare settings, including through taking appropriate actions to safeguard clinical expertise.’
The risk can be avoided in part by ‘careful AI product design’, such as considering how information is displayed to users, the report suggests.
‘It will also depend on cautious integration of products into clinical pathways; for example, considering when AI technologies are used in a workflow and whether clinicians are “blinded” to AI outputs when making their own clinical assessments.'
The report adds: ‘It may be appropriate that a proportion of cases are handled manually both to retain skills and to allow ongoing monitoring of the AI against human performance on current clinical data.’
The report and partnership with Health Education England is part of the NHS AI Lab’s AI Ethics Initiative, which was introduced to support research and practical interventions that can strengthen the ethical adoption of AI in health and care.
The report was written by Dr Mike Nix, a clinical fellow for AI and workforce at HEE and the NHS AI Lab; George Onisiforou, a research manager for the AI Ethics Initiative at the NHS AI Lab; and Dr Annabelle Painter, a clinical fellow for AI and workforce at HEE and the NHS AI Lab.
To download the full version of the report, visit: https://digital-transformation.hee.nhs.uk/binaries/content/assets/digital-transformation/dart-ed/developingconfidenceinai-oct2022.pdf
Author: Ian A McMillan