the healthcare experience, they also cite concerns regarding data privacy, risks of the technology causing unexpected harm, and more. Forrester's Generative AI Impact On Clinicians: Bringing The Fever Down study highlights that 40 per cent of physicians claim AI is overhyped and will not meet expectations. An October 2024 survey by The Alan Turing Institute found that while 52 per cent of UK doctors are optimistic about AI's potential in healthcare, nearly one-third do not fully understand the risks and almost 70 per cent have not received adequate training on their responsibilities when using these systems.

Patients are similarly sceptical. PwC's Healthcare Survey 2024 suggests that only one in five is willing to use AI tools for routine tasks, such as booking appointments or refilling prescriptions. Enthusiasm among the remaining 80 per cent is "tempered by worries about data privacy and the quality of care". Likewise, the 'Patients' Trust in Health Systems to Use Artificial Intelligence' research paper, published in the Journal of the American Medical Association Network Open in 2025, indicates that almost 66 per cent of US adults have low trust in healthcare systems to use AI responsibly and 57.7 per cent doubt they would be protected against AI-related harm.

"There are two major reasons individuals and organisations may not trust AI: concerns it may cause harm and fear of the technology taking their jobs," says Rhew.

To overcome concerns about AI replacing humans, the healthcare sector must address the widening AI skills gap, advises Rhew. "We have to develop AI skilling and reskilling programmes that allow individuals to secure a job in a world that is becoming increasingly AI-enabled," he says. "The first step is to improve AI literacy in the workforce. The second is to train individuals on how to use AI, starting with how to do prompt engineering. And the third is to define new AI-enabled job requirements and develop career programmes that allow people to secure these roles."

Building trust in AI is also essential to drive the widespread adoption of the technology. "People need to feel confident that AI will not cause harm to individuals and society," says Rhew. "Operationalising responsible AI principles, transparency of goals and processes, and continued multi-stakeholder dialogue will help with this."

In 2024, Microsoft joined forces with 16 healthcare providers and two community health organisations to form the Trustworthy & Responsible AI Network (TRAIN) to make high-quality, safe and trustworthy AI tools equally accessible to every healthcare organisation.

"TRAIN is a healthcare system-led consortium whose primary aim is to operationalise responsible AI principles in a time-, resource- and cost-efficient manner," says Rhew. "Members may apply three approaches to accomplish this goal: one, leveraging technologies that promote and enable responsible AI use; two, redistributing workloads involved with testing; and three, monitoring AI models through standardised collaborations with other TRAIN members. They can also partner with other members and AI developers to share the cost, time and resources involved with testing and monitoring AI models."

Today, approximately 50 health systems in the USA and several more in Europe are members of TRAIN.

"When it comes to AI's tremendous capabilities, there is no doubt the technology has the potential to transform healthcare. However, the processes for implementing the technology responsibly are just as vital.
By working together, TRAIN members aim to establish best practices for operationalising responsible AI, helping improve patient outcomes and safety while fostering trust in healthcare AI."

Physicians can use Microsoft's new Dragon Copilot to expedite tasks such as writing up notes following consultations with patients