
Trust and Artificial Intelligence in Healthcare

Which factors influence human trust in artificial intelligence? (Credit: Getty Images)

What factors influence human trust in artificial intelligence (AI)? And how can trust in AI be optimized to improve medical decision-making and enhance patient outcomes? These are questions that need to be answered before AI can realize its full potential in the healthcare workspace, as experts explained during a conference session at Virtual CES 2021.

AI is a rapidly developing technology with the potential to disrupt healthcare on a massive scale. Machine learning algorithms are increasingly capable of performing tasks with greater accuracy, efficiency and effectiveness than healthcare professionals, including everything from triaging patients for medical attention to identifying trends in huge quantities of clinical data. On the flip side, AI-powered systems still lack “human” qualities that are considered important in the provision of healthcare, such as trustworthiness and the ability to express empathy and compassion. This has made many clinicians cautious about the use of AI in medical diagnosis.

Pat Baird, Regulatory Head of Global Software Standards for Philips and one of the thought leaders taking part in the CES webinar, said he believed three different categories of trust need to be addressed:

>> The first is technical trust, related to the data used to train the AI.

>> The second is human trust, related to the usability of the system.

>> The third is regulatory trust, related to frameworks and standards, as well as the ethical, legal and social implications of AI.

According to Baird, those developing AI systems need to eliminate bad data as much as possible and ensure their algorithms are trained on non-biased data samples. Such systems should also be user-friendly, with an intuitive interface that helps to overcome human-machine barriers. Leveraging input from medical professionals during system development could be critical in helping to foster trust at an early stage.

Those developing AI systems need to eliminate bad data as much as possible and ensure their algorithms are trained on non-biased data samples. (Credit: iStock)
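One concrete way to act on this advice is to audit a training dataset before any model is built. The sketch below, written in Python with pandas, checks for missing values, under-represented patient groups, and large differences in outcome prevalence across groups. The column names, thresholds and toy data are hypothetical placeholders chosen for illustration; they are not details from Philips or the webinar.

```python
# Minimal sketch of a pre-training data audit, one way to act on Baird's point
# about biased training samples. Column names ("sex", "age_group", "label") and
# thresholds are hypothetical placeholders, not from the article.
import pandas as pd

def audit_training_data(df: pd.DataFrame, group_cols: list[str], label_col: str,
                        min_group_share: float = 0.05) -> list[str]:
    """Return human-readable warnings about missing data and group imbalance."""
    warnings = []

    # 1. Flag columns with missing values ("bad data" that needs cleaning or review).
    missing = df.isna().mean()
    for col, share in missing[missing > 0].items():
        warnings.append(f"Column '{col}' has {share:.1%} missing values")

    # 2. Flag demographic groups that are badly under-represented in the sample.
    for col in group_cols:
        shares = df[col].value_counts(normalize=True)
        for group, share in shares.items():
            if share < min_group_share:
                warnings.append(f"Group '{group}' in '{col}' is only {share:.1%} of the data")

    # 3. Flag large differences in outcome prevalence across groups,
    #    which can hint at label bias or sampling bias.
    for col in group_cols:
        prevalence = df.groupby(col)[label_col].mean()
        if prevalence.max() - prevalence.min() > 0.2:
            warnings.append(f"Label prevalence varies widely across '{col}': "
                            f"{prevalence.round(2).to_dict()}")

    return warnings

# Example usage with a toy dataset:
df = pd.DataFrame({
    "sex": ["F", "F", "M", "M", "M", "M", "M", "M", "M", "M"],
    "age_group": ["18-40"] * 5 + ["40-65"] * 5,
    "label": [1, 0, 1, 1, 0, 1, 1, 1, 0, 1],
})
for w in audit_training_data(df, group_cols=["sex", "age_group"], label_col="label"):
    print("WARNING:", w)
```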

Quality Control

A clear set of regulations and standards is also important when it comes to establishing trust in AI. Baird said:

“Standards can help set the expectation of what ‘good’ looks like. There is so much hype and so many questionable claims about AI products and applications right now that we need standards to help differentiate between the good and the bad. We know how to do quality controls, period, regardless of the product or the type, and I think we can reuse a lot of that. The details are different, but I think overall we have a good head start.”

Regulation of AI is complicated by the fact that not all AI tools are considered medical devices (and therefore aren’t regulated by bodies such as the United States Food and Drug Administration). Companies aren’t obliged to share details about the role of specific software within AI systems either. Nevertheless, providing data and information relating to performance, intended use and input requirements can help to increase trust, as can regular software evaluation.
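The kind of recurring evaluation described above can be as simple as re-scoring a model on a labeled hold-out set and saving a dated, model-card-style report alongside the intended-use statement. The sketch below assumes a scikit-learn-style classifier and two common metrics; the interface, metric choices and file naming are illustrative assumptions, not regulatory guidance.

```python
# Illustrative sketch of a recurring performance check: re-evaluate a model
# against a labeled hold-out set and record the results next to the
# intended-use statement. Model interface, metrics and file layout are
# assumptions for illustration only.
import json
from datetime import datetime, timezone

from sklearn.metrics import roc_auc_score, accuracy_score

def evaluate_release(model, X_holdout, y_holdout, intended_use: str, version: str) -> dict:
    """Score the model on held-out data and return a model-card-style report."""
    scores = model.predict_proba(X_holdout)[:, 1]   # probability of the positive class
    preds = (scores >= 0.5).astype(int)             # fixed decision threshold, stated up front

    report = {
        "model_version": version,
        "evaluated_on": datetime.now(timezone.utc).isoformat(),
        "intended_use": intended_use,
        "n_samples": int(len(y_holdout)),
        "metrics": {
            "auroc": float(roc_auc_score(y_holdout, scores)),
            "accuracy": float(accuracy_score(y_holdout, preds)),
        },
    }
    # Persisting each dated report builds a trail that reviewers and users can inspect.
    with open(f"evaluation_{version}.json", "w") as f:
        json.dump(report, f, indent=2)
    return report

# Example usage with a toy classifier standing in for the real product model:
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])
print(evaluate_release(clf, X[150:], y[150:],
                       intended_use="Illustrative example only; not a medical device.",
                       version="0.1.0"))
```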

Christina Silcox, a digital health fellow at the Duke-Margolis Center for Health Policy and another webinar participant, said:

“The key to trustworthy AI is for manufacturers to build AI that deserves trust.”
