With rapidly aging populations across the globe and an increasingly short supply of healthcare personnel, healthcare systems are struggling to provide quality care to everyone who needs it. To address this issue, new and improved Artificial Intelligence (AI) systems have the potential to relieve the burden on healthcare systems and to improve patient care. However, implementing AI in healthcare must be done with caution: managing patient perceptions of AI and developing human-like linguistic features in AI are crucial to fully leveraging the benefits of AI in this sector.
When people think of AI, one common perception is of an all-knowing, genius entity whose competencies far outpace those of humans. However, this impression does not hold universally across all domains. In the research I lead in our lab, people consistently find AI doctors to be less competent and trustworthy than human doctors, even when the AI doctors perform to the same standard. Furthermore, when AI doctors do not mimic the speech patterns of their patients (a behavior common in human-to-human communication), people view them as even less competent and trustworthy.
These results are concerning, because the quality of patient care stems largely from the quality of patient-doctor relationships: relationships characterized by a high degree of trust lead to better patient outcomes. Negative perceptions of AI doctors may therefore make patients hesitant to follow their healthcare recommendations, or even unwilling to seek medical treatment from healthcare systems that use AI.
Fortunately, there are a couple of straightforward ways to at least partially mitigate the issue. The first is to educate patients on the strengths AI has in providing healthcare. Patients trust human doctors to a greater extent because of the clear schemas they hold of doctors: doctors must undergo rigorous education, training, and examination, and they promise to uphold ethical and moral standards when providing healthcare services. Most people have much vaguer schemas of AI in healthcare, and thus do not know how to think about AI in this domain or how much to trust it. Briefing patients on the AI tools they will be interacting with is therefore important, for example by highlighting that a particular AI has been trained on massive amounts of data that no single doctor could ever be exposed to, or that its diagnostic accuracy rates consistently outperform those of humans.
The second method is to develop AI tools that are linguistically similar to humans. One relevant linguistic phenomenon is linguistic alignment: the tendency for people to mimic the linguistic behavior of their conversational partner, such as reusing the sounds, words, and sentence structures that their partner uses. Greater linguistic alignment has been found to increase trust and sociality between dialogue partners, and in my research AI doctors that linguistically aligned with their patients were perceived more positively than those that did not. Developing AI that can analyze patients' linguistic behavior and then use similar styles of speech when interacting with them could therefore increase trust and the overall quality of the patient-doctor relationship, leading to better health outcomes overall, as sketched below.
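To make the idea concrete, here is a minimal sketch of one way an AI system could favor replies that reuse a patient's own wording. The function names and the simple token-overlap score are illustrative assumptions, not the measures used in our studies; a fuller treatment of linguistic alignment would also consider syntactic and phonetic similarity.

```python
# Toy illustration of lexical alignment: score candidate replies by how much
# they reuse the patient's own words, then prefer the most aligned reply.
# This is a hypothetical sketch, not the method from the research above.
import re
from collections import Counter


def tokenize(text: str) -> list[str]:
    """Lowercase a message and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())


def lexical_alignment(patient_msg: str, response: str) -> float:
    """Return the share of response tokens that also occur in the patient's message (0-1)."""
    patient_words = Counter(tokenize(patient_msg))
    response_tokens = tokenize(response)
    if not response_tokens:
        return 0.0
    shared = sum(1 for tok in response_tokens if tok in patient_words)
    return shared / len(response_tokens)


def pick_aligned_response(patient_msg: str, candidates: list[str]) -> str:
    """Choose the candidate reply that best mirrors the patient's wording."""
    return max(candidates, key=lambda c: lexical_alignment(patient_msg, c))


if __name__ == "__main__":
    patient = "I've had a pounding headache and feel dizzy when I stand up."
    candidates = [
        "Cephalalgia with orthostatic vertigo warrants clinical evaluation.",
        "A pounding headache and feeling dizzy when you stand up can be related; "
        "let's look into what might be causing them.",
    ]
    best = pick_aligned_response(patient, candidates)
    print(f"alignment score: {lexical_alignment(patient, best):.2f}")
    print(best)
```

In this toy example the jargon-heavy reply scores near zero while the reply that echoes "pounding headache", "dizzy", and "stand up" scores highest, which is the kind of surface-level mirroring that alignment research suggests patients respond to positively.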
All in all, encountering AI in healthcare settings will increasingly become the norm rather than the exception, making it crucial to understand patient perceptions of this technology in order to provide human-centered and effective healthcare. We strive to advance this goal at the International Research Centre for the Advancement of Health Communication (IRCAHC - https://www.polyu.edu.hk/ircahc/) within the Department of English and Communication at The Hong Kong Polytechnic University.