Editorial | Open Access | 21 Jan 2024

The problem with patient avatars and emotions

Andrew A. Gumbs, Sonia Roubeni, S. Vincent Grasso

Art Int Surg 2024;4:23-7. DOI: 10.20517/ais.2024.01 | © The Author(s) 2024.

THE ROLE OF EMOTION IN PATIENT INTERACTIONS

Human-to-human interaction and communication are deceptively complex with respect to how audio and visual inputs are detected and processed. For example, speech is a product of language, tone, amplitude, and style of delivery. Speech can be delivered smoothly or staccato, intentionally slow or erratic, as well as in other configurations. Visual inputs include facial gestures, body motion, and eye contact, among many others. The context of communication further complicates the process; where the communication takes place, such as on the battlefield, in the classroom, or in the physician's office, will affect the recipient's interpretation of the message. It has been widely demonstrated that infants are innately primed for language acquisition, enabling them to differentiate between all phonemes in the world's languages (approximately 150 distinct utterances)[1,2].

However, every human interaction is also an emotional interaction. The rise of patient avatars in the healthcare industry, built for patient triage and potentially for clinical management, raises questions about their benefits and associated risks[3]. Motivations for the development of such artificial intelligence (AI) are grounded in the fact that, in recent years, healthcare providers' compensation has not kept pace with rising inflation, leading to increasing shortages of personnel. AI is being positioned to fill this void. The rationale is that virtual avatars may be able to help triage patients and eliminate some of the more routine and tedious aspects of the patient-healthcare interaction.

One of the central questions raised by the introduction of avatars into the healthcare industry is whether they can validly and reliably detect a patient's emotional state, and whether their expressions appropriately match the situation[4]. For example, the ability of an AI to detect suicidal ideation in a patient and provide the compassionate response such a situation requires is currently unknown. Furthermore, avatar images have become so realistic and lifelike that some individuals, including hearing- or sight-impaired patients, may mistake them for real people. An avatar responding with inappropriate emotional cues could falsely reassure a patient with urgent needs, delaying medical attention with potentially deleterious ramifications.

REPLACEMENT OF HEALTHCARE WORKFORCE BY AI

The expansion of Conversational Computing, a branch of AI, within the healthcare vertical has witnessed a migration from simplistic chatbots to ultra-realistic human likenesses rendered as avatars. Present-day avatars can engage in multilingual conversations with configurable gesturing, resulting in hyper-realistic avatar-to-human interaction. Healthcare-related avatar uses include disease screening and administrative tasks such as appointment scheduling, surveying, and data gathering in general. The benefits of AI-driven avatars relate to healthcare economics and the continually rising costs of care delivery. Avatars possess attributes that are highly desirable from a workforce management perspective: they are multilingual, can operate within an omni-channel environment, possess longitudinal memory, and operate at scale around the clock. They also do not suffer from fatigue, burnout, anxiety, or the other common human conditions that inhibit productivity in work environments.

The COVID-19 pandemic imposed additional hardships on the global healthcare workforce. It led to increased worker anxiety, employee burnout, and workforce resignation, as well as a transition to a more decentralized model of healthcare delivery in which telemedicine platforms were introduced to bridge the physical divide between healthcare providers and remote patients. This transition from a centralized to a decentralized delivery model provided the opening for the aggressive utilization of avatars. Patients were now given direct access to their providers in a digital format. Interestingly, once healthcare systems began to recognize the value of telemedicine-based encounters, they soon realized that the pre-COVID-19 economics of healthcare delivery were shifting radically. Avatars, originating within this economic construct, would become an integral component of the workforce, either working alone or within a hybrid human-avatar model.

Unfortunately, as AI algorithms become more complex and intricate, the loss of healthcare workers becomes increasingly likely. This trend will accelerate as compensation for healthcare providers stagnates and hospital systems increasingly substitute AI-powered systems for human providers in the treatment of patients[5]. This dystopian eventuality carries many unforeseen risks that are not being analyzed before AI products are brought to market. Aside from potentially creating a void of human interaction in the workplace, in which more and more people are unemployed and potentially without a means to contribute to society and derive fulfillment from life, we do not fully understand the impact of AI-powered avatars and other applications on healthcare delivery in general, and on patient care specifically.

PATIENT AVATARS

Emotion analytics are highly complex and still in their infancy. We do not yet know the potential pitfalls of current AI technologies that claim to detect human emotions from facial expressions or to infer personality traits from scraped social media data[6]. We may have to play a disclaimer before every patient-avatar interaction; however, this can lead to attention fatigue, as seen when people tune out car alarms, airplane safety demonstrations, and their associated warnings.

Companies are being sold AI-based tools that purportedly predict behavioral outcomes, such as amicability or conscientiousness in the workplace, or point-of-purchase behaviors, all of which are notoriously difficult to measure. Informed healthcare entities that purchase such AI need to be able to ask the basic questions underlying the technology's assumptions in order to determine its legitimacy. Both the assumptions that underlie the algorithms and the methods themselves can be faulty or overly reductive. How the data outputs from AI are interpreted will determine whether the technology is useful or safe.

AI technology in healthcare should serve as an adjunct to human-to-human interaction, not a replacement for human workers[7]. The problem of autonomous actions replacing humans is not limited to non-interventional healthcare providers[8,9]. Surgeons are becoming increasingly aware of the possibility that more autonomously functioning surgical robots will be able to perform more procedures and could theoretically replace surgeons, interventional radiologists, cardiologists, pulmonologists, and gastroenterologists entirely[10]. This possibility is tempered by the fact that humans will still need to be trained in interventional procedures such as surgery and endoscopy and will need to be able to take over if future robots encounter technical issues.

The question is how to develop more autonomously functioning robots for procedures while continuing to train people to perform those procedures themselves. This existential tension applies throughout all aspects of life and poses an important ethical question that humans must address before we permit the continued integration of AI into our lives[11]. If you think it unrealistic that a robot will one day perform a surgical procedure autonomously, consider that planes already land on autopilot using systems termed Autoland. If we already entrust the lives of 300 passengers per flight to such systems, the regulatory hurdles for one patient will clearly not be insurmountable.

AI is particularly useful in giving doctors and nurses access to automatically generated checklists that can autonomously alert medical teams to anomalies requiring further testing, specialist consultation, transfer to the intensive care unit, or even surgery[12]. This should lead to fewer misdiagnoses and medical errors.
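A minimal sketch of what the simplest, rule-based version of such a checklist could look like follows. The vital-sign fields, thresholds, and suggested escalations are hypothetical assumptions for illustration only, not clinical guidance, and do not correspond to any specific commercial system; real deployments would be far more sophisticated and clinically validated.

# Hypothetical sketch of a rule-based alerting checklist of the kind
# described above. All thresholds, field names, and escalation
# suggestions are illustrative assumptions, not clinical guidance.

from dataclasses import dataclass

@dataclass
class Vitals:
    heart_rate: int        # beats per minute
    systolic_bp: int       # mmHg
    spo2: float            # oxygen saturation, 0-100 (%)
    temperature_c: float   # degrees Celsius

def checklist_alerts(v: Vitals) -> list[str]:
    """Return human-readable alerts for the care team to review."""
    alerts = []
    if v.heart_rate > 120 or v.heart_rate < 40:
        alerts.append("Abnormal heart rate: consider cardiology consult")
    if v.systolic_bp < 90:
        alerts.append("Hypotension: consider ICU transfer evaluation")
    if v.spo2 < 92.0:
        alerts.append("Low oxygen saturation: order further testing")
    if v.temperature_c >= 38.5:
        alerts.append("Fever: review for possible infection workup")
    return alerts

if __name__ == "__main__":
    # Example: a patient who trips three of the four rules at once.
    for alert in checklist_alerts(Vitals(130, 85, 90.0, 39.1)):
        print(alert)

Note that even this toy version makes the key design point: the system surfaces anomalies for the human team to act upon; it does not act on the patient itself.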

A NEED FOR INCREASED REGULATION?

The Editorial Board of Artificial Intelligence Surgery recently published a White Paper discussing the need for a new risk designation for surgical devices that utilize AI[10]. Whether that level of risk should call for increased or decreased regulation is currently at the center of the debate. In psychiatry and psychology, AI is being touted as the next frontier, with large randomized controlled trials testing AI-assisted robots that provide psychotherapy, despite the absence of sufficient data to argue for their efficacy and safety[13]. AI technology is only as good as the assumptions that underlie its algorithms.

For example, AI geared toward facial emotion detection that assumes a smile represents happiness, a frown represents sadness, and a scowl represents anger will not only oversimplify the many iterations of emotional expression as they relate to actual emotional experience, but will also assume that emotional expressions are universal and culturally generalizable. The likelihood that such AI will produce Type I and Type II errors is therefore very high, and the consequences in clinical use cases can be disastrous. A notable example is the observation that computer vision more frequently attributes anger to images of Black people than to those of other ethnicities[14].
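To make the criticism concrete, the reductive logic described above can be caricatured in a few lines of Python. The expression labels and the mapping are hypothetical assumptions; the point is that a fixed expression-to-emotion lookup bakes in exactly the universality assumption that the emotion literature challenges[4].

# Hypothetical sketch of the reductive emotion-detection logic
# criticized above: a fixed one-to-one mapping from facial expression
# labels to emotions. Labels and mapping are illustrative assumptions.

NAIVE_EMOTION_MAP = {
    "smile": "happiness",
    "frown": "sadness",
    "scowl": "anger",
}

def naive_emotion(expression: str) -> str:
    """Map a detected expression label directly to an emotion."""
    return NAIVE_EMOTION_MAP.get(expression, "neutral")

# The failure modes follow directly from the design: a polite or
# nervous smile is classified as happiness (a false positive, or
# Type I error), while genuine distress behind a neutral face goes
# undetected (a false negative, or Type II error). Nothing in the
# mapping accounts for context, culture, or individual variation.
print(naive_emotion("smile"))  # -> "happiness", regardless of context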

The differences between person-to-person and human-avatar interactions, particularly as they pertain to healthcare, must take into account the centrality of emotion in the triage process. While we ponder whether regulations for AI-enhanced instruments used by interventional physicians should be decreased, we believe there is a need for increased regulation of AI systems that interact directly with patients.

DECLARATIONS

Authors’ contributions

Made substantial contributions to the conception and design of the study and performed data analysis and interpretation: Gumbs AA, Roubeni S, Grasso SV

Performed data acquisition, as well as providing administrative, technical, and material support: Gumbs AA, Roubeni S, Grasso SV

Availability of data and materials

Not applicable.

Financial support and sponsorship

None.

Conflict of interest

Gumbs AA is the Editor-in-Chief of Artificial Intelligence Surgery; Grasso SV is a member of its Editorial Board.

Ethical approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Copyright

© The Author(s) 2024. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, sharing, adaptation, distribution and reproduction in any medium or format, for any purpose, even commercially, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

REFERENCES

1. Hutauruk BS. Children first language acquisition at age 1-3 years old in Balata. IOSR J Human Soc Sci 2015;20:51-7. Available from: https://www.iosrjournals.org/iosr-jhss/papers/Vol20-issue8/Version-5/F020855157.pdf. [Last accessed on 19 Jan 2024]

2. Toyoda G, Brown EC, Matsuzaki N, Kojima K, Nishida M, Asano E. Electrocorticographic correlates of overt articulation of 44 English phonemes: intracranial recording in children with focal epilepsy. Clin Neurophysiol 2014;125:1129-37.

3. Sestino A, D’Angelo A. My doctor is an avatar! The effect of anthropomorphism and emotional receptivity on individuals’ intention to use digital-based healthcare services. Technol Forecast Soc Change 2023;191:122505.

4. Barrett LF, Adolphs R, Marsella S, Martinez AM, Pollak SD. Emotional expressions reconsidered: challenges to inferring emotion from human facial movements. Psychol Sci Public Interest 2019;20:1-68.

5. Rashidian N, Abu Hilal M. Applications of machine learning in surgery: ethical considerations. Art Int Surg 2022;2:18-23.

6. Moriuchi E. Leveraging the science to understand factors influencing the use of AI-powered avatar in healthcare services. J Technol Behav Sci 2022;7:588-602.

7. Moor M, Banerjee O, Abad ZSH, et al. Foundation models for generalist medical artificial intelligence. Nature 2023;616:259-65.

8. Gumbs AA, Frigerio I, Spolverato G, et al. Artificial intelligence surgery: how do we get to autonomous actions in surgery? Sensors 2021;21:5526.

9. Gumbs AA, Grasso V, Bourdel N, et al. The advances in computer vision that are enabling more autonomous actions in surgery: a systematic review of the literature. Sensors 2022;22:4918.

10. Gumbs AA, Alexander F, Karcz K, et al. White paper: definitions of artificial intelligence and autonomous actions in clinical surgery. Art Int Surg 2022;2:93-100.

11. Spolverato G, Capelli G, Majidi D, Frigerio I. Statement on artificial intelligence surgery by women-in-surgery - Italia: can artificial intelligence be the great equalizer in surgery? Art Int Surg 2021;1:18-21.

12. Wagner M, Bodenstedt S, Daum M, et al. The importance of machine learning in autonomous actions for surgical decision making. Art Int Surg 2022;2:64-79.

13. Su S, Wang Y, Jiang W, et al. Efficacy of artificial intelligence-assisted psychotherapy in patients with anxiety disorders: a prospective, national multicenter randomized controlled trial protocol. Front Psychiatry 2021;12:799917.

14. Rhue L. Racial influence on automated perceptions of emotions. SSRN 2018.
