TY - JOUR
T1 - Multi-stakeholder preferences for the use of artificial intelligence in healthcare
T2 - a systematic review and thematic analysis
AU - Vo, Vinh
AU - Chen, Gang
AU - Aquino, Yves Saint James
AU - Carter, Stacy M.
AU - Do, Quynh Nga
AU - Woode, Maame Esi
N1 - Funding Information:
Funding for the study was supported by the National Health and Medical Research Council grant “The algorithm will see you now: ethical, legal and social implications of adopting machine learning systems for diagnosis and screening”. The funder had no role in study design; in the collection, analysis and interpretation of data; in the writing of the article; or in the decision to submit for publication.
We would like to acknowledge the contributions of Jenny Fafeita, David Horne, and Gabby Lamb, the dedicated librarians at Monash University who supported the authors in conducting the NVivo analysis and literature search for the review. We are also thankful for the support provided by Associate Professor Duncan Mortimer and the PhD researchers at the Centre for Health Economics, Monash University, during the development of this review.
Publisher Copyright:
© 2023 The Authors
PY - 2023/12
Y1 - 2023/12
N2 - Introduction: Despite the proliferation of Artificial Intelligence (AI) technology over the last decade, clinician, patient, and public perceptions of its use in healthcare raise a number of ethical, legal and social questions. We systematically review the literature on attitudes towards the use of AI in healthcare from the perspectives of patients, the general public and health professionals to understand these issues from multiple viewpoints. Methodology: A search for original research articles using qualitative, quantitative, and mixed methods, published between 1 Jan 2001 and 24 Aug 2021, was conducted on six bibliographic databases. Data were extracted and classified into themes representing views on: (i) knowledge of and familiarity with AI, (ii) AI benefits, risks, and challenges, (iii) AI acceptability, (iv) AI development, (v) AI implementation, (vi) AI regulations, and (vii) the human–AI relationship. Results: The final search identified 7,490 distinct records, of which 105 publications were selected based on predefined inclusion/exclusion criteria. While the majority of patients, the general public and health professionals had a generally positive attitude towards the use of AI in healthcare, all groups indicated some perceived risks and challenges. Commonly perceived risks included data privacy; reduced professional autonomy; algorithmic bias; healthcare inequities; and greater burnout arising from the need to acquire AI-related skills. While patients had mixed opinions on whether healthcare workers would suffer job losses due to the use of AI, health professionals strongly indicated that AI would not be able to completely replace them in their professions. Both groups shared similar doubts about AI's ability to deliver empathic care. The need for AI validation, transparency, explainability, and patient and clinician involvement in the development of AI was emphasised. To help implement AI in healthcare successfully, most participants considered investment in training and education campaigns necessary, especially for health professionals. Lack of familiarity, lack of trust, and regulatory uncertainties were identified as factors hindering AI implementation. Regarding AI regulations, key themes included data access and data privacy. While the general public and patients were willing to share anonymised data for AI development, concerns remained about sharing data with insurance or technology companies. One key question under this theme was who should be held accountable for adverse events arising from the use of AI. Conclusions: While attitudes and preferences towards AI use in healthcare remain broadly positive, some prevalent problems require more attention. There is a need to go beyond addressing algorithm-related issues and to examine how legislation and guidelines translate into practice to ensure fairness, accountability, transparency, and ethics in AI.
AB - Introduction: Despite the proliferation of Artificial Intelligence (AI) technology over the last decade, clinician, patient, and public perceptions of its use in healthcare raise a number of ethical, legal and social questions. We systematically review the literature on attitudes towards the use of AI in healthcare from the perspectives of patients, the general public and health professionals to understand these issues from multiple viewpoints. Methodology: A search for original research articles using qualitative, quantitative, and mixed methods, published between 1 Jan 2001 and 24 Aug 2021, was conducted on six bibliographic databases. Data were extracted and classified into themes representing views on: (i) knowledge of and familiarity with AI, (ii) AI benefits, risks, and challenges, (iii) AI acceptability, (iv) AI development, (v) AI implementation, (vi) AI regulations, and (vii) the human–AI relationship. Results: The final search identified 7,490 distinct records, of which 105 publications were selected based on predefined inclusion/exclusion criteria. While the majority of patients, the general public and health professionals had a generally positive attitude towards the use of AI in healthcare, all groups indicated some perceived risks and challenges. Commonly perceived risks included data privacy; reduced professional autonomy; algorithmic bias; healthcare inequities; and greater burnout arising from the need to acquire AI-related skills. While patients had mixed opinions on whether healthcare workers would suffer job losses due to the use of AI, health professionals strongly indicated that AI would not be able to completely replace them in their professions. Both groups shared similar doubts about AI's ability to deliver empathic care. The need for AI validation, transparency, explainability, and patient and clinician involvement in the development of AI was emphasised. To help implement AI in healthcare successfully, most participants considered investment in training and education campaigns necessary, especially for health professionals. Lack of familiarity, lack of trust, and regulatory uncertainties were identified as factors hindering AI implementation. Regarding AI regulations, key themes included data access and data privacy. While the general public and patients were willing to share anonymised data for AI development, concerns remained about sharing data with insurance or technology companies. One key question under this theme was who should be held accountable for adverse events arising from the use of AI. Conclusions: While attitudes and preferences towards AI use in healthcare remain broadly positive, some prevalent problems require more attention. There is a need to go beyond addressing algorithm-related issues and to examine how legislation and guidelines translate into practice to ensure fairness, accountability, transparency, and ethics in AI.
KW - Artificial intelligence
KW - General public
KW - Health professional
KW - Healthcare
KW - Patients
UR - http://www.scopus.com/inward/record.url?scp=85176112345&partnerID=8YFLogxK
U2 - 10.1016/j.socscimed.2023.116357
DO - 10.1016/j.socscimed.2023.116357
M3 - Review Article
C2 - 37949020
AN - SCOPUS:85176112345
SN - 0277-9536
VL - 338
JO - Social Science & Medicine
JF - Social Science & Medicine
M1 - 116357
ER -