For Patients to Trust Medical AI, They Need to Understand It



by Chiara Longoni, Romain Cadario, and Carey K. Morewedge (2021)


Summary

This Harvard Business Review (HBR) article examines why patients are hesitant to trust medical AI. The authors' research, involving online experiments and a Google Ads field study, suggests that consumers lack a true understanding of how medical AI arrives at its conclusions and hold unrealistic beliefs about human doctors' diagnostic abilities. Improving transparency in how AI makes medical decisions is key to building trust and encouraging broader adoption.

Full Transcript


HBR / Digital Article / For Patients to Trust Medical AI, They Need to Understand It
by Chiara Longoni, Romain Cadario, and Carey K. Morewedge
Published on HBR.org / September 03, 2021 / Reprint H06JVC

Artificial intelligence-enabled health applications for diagnostic care are becoming widely available to consumers; some can even be accessed via smartphones. Google, for instance, recently announced its entry into this market with an AI-based tool that helps people identify skin, hair, and nail conditions.

A major barrier to the adoption of these technologies, however, is that consumers tend to trust medical AI less than human health care providers. They believe that medical AI fails to cater to their unique needs and performs worse than comparable human providers, and they feel that they cannot hold AI accountable for mistakes in the same way they could a human. This resistance to AI in the medical domain poses a challenge to policymakers who wish to improve health care and to companies selling innovative health services. Our research provides insights that could be used to overcome this resistance.

In a paper recently published in Nature Human Behaviour, we show that consumer adoption of medical AI has as much to do with consumers' negative perceptions of AI care providers as with their unrealistically positive views of human care providers. Consumers are reluctant to rely on AI care providers because they do not believe they understand, and objectively do not understand, how AI makes medical decisions; they view its decision-making as a black box. Consumers are also reluctant to utilize medical AI because they erroneously believe they better understand how humans make medical decisions.
Our research — consisting of five online experiments with nationally representative and convenience samples of 2,699 people and an online field study on Google Ads — shows how little consumers understand about how medical AI arrives at its conclusions. For instance, we tested how much nationally representative samples of Americans knew about how AI care providers make medical decisions such as whether a skin mole is malignant or benign. Participants performed no better than if they had picked answers at random. But they recognized their ignorance: They rated their understanding of how AI care providers make medical decisions as low. By contrast, participants overestimated how well they understood how human doctors make medical decisions. Even though participants in our experiments possessed similarly little factual understanding of decisions made by AI and human care providers, they claimed to better understand how human decision-making worked.

In one experiment, we asked a nationally representative online sample of 297 U.S. residents to report how much they understood about how a doctor or an algorithm would examine images of their skin to identify cancerous skin lesions. Then we asked them to explain the human or the algorithmic provider's decision-making processes. (This type of intervention has been used before to shatter illusory beliefs about how well one understands causal processes. Most people, for instance, believe they understand how a helicopter works. Only when asked to explain how it works do they realize they have no idea.) After participants tried to provide an explanation, they rated their understanding of the human or algorithmic medical decision-making process again.
We found that forcing people to explain the human or algorithmic provider's decision-making processes reduced the extent to which participants felt that they understood decisions made by human providers, but not decisions made by algorithmic providers. That's because their subjective understanding of how doctors make decisions had been inflated, whereas their subjective understanding of how AI providers make decisions was unaffected by having to provide an explanation, possibly because they had already felt the latter was a black box.

In another experiment, with a nationally representative sample of 803 Americans, we measured how well people subjectively felt that they understood human or algorithmic decision-making processes for diagnosing skin cancer, and then tested them to see how well they actually understood them. To do this, we created a quiz with the aid of medical experts: a team of dermatologists at a medical school in the Netherlands and a team of developers of a popular skin-cancer-detection application in Europe. We found that although participants reported a poorer subjective understanding of medical decisions made by algorithms than of decisions made by human providers, they possessed a similarly limited real understanding of decisions made by human and algorithmic providers.

What can policymakers and firms do to encourage consumer uptake of medical AI? We found two successful, slightly different interventions that involved explaining how providers — both algorithmic and human — make medical decisions. In one experiment, we explained how both types of providers use the ABCD framework (asymmetry, border, color, and diameter) to examine features of a mole to make a malignancy-risk assessment.
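To make the ABCD idea concrete, the criteria can be pictured as a simple rule-based check that either a dermatologist or an algorithm might apply to a mole's features. The sketch below is purely illustrative: the function name, thresholds, and two-flag referral rule are our own assumptions for the example, not the article's method or a clinical guideline.

```python
# Toy ABCD-style check (illustrative only, not a clinical tool).
# Each criterion contributes a yes/no concern flag; thresholds are made up.

def abcd_risk_flags(asymmetry_score, border_irregularity, num_colors, diameter_mm):
    """Return which ABCD criteria raise concern for a hypothetical mole."""
    flags = {
        "asymmetry": asymmetry_score > 0.5,    # the two halves of the mole differ
        "border": border_irregularity > 0.5,   # ragged or notched edge
        "color": num_colors >= 3,              # several distinct shades present
        "diameter": diameter_mm > 6.0,         # larger than roughly 6 mm
    }
    # Assumed rule for the sketch: two or more concerning features -> refer.
    flags["refer_for_evaluation"] = sum(flags.values()) >= 2
    return flags

print(abcd_risk_flags(0.7, 0.2, 3, 7.5))
```

Spelling out the criteria this way mirrors the intervention's logic: once the decision process is decomposed into named, inspectable features, it no longer reads as a black box.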
In another experiment, we explained how both types of providers examine the visual similarity between a target mole and other moles known to be malignant. These interventions successfully reduced the difference in perceived understanding of algorithmic and human decision-making by increasing the perceived understanding of the former. In turn, the interventions increased participants' intentions to utilize algorithmic care providers without reducing their intentions to utilize human providers.

The efficacy of these interventions is not confined to the laboratory. In a field study on Google Ads, we showed users one of two different ads for a skin-cancer-screening application in their search results. One ad offered no explanation; the other briefly explained how the algorithm works. After a five-day campaign, the ad explaining how the algorithm works produced more clicks and a higher click-through rate.

AI-based health care services are instrumental to the mission of providing high-quality and affordable services to consumers in developed and developing nations. Our findings show how greater transparency — opening the AI black box — can help achieve this critical mission.

Chiara Longoni is an assistant professor of marketing at Boston University's Questrom School of Business. Follow her on Twitter @longoni_chiara. Romain Cadario is an assistant professor of marketing at Erasmus University's Rotterdam School of Management. Carey K. Morewedge is a professor of marketing and Everett W. Lord Distinguished Faculty Scholar at Boston University. Follow him on Twitter @morewedge.

Copyright © 2021 Harvard Business School Publishing Corporation. All rights reserved.
Additional restrictions may apply including the use of this content as assigned course material. Please consult your institution's librarian about any restrictions that might apply under the license with your institution. For more information and teaching resources from Harvard Business Publishing including Harvard Business School Cases, eLearning products, and business simulations please visit hbsp.harvard.edu.
