Date of Award

2020

Document Type

Open Access Master's Thesis

Degree Name

Master of Science in Applied Cognitive Science and Human Factors (MS)

Administrative Home Department

Department of Cognitive and Learning Sciences

Advisor 1

Shane T. Mueller

Committee Member 1

Erich J. Petushek

Committee Member 2

Robert R. Hoffman

Abstract

AI systems are increasingly being fielded to support diagnoses and healthcare advice for patients. One promise of AI applications is that they might serve as the first point of contact for patients, handling routine tasks and allowing healthcare professionals to focus on more challenging and critical aspects of healthcare. For AI systems to succeed, they must be designed based on a good understanding of how physicians explain diagnoses to patients, how prospective patients understand and trust the systems providing the diagnosis, and what explanations patients expect. In this thesis, I examine this problem across three studies. In the first study, I interviewed physicians to explore their explanation strategies in re-diagnosis scenarios. I identified five broad categories of explanation strategies and developed a generic diagnostic timeline of explanations from the interviews. In the second study, I tested an AI diagnosis scenario and found that explanations improve patient satisfaction measures for re-diagnosis. Finally, in a third study I implemented different forms of explanation in a similar diagnosis scenario and found that visual and example-based explanations integrated with rationales had a significantly better impact on patient satisfaction and trust than no explanations or text-based rationales alone. Based on these studies and a review of the literature, I provide design recommendations for the explanations offered by AI systems in the healthcare domain.
