Date of Award

2023

Document Type

Open Access Dissertation

Degree Name

Doctor of Philosophy in Applied Cognitive Science and Human Factors (PhD)

Administrative Home Department

Department of Cognitive and Learning Sciences

Advisor 1

Shane T. Mueller

Committee Member 1

Kelly S. Steelman

Committee Member 2

Erich J. Petushek

Committee Member 3

Nathan L. Tenhundfeld

Abstract

Explainable AI (XAI) systems primarily focus on algorithms, integrating additional information into AI decisions and classifications to enhance user or developer comprehension of the system's behavior. These systems often incorporate untested concepts of explainability and lack grounding in the cognitive and educational psychology literature (S. T. Mueller et al., 2021). Consequently, their effectiveness may be limited: they may address problems that real users do not encounter or provide information that users do not seek.

In contrast, an alternative approach called Collaborative XAI (CXAI), proposed by S. Mueller et al. (2021), emphasizes generating explanations without relying solely on algorithms. CXAI centers on enabling users to ask questions and share explanations based on their own knowledge and experience, thereby helping others understand AI systems. Mamun, Hoffman, et al. (2021) developed a CXAI system akin to a Social Question and Answer (SQA) platform (S. Oh, 2018a), adapted for explaining AI systems. The system passed an evaluation based on XAI metrics (Hoffman, Mueller, et al., 2018), conducted in a master's thesis by Mamun (2021), which validated its effectiveness in a basic image classification domain and explored the types of explanations it generated.

This Ph.D. dissertation builds on that prior work, aiming to apply it in a novel context: users and potential users of semi-autonomous, self-driving vehicles. It seeks to uncover the communication patterns that emerge on a social QA platform (S. Oh, 2018a), the types of questions such a platform can help with, and the benefits it might offer users of widely adopted AI systems.

The first study investigated the feasibility of using existing social QA platforms as explanatory tools for a deployed AI system. It found that users on these platforms collaboratively assist one another in problem-solving, with many issues being resolved (Linja et al., 2022). An intriguing finding was that anger directed at the AI system drove increased engagement on the platform.

The subsequent phase leverages observations from social QA platforms in the autonomous driving (AD) sector to gain insight into an AI system within a vehicle. The dissertation includes two simulation studies that use these observations as training materials. The studies examine users' Level 3 situational awareness (Endsley, 1995) when the autonomous vehicle behaves abnormally, measuring detection rates and users' comprehension of the abnormal driving situations. They also measure perceived personalization of the training (Zhang & Curley, 2018), cognitive workload (Hart & Staveland, 1988), and trust and reliance (Körber, 2018) with respect to the training process. The findings are mixed: training yields higher detection rates for abnormal driving but diminished trust and reliance.

The final study engages current Tesla FSD users in semi-structured interviews (Crandall et al., 2006) to explore their use of social QA platforms, their knowledge sources during the training phase, and how they search for answers to abnormal driving scenarios. The results reveal extensive collaboration through social forums and group discussions and shed light on differences in trust and reliance within this domain.
