Date of Award

2021

Document Type

Open Access Master's Thesis

Degree Name

Master of Science in Applied Cognitive Science and Human Factors (MS)

Administrative Home Department

Department of Cognitive and Learning Sciences

Advisor 1

Shane T. Mueller

Committee Member 1

Robert R. Hoffman

Committee Member 2

Jon Sticklen

Abstract

Explainable AI (XAI) is an important subdomain of research on human-artificial intelligence interaction. XAI aims to improve human understanding of, and justified trust and reliance on, machine intelligence and automation by providing users with visualizations and other information that explain the AI's decisions, actions, or plans. XAI systems have primarily relied on algorithmic approaches that generate explanations automatically, but an alternative that may augment these systems is to take advantage of the fact that user understanding of AI systems often develops through self-explanation (Mueller et al., 2021). Users piece together different sources of information to build a clearer understanding, but these self-explanations are often lost if they are not shared with others. This thesis research demonstrated how such self-explanations can be shared collaboratively through a system called Collaborative XAI (CXAI), which is akin to a social Q&A platform (Oh, 2018) such as StackExchange. A web-based system was built and evaluated both formatively and through user studies. The formative evaluation shows how explanations in an XAI system, especially collaborative explanations, can be assessed against 'goodness criteria' (Mueller et al., 2019). The thesis also investigated how users performed with the explanations produced by this type of XAI system. Lastly, the research investigated whether users of the CXAI system are satisfied with its human-generated explanations and whether they can trust this type of explanation.
