Assessing Satisfaction in and Understanding of a Collaborative Explainable AI (CXAI) System through User Studies

Document Type

Article

Publication Date

10-27-2022

Department

Department of Cognitive and Learning Sciences

Abstract

Modern artificial intelligence (AI) and machine learning (ML) systems have become more capable and more widely used, but they often involve underlying processes that their users do not understand and may not trust. Some researchers have addressed this by developing “explainable AI” (XAI) algorithms that explain the workings of the system, but these have not always succeeded in improving users’ understanding. Alternatively, collaborative user-driven explanations may address users’ needs, augmenting or replacing algorithmic explanations. We evaluate one such approach, called “collaborative explainable AI” (CXAI). Across two experiments, we examined CXAI to assess whether users’ mental models, performance, and satisfaction improved with access to user-generated explanations. Results showed that users with access to collaborative explanations understood the system better and were more satisfied with it than users without access, suggesting that a CXAI system may provide a useful form of support that more dominant XAI approaches do not.

Publication Title

Proceedings of the Human Factors and Ergonomics Society Annual Meeting
