Collaborative Explainable AI: A Non-algorithmic Approach to Generating Explanations of AI
Document Type
Conference Proceeding
Publication Date
1-1-2021
Abstract
An important subdomain of research on Human-Artificial Intelligence interaction is Explainable AI (XAI). XAI aims to improve human understanding of, and trust in, machine intelligence and automation by providing users with visualizations and other information that explain decisions, actions, and plans. XAI research has primarily relied on algorithmic approaches designed to generate explanations automatically, but an alternative route that may augment these systems is to take advantage of the fact that user understanding of AI systems often develops through self-explanation [1]. Users engage in self-explanation to piece together different sources of information and develop a clearer understanding, but these self-explanations are often lost if not shared with others. We demonstrate how this self-explanation can be shared collaboratively via a system we call Collaborative XAI (CXAI), akin to a social Q&A platform [2] such as StackExchange. We describe the system and evaluate how it supports various kinds of explanations.
Publication Title
Communications in Computer and Information Science
ISBN
9783030901752
Recommended Citation
Mamun, T., Hoffman, R., & Mueller, S. (2021). Collaborative Explainable AI: A Non-algorithmic Approach to Generating Explanations of AI. Communications in Computer and Information Science, 1498 CCIS, 144-150. http://doi.org/10.1007/978-3-030-90176-9_20
Retrieved from: https://digitalcommons.mtu.edu/michigantech-p/15628