What makes a good explanation? Cognitive dimensions of explaining intelligent machines

Document Type

Article

Publication Date

7-2019

Department

Department of Cognitive and Learning Sciences

Abstract

Explainability is assumed to be a key factor for the adoption of Artificial Intelligence systems in a wide range of contexts. The use of AI components in self-driving cars, medical diagnosis, or insurance and financial services has shown that when decisions are taken or suggested by automated systems, it is essential for practical, social, and legal reasons that an explanation can be provided to users, developers, or regulators. Moreover, the reasons for equipping intelligent systems with explanation capabilities are not limited to user rights and acceptance. Explainability is also needed by designers and developers to enhance system robustness and enable diagnostics that prevent bias, unfairness, and discrimination, as well as to increase all users' trust in why and how decisions are made. Against that background, increased efforts are being directed towards studying and providing explainable intelligent systems, both in industry and academia, sparked by initiatives like the DARPA XAI Program. In parallel, scientific conferences and workshops dedicated to explainability are now regularly organised, such as the ACM Conference on Fairness, Accountability, and Transparency (ACM FAT) or the "Workshop on Explainability in AI" at the 2017 and 2018 editions of the International Joint Conference on Artificial Intelligence. However, one important question remains unanswered: What are the criteria for a good explanation?

Publication Title

CogSci 2019: Creativity + Cognition + Computation
