Explaining Explanation for “Explainable AI”
Department of Cognitive and Learning Sciences
What makes for an explanation of “black box” AI systems such as Deep Nets? We reviewed the pertinent literatures on explanation and derived key ideas. This set the stage for our empirical inquiries, which include conceptual cognitive modeling, the analysis of a corpus of cases of “naturalistic explanation” of computational systems, computational cognitive modeling, and the development of measures for performance evaluation. The purpose of our work is to contribute to the program of research on “Explainable AI.” In this report we focus on our initial synthetic modeling activities and the development of measures for the evaluation of explainability in human-machine work systems.
Proceedings of the Human Factors and Ergonomics Society 2018 Annual Meeting
Hoffman, R. R. (2018). Explaining explanation for “Explainable AI”. Proceedings of the Human Factors and Ergonomics Society 2018 Annual Meeting. Retrieved from https://digitalcommons.mtu.edu/michigantech-p/1095