The Stakeholder Playbook for Explaining AI Systems
Document Type
Article
Publication Date
11-9-2021
Department
Department of Cognitive and Learning Sciences
Abstract
The purpose of the Stakeholder Playbook is to enable system developers to take into account the different ways in which stakeholders need to "look inside" AI/XAI systems. Recent work on Explainable AI has mapped stakeholder categories onto explanation requirements. While most of these mappings seem reasonable, they have been largely speculative. We investigated these matters empirically. We conducted interviews with senior and mid-career professionals possessing post-graduate degrees who had experience with AI and/or autonomous systems, and who had served in a number of roles, including former military, civilian scientists working for the government, scientists working in the private sector, and scientists working as independent consultants. The results show that stakeholders need access to others (e.g., trusted engineers, trusted vendors) to develop satisfying mental models of AI systems, and they need to know "how it fails" and "how it misleads," not just "how it works." In addition, explanations need to support end-users in performing troubleshooting and maintenance activities, especially as operational situations and input data change. End-users need to be able to anticipate when the AI is approaching an edge case. Stakeholders often need to develop an understanding that enables them to explain the AI to someone else, not just satisfy their own sensemaking. We were surprised that only about half of our interviewees said they always needed better explanations. This and other findings that are apparently paradoxical can be resolved by acknowledging that different stakeholders have different capabilities, different sensemaking requirements, and different immediate goals. In fact, the concept of "stakeholder" is misleading because the people we interviewed served in a variety of roles simultaneously; we recommend referring to these roles rather than trying to pigeonhole people into unitary categories.
Different cognitive styles are another formative factor, as suggested by participant comments to the effect that they preferred to dive in and play with the system rather than being spoon-fed an explanation of how it works. These factors combine to determine what, for each given end-user, constitutes satisfactory and actionable understanding.
Publication Title
PsyArXiv Preprints
Recommended Citation
Hoffman, R., Klein, G., Mueller, S. T., Jalaeian, M., & Tate, C. (2021). The Stakeholder Playbook for Explaining AI Systems. PsyArXiv Preprints. http://doi.org/10.31234/osf.io/9pqez
Retrieved from: https://digitalcommons.mtu.edu/michigantech-p/15805