Explainable AI for the Choquet Integral

Document Type

Article

Publication Date

7-27-2020

Department

Department of Electrical and Computer Engineering

Abstract

The modern era of machine learning is focused on data-driven solutions. While this has resulted in astonishing leaps in numerous applications, explainability has not witnessed the same growth. The reality is that most machine learning solutions are black boxes. Herein, we focus on data/information fusion in machine learning. Specifically, we explore four eXplainable Artificial Intelligence (XAI) questions relative to the Choquet integral: (i) what is the quality of our inputs and their interactions, (ii) how is the information being combined, (iii) what is the quality of our training data (and thus our learned models), and (iv) what trust do we place in an output? Previously, we derived an initial set of indices for (i)–(iv) on the premise of perfect knowledge. Herein, we make XAI more accurate by taking into consideration what the machine learned. A combination of synthetic data and real-world experiments from remote sensing for fusing deep learners in the context of classification is explored. Our approach leads to performance gain, insights into what was learned, and it helps us realize better future solutions.
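As background for readers unfamiliar with the fusion operator the abstract refers to, the following is a minimal sketch of the discrete Choquet integral, which aggregates a set of input values with respect to a fuzzy measure defined on subsets of the sources. The source names (e.g., "cnn_a") and the fuzzy measure values are illustrative assumptions, not taken from the article; the article's learned measures and deep-learner inputs would replace them.

```python
def choquet_integral(h, g):
    """Discrete Choquet integral of inputs h = {source: value} with
    respect to a fuzzy measure g mapping frozensets of sources to [0, 1]."""
    # Sort sources by ascending input value: h(x_(1)) <= ... <= h(x_(n)).
    sources = sorted(h, key=h.get)
    total, prev = 0.0, 0.0
    for i, s in enumerate(sources):
        # A_(i) = {x_(i), ..., x_(n)}: the sources whose value is >= h(x_(i)).
        subset = frozenset(sources[i:])
        total += (h[s] - prev) * g[subset]
        prev = h[s]
    return total

# Hypothetical example: fuse three classifier confidences with a toy,
# monotone, normalized fuzzy measure (g(full set) = 1).
h = {"cnn_a": 0.9, "cnn_b": 0.6, "cnn_c": 0.3}
g = {
    frozenset(["cnn_a"]): 0.5,
    frozenset(["cnn_b"]): 0.4,
    frozenset(["cnn_c"]): 0.2,
    frozenset(["cnn_a", "cnn_b"]): 0.8,
    frozenset(["cnn_a", "cnn_c"]): 0.6,
    frozenset(["cnn_b", "cnn_c"]): 0.5,
    frozenset(["cnn_a", "cnn_b", "cnn_c"]): 1.0,
}
print(choquet_integral(h, g))  # fused output in [0, 1]; 0.69 for this toy measure
```

Because the fuzzy measure assigns a worth to every subset of sources, inspecting which measure values the learned model actually exercises is what underlies XAI questions such as (i) input/interaction quality and (ii) how information is combined.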

Publication Title

IEEE Transactions on Emerging Topics in Computational Intelligence
