Enabling Explainable Fusion in Deep Learning with Fuzzy Integral Neural Networks
Document Type
Article
Publication Date
7-1-2020
Department
Department of Electrical and Computer Engineering
Abstract
Information fusion is an essential part of numerous engineering systems and biological functions, e.g., human cognition. Fusion occurs at many levels, ranging from the low-level combination of signals to the high-level aggregation of heterogeneous decision-making processes. While the last decade has witnessed an explosion of research in deep learning, fusion in neural networks has not observed the same revolution. Specifically, most neural fusion approaches are ad hoc, are not well understood, are distributed rather than localized, and/or have low (if any) explainability. Herein, we prove that the fuzzy Choquet integral (ChI), a powerful nonlinear aggregation function, can be represented as a multilayer network, referred to hereafter as ChIMP. We also put forth an improved ChIMP (iChIMP) that leads to a stochastic-gradient-descent-based optimization in light of the exponential number of ChI inequality constraints. An additional benefit of ChIMP/iChIMP is that it enables explainable artificial intelligence (XAI). Synthetic validation experiments are provided, and iChIMP is applied to the fusion of a set of heterogeneous architecture deep models in remote sensing. We show an improvement in model accuracy, and our previously established XAI indices shed light on the quality of our data, model, and its decisions.
Publication Title
IEEE Transactions on Fuzzy Systems
Recommended Citation
Islam, M., Anderson, D., Pinar, A., Havens, T. C., Scott, G., & Keller, J. (2020). Enabling Explainable Fusion in Deep Learning with Fuzzy Integral Neural Networks. IEEE Transactions on Fuzzy Systems, 28(7), 1291-1300. http://doi.org/10.1109/TFUZZ.2019.2917124
Retrieved from: https://digitalcommons.mtu.edu/michigantech-p/2059
Publisher's Statement
© 1993-2012 IEEE. Publisher's version of record: https://doi.org/10.1109/TFUZZ.2019.2917124