Visualization and Analysis Tools for Explainable Choquet Integral Regression

Document Type

Conference Proceeding

Publication Date

1-5-2020

Department

College of Computing

Abstract

The Choquet integral (ChI) is an aggregation operator defined with respect to a fuzzy measure (FM). The FM of a ChI encodes the worth of individual subsets of sources of information, making the ChI an excellent tool for nonlinear aggregation. The monotonicity and boundary conditions of the FM limit the ChI to applications such as decision-level fusion. In a recent work, we removed the boundary and monotonicity constraints of the FM to propose a ChI-based regression (CIR) approach that enables capabilities beyond previously proposed ChI regression methods. However, the number of values in an FM scales as 2^d, where d is the number of input sources. Thus, with such a large number of trained parameters, we tend to lose the explainability (or interpretability) of the learned solution, which comes readily with simpler methods like ordinary linear regression. In this paper, we enhance the explainability of CIR by extending our previously proposed ChI visualization techniques to CIR. We also present a set of evaluation indices that quantitatively evaluate the importance of individual sources and the interactions between groups of sources. We train CIR on real-world regression data sets, and the learned models are visualized and analyzed with the proposed methods. The proposed visualizations and analyses are shown to significantly enhance the explainability of the learned CIR models.
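
The sketch below illustrates the standard discrete Choquet integral with respect to a fuzzy measure stored as one value per subset of sources, which is where the 2^d parameter count mentioned in the abstract comes from. It is a minimal Python example using a toy, additive-like measure chosen for illustration only; it is not the CIR training or visualization code described in the paper.

```python
from itertools import combinations

def choquet_integral(inputs, fm):
    """Discrete Choquet integral of `inputs` with respect to fuzzy measure `fm`.

    `inputs` holds one value per source; `fm` maps frozensets of source
    indices to measure values, with fm[frozenset()] == 0.
    """
    d = len(inputs)
    # Visit sources in decreasing order of their input values.
    order = sorted(range(d), key=lambda i: inputs[i], reverse=True)
    total, prev_g, subset = 0.0, 0.0, set()
    for i in order:
        subset.add(i)
        g = fm[frozenset(subset)]          # measure of the growing coalition
        total += inputs[i] * (g - prev_g)  # weight the value by the measure increment
        prev_g = g
    return total

# A fuzzy measure on d sources stores 2^d values, one per subset of sources.
d = 3
fm = {frozenset(s): len(s) / d   # toy measure: with it, the ChI reduces to the mean
      for k in range(d + 1) for s in combinations(range(d), k)}
print(choquet_integral([0.2, 0.9, 0.5], fm))   # -> 0.5333...
```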

Publisher's Statement

© 2020 IEEE. Publisher's version of record: https://doi.org/10.1109/SSCI47803.2020.9308471

Publication Title

2020 IEEE Symposium Series on Computational Intelligence, SSCI 2020

ISBN

9781728125473
