Information Fusion-2-Text: Explainable Aggregation via Linguistic Protoforms

Document Type

Conference Proceeding

Publication Date



Department(s)

Department of Electrical and Computer Engineering; College of Computing


Abstract

Recent advancements and applications in artificial intelligence (AI) and machine learning (ML) have highlighted the need for explainable, interpretable, and actionable AI-ML. Most work focuses on explaining deep artificial neural networks, e.g., in visual tasks such as image captioning. In recent work, we established a set of indices and processes for explainable AI (XAI) relative to information fusion. While informative, the result is information overload, and domain expertise is required to understand the results. Herein, we explore the extraction of a reduced set of higher-level linguistic summaries to inform and improve communication with non-fusion experts. Our contribution is a proposed structure for a fusion summary and a method to extract this information from a given set of indices. To demonstrate the usefulness of the proposed methodology, we provide a case study that uses the fuzzy integral to combine a heterogeneous set of deep learners in remote sensing for object detection and land cover classification. This case study shows the potential of our approach to inform users about important trends and anomalies in the models, data, and fusion results. This information is critical with respect to transparency, trustworthiness, and identifying limitations of fusion techniques, which may motivate future research and innovation.
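The abstract's two ingredients, fuzzy-integral aggregation of multiple learners and linguistic protoform summaries ("Q sources are P"), can be illustrated with a minimal sketch. This is not the paper's implementation: the detector names, the fuzzy measure `g`, the piecewise-linear shape of the "most" quantifier, and the per-source membership values are all illustrative assumptions.

```python
def choquet_integral(h, g):
    """Discrete Choquet fuzzy integral of per-source supports h
    (dict: source -> confidence) with respect to a fuzzy measure g
    (dict: frozenset of sources -> measure, g(empty)=0, g(all)=1)."""
    order = sorted(h, key=h.get, reverse=True)  # sort by decreasing support
    total, prev, subset = 0.0, 0.0, frozenset()
    for s in order:
        subset = subset | {s}
        total += h[s] * (g[subset] - prev)  # weight each support by the measure increment
        prev = g[subset]
    return total

def most(r):
    """Assumed piecewise-linear fuzzy quantifier for 'most':
    0 at ratio 0.3 or below, 1 at ratio 0.8 or above."""
    return min(1.0, max(0.0, (r - 0.3) / 0.5))

def protoform_truth(memberships, quantifier):
    """Truth of the linguistic summary 'Q sources are P', given each
    source's membership in the fuzzy predicate P."""
    return quantifier(sum(memberships) / len(memberships))

# Hypothetical three-detector example (values chosen for illustration)
g = {
    frozenset(): 0.0,
    frozenset({'A'}): 0.4, frozenset({'B'}): 0.3, frozenset({'C'}): 0.2,
    frozenset({'A', 'B'}): 0.8, frozenset({'A', 'C'}): 0.6,
    frozenset({'B', 'C'}): 0.5,
    frozenset({'A', 'B', 'C'}): 1.0,
}
h = {'A': 0.9, 'B': 0.6, 'C': 0.3}
fused = choquet_integral(h, g)                       # -> 0.66
summary = protoform_truth(list(h.values()), most)    # truth of "most detectors are confident"
```

The fused value condenses the heterogeneous detector outputs into one score, while the protoform truth value is the kind of higher-level linguistic statement the paper extracts to communicate fusion behavior to non-experts.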

Publisher's Statement

© 2020, Springer Nature Switzerland AG. Publisher's version of record.

Publication Title

Communications in Computer and Information Science