Insights and characterization of l1-norm based sparsity learning of a lexicographically encoded capacity vector for the Choquet integral
Department of Electrical and Computer Engineering, Center for Data Sciences
The aim of this paper is the simultaneous minimization of model error and model complexity for the Choquet integral. The Choquet integral is a generator function, i.e., a parametric function that yields a wealth of aggregation operators depending on the specifics of the underlying fuzzy measure (also known as a normal, monotone capacity). It is often the case that we desire to learn an aggregation operator from data, with the goal of minimizing the sum of squared error (SSE) between the trained model and a set of labels or function values. However, we also desire to learn the “simplest” solution possible, viz., the model with the fewest inputs. Previous work focused on l1-norm regularization of a lexicographically encoded capacity vector relative to the Choquet integral, describing how to carry out the procedure and demonstrating encouraging results; however, no characterization of or insight into the resulting capacity and integral was provided. Herein, we investigate the impact of l1-norm regularization of a lexicographically encoded capacity vector in terms of what capacities and aggregation operators it strives to induce in different scenarios. Ultimately, this provides insight into what the regularization is really doing and when to apply such a method. Synthetic experiments are performed to illustrate the remarks, propositions, and concepts put forth.
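The abstract refers to the discrete Choquet integral of a set of inputs with respect to a fuzzy measure (capacity). A minimal sketch of how that integral is computed is shown below, assuming the capacity is stored as a mapping from index subsets to measure values; the function name and the example capacity values are illustrative only, not taken from the paper:

```python
def choquet(h, g):
    """Discrete Choquet integral of inputs h w.r.t. capacity g.

    h : list of input values h(x_1), ..., h(x_n)
    g : dict mapping frozensets of 0-based indices to capacity values,
        assumed normal (g(full set) = 1, g(empty set) = 0) and monotone.
    """
    # Visit the inputs in descending order of their values.
    order = sorted(range(len(h)), key=lambda i: h[i], reverse=True)
    total, prev = 0.0, 0.0
    subset = set()
    for i in order:
        subset.add(i)
        cur = g[frozenset(subset)]
        # Accumulate h_(i) * [g(A_i) - g(A_{i-1})].
        total += h[i] * (cur - prev)
        prev = cur
    return total

# Example capacity on two inputs (values chosen for illustration only).
g = {frozenset({0}): 0.3, frozenset({1}): 0.4, frozenset({0, 1}): 1.0}
print(choquet([0.8, 0.5], g))  # 0.8*0.3 + 0.5*(1.0 - 0.3) = 0.59
```

Varying the capacity values sweeps the operator between familiar aggregations (e.g., an additive capacity recovers a weighted mean), which is why learning the capacity from data amounts to learning the aggregation operator itself.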
2015 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE)
Havens, T. C. (2015). Insights and characterization of l1-norm based sparsity learning of a lexicographically encoded capacity vector for the Choquet integral. 2015 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE). Retrieved from https://digitalcommons.mtu.edu/michigantech-p/1047