Document Type
Article
Publication Date
2-16-2023
Department
Department of Computer Science
Abstract
The increasing use of Machine Learning (ML) software can lead to unfair and unethical decisions, making fairness bugs in software a growing concern. Addressing these fairness bugs often involves sacrificing ML performance, such as accuracy. To address this issue, we present a novel approach that uses counterfactual thinking to tackle the root causes of bias in ML software. In addition, our approach combines models optimized for both performance and fairness, resulting in an optimal solution in both aspects. We conducted a thorough evaluation of our approach on 10 benchmark tasks using a combination of 5 performance metrics, 3 fairness metrics, and 15 measurement scenarios, all applied to 8 real-world datasets. These extensive evaluations show that the proposed method significantly improves the fairness of ML software while maintaining competitive performance, outperforming state-of-the-art solutions in 84.6% of overall cases based on a recent benchmarking tool.
Publication Title
arXiv
Recommended Citation
Wang, Z., Zhou, Y., Qiu, M., Haque, I., Brown, L., He, Y., Zhang, W., et al. (2023). Towards Fair Machine Learning Software: Understanding and Addressing Model Bias Through Counterfactual Thinking. arXiv. http://doi.org/10.48550/arXiv.2302.08018
Retrieved from: https://digitalcommons.mtu.edu/michigantech-p2/1121
Version
Publisher's PDF
Publisher's Statement
https://creativecommons.org/publicdomain/zero/1.0/
© 2023 Association for Computing Machinery. Publisher's version of record:
https://doi.org/10.48550/arXiv.2302.08018