Censored Fairness through Awareness
Document Type
Conference Proceeding
Publication Date
6-27-2023
Abstract
There has been increasing concern within the machine learning community and beyond that Artificial Intelligence (AI) faces a bias and discrimination crisis that makes AI fairness an urgent need. While many have begun to work on this problem, most existing work depends on the availability of class labels for the given fairness definition and algorithm, which may not align with real-world usage. In this work, we study an AI fairness problem that stems from the gap between the design of a "fair" model in the lab and its deployment in the real world. Specifically, we consider defining and mitigating individual unfairness amidst censorship, where the availability of class labels is not always guaranteed, a setting that is broadly applicable to a variety of socially sensitive real-world applications. We show that our method is able to quantify and mitigate individual unfairness in the presence of censorship across three benchmark tasks, providing the first known results on individual fairness guarantees in the analysis of censored data.
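For context, the "fairness through awareness" formulation referenced in the title is usually expressed as a Lipschitz condition on the model: similar individuals should receive similar outputs. The sketch below states that standard condition in illustrative notation; the symbols (f, d_X, d_Y, L) are assumptions for exposition and are not necessarily the paper's own formulation under censorship.

% Sketch of the standard individual-fairness (Lipschitz) condition;
% notation is illustrative, not the paper's exact definition.
\[
  d_Y\!\bigl(f(x_i),\, f(x_j)\bigr) \;\le\; L \, d_X(x_i, x_j)
  \qquad \text{for all individuals } x_i, x_j,
\]
% where f is the learned model, d_X a task-specific similarity metric over
% individuals, d_Y a distance over model outputs, and L a Lipschitz constant.

Under censorship, the true outcome for some individuals is only partially observed, so such a constraint must be quantified and enforced without full label information, which is the gap the paper addresses.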
Publication Title
Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023
ISBN
9781577358800
Recommended Citation
Zhang, W., Hernandez-Boussard, T., & Weiss, J. (2023). Censored Fairness through Awareness. Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023, 37, 14611-14619. Retrieved from: https://digitalcommons.mtu.edu/michigantech-p2/45