Counterfactually Fair Representation

October 27, 2023
Abstract

The use of machine learning models in high-stakes applications (e.g., healthcare, lending, college admission) has raised growing concerns due to potential biases against protected social groups. Various fairness notions and methods have been proposed to mitigate such biases. In this work, we focus on Counterfactual Fairness (CF), a fairness notion that depends on an underlying causal graph and was first proposed by Kusner et al.; it requires that the outcome an individual receives be the same in the real world as it would be in a "counterfactual" world in which the individual belongs to another social group. Learning fair models that satisfy CF can be challenging. Kusner et al. showed that a sufficient condition for satisfying CF is to not use features that are descendants of sensitive attributes in the causal graph. This implies a simple method that learns CF models using only the non-descendants of sensitive attributes while eliminating all descendants. Although several subsequent works proposed methods that use all features to train CF models, there is no theoretical guarantee that they can satisfy CF. In contrast, this work proposes a new algorithm that trains models using all of the available features. We show theoretically and empirically that models trained with this method can satisfy CF.
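For reference, the CF criterion of Kusner et al. requires, for a predictor \hat{Y}, features X, sensitive attribute A, and exogenous background variables U of the causal model, that for every context (X = x, A = a), every outcome y, and every counterfactual group a':

P(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a) = P(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a)

The sufficient condition mentioned in the abstract suggests the simple baseline the paper contrasts against: train only on features that are not descendants of the sensitive attribute. Below is a minimal sketch of that baseline, assuming the causal graph is given as a networkx DiGraph and the data as a pandas DataFrame; the function and variable names are illustrative and not the paper's implementation.

import networkx as nx
import pandas as pd
from sklearn.linear_model import LogisticRegression

def fit_non_descendant_baseline(graph: nx.DiGraph, data: pd.DataFrame,
                                sensitive: str, target: str):
    # All features causally downstream of the sensitive attribute
    descendants = nx.descendants(graph, sensitive)
    # Keep only non-descendants, excluding the sensitive attribute and the label
    safe = [c for c in data.columns
            if c not in descendants and c not in (sensitive, target)]
    model = LogisticRegression().fit(data[safe], data[target])
    return model, safe

This baseline satisfies CF by construction but discards all descendant features; retaining those features while still guaranteeing CF is the gap the proposed algorithm addresses.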

Publication Type: Paper
Conference / Journal Name: NeurIPS 2023

BibTeX


@inproceedings{
    author = {},
    title = {Counterfactually Fair Representation},
    booktitle = {Proceedings of NeurIPS 2023},
    year = {2023}
}