Certifying Fair Predictive Models in the Face of Selection Bias

The widespread use of data-driven algorithmic decision making in crucial areas such as hiring, loan assessments, medical diagnoses, and pretrial release has raised questions about the accuracy and fairness of these algorithms. Selection bias, a prevalent data quality issue in sensitive domains, is a major obstacle to creating fair predictive models, and most existing fair predictive modeling approaches are unable to address it. To overcome this challenge, we introduce a new framework called CRAB that leverages principles of data management and query answering from inconsistent and incomplete databases to produce certifiably fair predictive models.
Assistant Professor at University of California San Diego