FARE: Diagnostics for Fair Ranking using Pairwise Error Metrics
Published in The Web Conference, 2019
In this work we propose to broaden the scope of fairness assessment, which heretofore has largely been limited to classification tasks, to include error-based fairness criteria for rankings. Our approach supports three criteria: Rank Equality, Rank Calibration, and Rank Parity, which cover a broad spectrum of fairness considerations, from proportional group representation to error-rate similarity. The underlying error metrics are formulated to be rank-appropriate, using pairwise discordance to measure prediction error in a model-agnostic fashion. On this foundation, we design a fairness auditing mechanism that captures group differences throughout the entire ranking, yielding in-depth, nuanced diagnostics. We demonstrate the efficacy of our error metrics on real-world scenarios, exposing trade-offs among fairness criteria and providing guidance for selecting fair-ranking algorithms.
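As a concrete illustration of the pairwise-discordance idea underlying these metrics, here is a minimal Python sketch. It computes, for each group, the fraction of within-group item pairs whose predicted order disagrees with the ground-truth order; comparing these per-group rates is in the spirit of the error-based criteria above. The toy data and all names are hypothetical, and this is not the paper's exact FARE formulation (see the linked PDF for the full definitions).

```python
import itertools

def discordance_rate(true_scores, predicted_rank, items):
    """Fraction of item pairs whose predicted order disagrees with
    the order implied by ground-truth scores (rank 1 = top position)."""
    discordant = total = 0
    for i, j in itertools.combinations(items, 2):
        if true_scores[i] == true_scores[j]:
            continue  # tied pairs carry no ordering information
        total += 1
        true_order = true_scores[i] > true_scores[j]
        pred_order = predicted_rank[i] < predicted_rank[j]
        if true_order != pred_order:
            discordant += 1
    return discordant / total if total else 0.0

# Hypothetical toy data: ground-truth relevance scores, a model's
# predicted ranking (1 = top), and a binary group label per item.
true_scores    = {"a": 0.9, "b": 0.7, "c": 0.6, "d": 0.4, "e": 0.2}
predicted_rank = {"a": 1, "c": 2, "b": 3, "e": 4, "d": 5}
group          = {"a": 0, "b": 1, "c": 0, "d": 1, "e": 1}

for g in (0, 1):
    members = [item for item, label in group.items() if label == g]
    rate = discordance_rate(true_scores, predicted_rank, members)
    print(f"group {g}: within-group discordance = {rate:.2f}")
```

A large gap between the per-group rates would flag the ranking for closer inspection under an error-similarity criterion such as Rank Equality; the paper's diagnostics additionally track such differences throughout the ranking rather than as a single aggregate.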
Recommended citation: Caitlin Kuhlman, MaryAnn VanValkenburg, Elke Rundensteiner. FARE: Diagnostics for Fair Ranking using Pairwise Error Metrics. The Web Conference (WWW), Web and Society track, 2019. http://web.cs.wpi.edu/~cakuhlman/publications/fare.pdf