This paper examines problems of model fairness and bias against non-majority groups, which can arise even when minority-status information is not included as a model feature. The researchers demonstrate a method for detecting such bias by comparing the ROC curves of two subgroups, called ABROCA (Absolute Between-ROC Area), and have published an R package that performs the analysis.

A few thoughts as I watched this presentation:
Disputes the sufficiency of anti-classification, classification parity, and per-subgroup calibration as fairness criteria
Fairness -> equal predictive performance across subgroups
ABROCA compares the two subgroups' full ROC curves rather than a single AUC value, since different interventions imply different operating thresholds, e.g. a cheap email nudge vs. an expensive tutor (see the first sketch after this list)
Slice plot: overlays the ROC curves of two subgroups (e.g. majority vs. non-majority) and shades the area between them, which is the ABROCA statistic
A confidence interval should be incorporated, since ABROCA computed from a finite sample is only a point estimate (see the bootstrap sketch below)
No correlation was found between predictive performance and unfairness; models can be both fair and accurate.
We need to run ABROCA on indicators and insights per subgroup (for binary predictions) during model training and selection, not merely post hoc (see the last sketch below)
R package: github.com/jpgard/abroca
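
For concreteness, ABROCA is the integral of the absolute vertical gap between the two subgroups' ROC curves: ABROCA = ∫₀¹ |ROC₁(t) − ROC₂(t)| dt, where t ranges over false positive rates. Below is a minimal from-scratch sketch in R; the function names (`roc_points`, `abroca`) and the interpolation grid are my own illustrative choices, not the jpgard/abroca package's actual API.

```r
# Minimal ABROCA sketch (illustrative, not the package's API).
# Assumes binary labels in {0, 1}, continuous scores, a two-level group factor.
roc_points <- function(score, label) {
  # Sweep thresholds from high to low to trace out the ROC curve.
  ord <- order(score, decreasing = TRUE)
  lab <- label[ord]
  data.frame(fpr = c(0, cumsum(1 - lab) / sum(1 - lab)),
             tpr = c(0, cumsum(lab) / sum(lab)))
}

abroca <- function(score, label, group) {
  idx <- split(seq_along(score), group)
  stopifnot(length(idx) == 2)  # slicing analysis compares exactly two subgroups
  r1 <- roc_points(score[idx[[1]]], label[idx[[1]]])
  r2 <- roc_points(score[idx[[2]]], label[idx[[2]]])
  # Put both curves on a common FPR grid, then integrate the absolute gap.
  grid <- seq(0, 1, length.out = 10001)
  t1 <- approx(r1$fpr, r1$tpr, xout = grid, ties = max)$y
  t2 <- approx(r2$fpr, r2$tpr, xout = grid, ties = max)$y
  mean(abs(t1 - t2))  # Riemann approximation of the area between the curves
}
```

Because the statistic integrates the absolute difference, it still registers a disparity when the two ROC curves cross, a case where a plain AUC difference would partially cancel out.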
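On the confidence-interval point: a bootstrap percentile interval is one simple way to attach uncertainty to the estimate (an assumption on my part; the presentation called for confidence intervals but I did not catch the exact procedure the authors use). This sketch reuses `abroca()` from above and resamples within each subgroup so the group sizes stay fixed.

```r
# Bootstrap percentile CI for ABROCA (a sketch; the authors' exact
# uncertainty procedure may differ). Resamples within each subgroup.
abroca_ci <- function(score, label, group, n_boot = 1000, level = 0.95) {
  stats <- replicate(n_boot, {
    i <- unlist(lapply(split(seq_along(score), group),
                       function(g) sample(g, length(g), replace = TRUE)))
    abroca(score[i], label[i], group[i])
  })
  quantile(stats, c((1 - level) / 2, 1 - (1 - level) / 2))
}
```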
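Finally, a toy illustration of using ABROCA during model selection rather than post hoc: report the fairness gap next to AUC for every candidate model before anything ships. The data are simulated placeholders (the minority group's outcome is deliberately made harder to predict from x1), and `auc_fast()` is just the Mann-Whitney rank formula; neither comes from the paper.

```r
set.seed(42)
n <- 2000
group <- factor(sample(c("majority", "minority"), n, TRUE, prob = c(.8, .2)))
x1 <- rnorm(n); x2 <- rnorm(n)
# Simulated outcome: x1 is less informative for the minority group,
# so a model leaning on x1 alone should show a larger ABROCA.
label <- rbinom(n, 1, plogis(ifelse(group == "minority", 0.4, 1.5) * x1 + 0.6 * x2))
df <- data.frame(label, x1, x2, group)
train <- sample(n, n / 2)

auc_fast <- function(score, label) {  # Mann-Whitney / rank-sum AUC
  n1 <- sum(label == 1); n0 <- sum(label == 0)
  (sum(rank(score)[label == 1]) - n1 * (n1 + 1) / 2) / (n1 * n0)
}

for (form in c("label ~ x1", "label ~ x1 + x2")) {
  fit <- glm(as.formula(form), family = binomial, data = df[train, ])
  test <- df[-train, ]
  score <- predict(fit, newdata = test, type = "response")
  cat(sprintf("%-18s AUC = %.3f  ABROCA = %.4f\n", form,
              auc_fast(score, test$label),
              abroca(score, test$label, test$group)))
}
```

Printing both numbers side by side makes the accuracy/fairness trade-off (or, per the paper's finding, the lack of one) visible at selection time.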