*Evaluating the fairness of predictive student models through slicing analysis*; Josh Gardner, Christopher Brooks, and Ryan Baker. LAK'19 Best Full Paper Award

by Elizabeth Dalton

This was one of the most significant presentations I attended at LAK'19, and I am not surprised it was selected for the Best Full Paper award. MDL-65370 (a Moodle tracker issue) was drafted in part as a response to this paper.

Citation:
Gardner, J., Brooks, C., & Baker, R. (2019). Evaluating the fairness of predictive student models through slicing analysis. Proceedings of the 9th International Conference on Learning Analytics & Knowledge (LAK '19), 225–234. https://doi.org/10.1145/3303772.3303791

My summary:

This paper examines how predictive student models can be unfair or biased against non-majority groups even when information about minority status is not included as a model feature. The researchers demonstrate a method for detecting such bias, ABROCA (Absolute Between-ROC Area), which compares the ROC curves of two subgroups, and they have published an R package to perform the analysis.
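
As I understood the definition from the presentation, ABROCA is the area between the two subgroup ROC curves, integrated over the false positive rate:

$$\mathrm{ABROCA} = \int_0^1 \left| \mathrm{ROC}_{g_1}(t) - \mathrm{ROC}_{g_2}(t) \right| \, dt$$

where $\mathrm{ROC}_g(t)$ is the model's true positive rate on subgroup $g$ at false positive rate $t$.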

A few thoughts as I watched this presentation:


      • Disputes the sufficiency of anti-classification, classification parity, and per-subgroup calibration as fairness criteria

      • Fairness -> equal predictive performance across subgroups

      • ABROCA compares entire ROC curves rather than AUC at a single threshold, because different interventions may operate at different decision thresholds (e.g. an automated email vs. a human tutor)

      • Slice plot: the two subgroup ROC curves plotted together; ABROCA is the area between the two curves (a minimal computation sketch follows these notes)

      • E.g. majority vs. non-majority group. A confidence interval should be incorporated.

      • No correlation found between performance and unfairness. Models can be both fair and accurate.

      • We need to run ABROCA analysis on indicators and insights per subgroup (for binary predictions) during model training, not merely post hoc

      • github.com/jpgard/abroca R package
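
To make the metric concrete, here is a minimal Python sketch of my own (this is not the authors' abroca R package; the function name, the grid size, and the use of scikit-learn's roc_curve are my choices). It interpolates each subgroup's ROC curve onto a shared false-positive-rate grid and integrates the absolute gap between them:

```python
import numpy as np
from sklearn.metrics import roc_curve

def abroca(y_true, y_score, group, grid_size=10_000):
    """Absolute Between-ROC Area for a binary group indicator (0/1)."""
    y_true, y_score, group = map(np.asarray, (y_true, y_score, group))
    fpr_grid = np.linspace(0.0, 1.0, grid_size)
    tprs = []
    for g in (0, 1):
        mask = group == g
        fpr, tpr, _ = roc_curve(y_true[mask], y_score[mask])
        # Interpolate this subgroup's ROC curve onto the shared FPR grid
        tprs.append(np.interp(fpr_grid, fpr, tpr))
    # Trapezoidal integration of |ROC_0(t) - ROC_1(t)| over t in [0, 1]
    return np.trapz(np.abs(tprs[0] - tprs[1]), fpr_grid)

# e.g.: abroca(y, clf.predict_proba(X)[:, 1], demographic == "majority")
```

An ABROCA of 0 means the model's ROC curves are identical for the two subgroups; larger values indicate a larger performance gap somewhere along the threshold range.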

