Bias of Machine Learning Methods in Health Care

Overview

The following is a curated list of papers discussing issues of bias and fairness in Machine Learning (ML) methods as applied in health care. Papers are grouped by broad category.

You can read a summary literature review of the broad issues here.

This work will be presented at the Toronto Data Workshop. Slides will be uploaded after the event.

What Is Algorithmic Bias in Health Care?

Vaughn, J., Baral, A., Vadari, M., & Boag, W. (2020). Dataset Bias in Diagnostic AI Systems: Guidelines for Dataset Collection and Usage.

Cheng, L., Varshney, K.R., & Liu, H. (2021). Socially Responsible AI Algorithms: Issues, Purposes, and Challenges. arXiv:2101.02032.

Dwork, C., Hardt, M., Pitassi, T., Reingold, O., & Zemel, R. (2012). Fairness Through Awareness. Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, 214-226.

Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2019). A Survey on Bias and Fairness in Machine Learning. arXiv:1908.09635.

Noorbakhsh-Sabet, N., Zand, R., Zhang, Y., & Abedi, V. (2019). Artificial Intelligence Transforms the Future of Health Care. The American Journal of Medicine, 132(7), 795-801.

Saxena, N.A., Huang, K., DeFilippis, E., Radanovic, G., Parkes, D.C., & Liu, Y. (2019). How Do Fairness Definitions Fare? Examining Public Attitudes Towards Algorithmic Definitions of Fairness. arXiv:1811.03654.

Verheij, R.A., Curcin, V., Delaney, B.C., & McGilchrist, M.M. (2018). Possible Sources of Bias in Primary Care Electronic Health Record Data Use and Reuse. Journal of Medical Internet Research, 20(5): 1-14.

Measurements of Bias and Fairness

Agarwal, A., Beygelzimer, A., Dudík, M., & Langford, J. (2017). A Reductions Approach to Fair Classification. Fairness, Accountability, and Transparency in Machine Learning (FAT/ML).

Caton, S., & Haas, C. (2020). Fairness in Machine Learning: A Survey. arXiv:2010.04053.

Chen, I.Y., Johansson, F.D., & Sontag, D. (2018). Why Is My Classifier Discriminatory? arXiv:1805.12002.

Chouldechova, A., & Roth, A. (2020). A Snapshot of the Frontiers of Fairness in Machine Learning. Communications of the ACM, 63(5), 82-89.

Liu, Y., Radanovic, G., Dimitrakakis, C., Mandal, D., & Parkes, D.C. (2017). Calibrated Fairness in Bandits. arXiv:1707.01875.

McDonald, N., & Pan, S. (2020). Intersectional AI: A Study of How Information Science Students Think about Ethics and Their Impact. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW2).

Zemel, R., Wu, Y., Swersky, K., Pitassi, T., & Dwork, C. (2013). Learning Fair Representations. Proceedings of the 30th International Conference on Machine Learning, PMLR 28(3): 325-333.

Impacts on Racialized Populations

Adamson, A.S., & Smith, A. (2018). Machine Learning and Health Care Disparities in Dermatology. JAMA Dermatology, 154(11), 1247-1248.

Sjoding, M.W., Dickson, R.P., Iwashyna, T.J., & Gay, S.E. (2020). Racial Bias in Pulse Oximetry Measurement. The New England Journal of Medicine, 383:2477-2478.

Chen, I.Y., Szolovits, P., & Ghassemi, M. (2019). Can AI Help Reduce Disparities in General Medical and Mental Health Care? AMA Journal of Ethics, 21(2): 167-179.

Boag, W., Suresh, H., Celi, L.A., Szolovits, P., & Ghassemi, M. (2019). Modeling Mistrust in End-of-Life Care. arXiv:1807.00124.

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations. Science, 366: 447-453.