January ABC Seminar: Stephen Pfohl, PhD – Google Research – “Algorithmic fairness and responsible AI for health equity”

In this talk, I aim to present insights into the design of evaluations of machine learning and AI systems that assess properties related to algorithmic fairness and health equity. I argue that such evaluations are contextual, requiring specification of the intended use, the target population, and assumptions about the data-generating process and measurement mechanism. Through examples from my research, I show that causal graphical models can serve as key tools for context specification and can aid in the understanding and appropriate use of analytical algorithmic fairness techniques. I will also discuss recent work on evaluating equity-related biases in large language models.

Speaker:  Stephen Pfohl, PhD
Affiliation: Google Research
Position:  Senior Research Scientist
Host: Anurag Vaidya, Mahmood Lab

Date: Monday January 27, 2025
Time: 4:00-5:00PM ET
Zoom: https://partners.zoom.us/j/82163676866
Meeting ID: 821 6367 6866

Research Links: Personal site | Google Scholar

Stephen Pfohl is a senior research scientist at Google Research. His work focuses on the incorporation of fairness, distribution shift, and equity considerations into the design and evaluation of machine learning systems in healthcare contexts. Stephen earned his PhD in Biomedical Informatics from Stanford University.

Algorithmic fairness in artificial intelligence for medicine and healthcare: Nature Biomedical Engineering

In healthcare, the development and deployment of insufficiently fair systems of artificial intelligence (AI) can undermine the delivery of equitable care. Assessments of AI models stratified across subpopulations have revealed inequalities in how patients are diagnosed, treated and billed. In this Perspective, we outline fairness in machine learning through the lens of healthcare, and discuss how algorithmic biases (in data acquisition, genetic variation and intra-observer labelling variability, in particular) arise in clinical workflows and the resulting healthcare disparities. We also review emerging technology for mitigating biases via disentanglement, federated learning and model explainability, and their role in the development of AI-based software as a medical device.

Chen RJ, Wang JJ, Williamson DFK, Chen TY, Lipkova J, Lu MY, Sahai S, Mahmood F. Algorithmic fairness in artificial intelligence for medicine and healthcare. Nat Biomed Eng. 2023 Jun;7(6):719-742. PMID: 37380750.