Poster in Affinity Workshop: Women in Machine Learning

Multimodal Checklists for Fair Clinical Decision Support

Qixuan Jin · Marzyeh Ghassemi


Abstract:

Machine learning algorithms trained on biased data can replicate or exacerbate those biases. Current clinical risk scoring systems embed race into the basic data used to individualize risk assessments, and algorithms that adopt these clinical decision support scores may similarly propagate the embedded biases. In this work, we focus on improving the fairness of clinical decision support checklist models in a multimodal setting. Our previous work established that medical checklists can be learned directly from health data with fairness constraints, e.g., the false positive rate for any subgroup, such as Black women, should not exceed that of any other group by more than 20%. That initial work focused purely on tabular data. However, medical data is inherently multimodal, and the fusion of multiple data sources such as vitals, labs, and clinical notes can be essential for training intervention prediction models. Other work has demonstrated that multimodal learning can be difficult for deep neural networks, which greedily over-optimize for a single input stream. In contrast to such high-capacity models, we investigate the behavior of multimodal fusion in the relatively simpler setting of checklist models. [See full abstract in pdf]
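As a concrete illustration of the fairness constraint above, the sketch below is a hypothetical Python example, not the authors' implementation: it scores a simple threshold checklist and checks whether each subgroup's false positive rate stays within 20% of every other group's. The helper names (`checklist_predict`, `false_positive_rate`, `satisfies_fpr_constraint`), the unit-weight thresholding rule, and the relative reading of the 20% tolerance are all assumptions made for illustration.

```python
import numpy as np

def checklist_predict(X_binary, threshold):
    # A checklist predicts positive when at least `threshold` of its
    # binary items are checked (unit item weights are an assumption here).
    return (X_binary.sum(axis=1) >= threshold).astype(int)

def false_positive_rate(y_true, y_pred):
    # Fraction of true negatives that the model flags as positive.
    negatives = y_true == 0
    if negatives.sum() == 0:
        return 0.0
    return float(((y_pred == 1) & negatives).sum() / negatives.sum())

def satisfies_fpr_constraint(y_true, y_pred, groups, tolerance=0.20):
    # One plausible reading of the constraint in the abstract: no
    # subgroup's FPR may exceed any other subgroup's by more than 20%.
    fprs = [false_positive_rate(y_true[groups == g], y_pred[groups == g])
            for g in np.unique(groups)]
    return max(fprs) <= (1.0 + tolerance) * min(fprs)

# Illustrative usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 5))   # 5 binary checklist items
y = rng.integers(0, 2, size=200)        # true intervention labels
g = rng.integers(0, 2, size=200)        # subgroup membership
y_hat = checklist_predict(X, threshold=3)
print(satisfies_fpr_constraint(y, y_hat, g))
```

In the actual approach described, such a constraint would be imposed during checklist learning rather than checked after the fact; the post-hoc check here is only meant to make the 20% condition concrete.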
