Poster
in
Workshop: Medical Imaging meets NeurIPS

Learning SimCLR Representations for Improving Melanoma Whole Slide Images Classification Model Generalization

Yang Jiang · Sean Grullon · Corey Chivers · Vaughn Spurrier · Jiayi Zhao · Julianna Ianni


Abstract:

Contrastive self-supervised learning has emerged in digital pathology as a way to leverage unlabeled data to learn domain-invariant representations of pathology images. However, downstream models trained on these representations often fail to generalize to out-of-distribution (OOD) domains because of scanner, stain, and other site-specific sources of variation. We investigate design choices in contrastive self-supervised learning that improve downstream model generalization. Specifically, we evaluate how the choice of augmentations and the training duration during SimCLR training affect the learning of task-specific, domain-invariant features. The trained SimCLR feature extractors were then evaluated on downstream melanoma classification. The results show that optimizing SimCLR improves out-of-distribution melanoma detection by 21% in classification accuracy and 56% in sensitivity. The improved OOD performance can benefit melanoma patient care.
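The SimCLR objective mentioned above is the NT-Xent (normalized temperature-scaled cross-entropy) loss: two augmented views of the same slide patch form a positive pair, and all other views in the batch serve as negatives. As a minimal sketch (in NumPy, not the authors' code; the pairing convention and temperature value are illustrative assumptions):

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent loss as used in SimCLR.

    z: array of shape (2N, d) -- embeddings of N images, each embedded twice
       under two random augmentations; rows 2k and 2k+1 are a positive pair
       (a pairing convention assumed here for illustration).
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize embeddings
    sim = z @ z.T / temperature                       # scaled cosine similarities
    n = z.shape[0]
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    pos = np.arange(n) ^ 1                            # positive partner: 2k <-> 2k+1
    # cross-entropy of each row's softmax, evaluated at the positive index
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(n), pos] - logsumexp)
    return loss.mean()
```

Stronger or weaker augmentations change how hard the positive pairs are to match, which is one lever the study varies; the loss itself stays fixed.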