

Poster

The Cells Out of Sample (COOS) dataset and benchmarks for measuring out-of-sample generalization of image classifiers

Alex Lu · Amy Lu · Wiebke Schormann · Marzyeh Ghassemi · David Andrews · Alan Moses

East Exhibition Hall B, C #124

Keywords: [ Data, Challenges, Implementations, and Software ] [ Data Sets or Data Repositories ] [ Algorithms -> Classification; Applications -> Computational Biology and Bioinformatics; Applications ] [ Computer Vision; Applic ]


Abstract:

Understanding whether classifiers generalize to out-of-sample datasets is a central problem in machine learning. Microscopy images provide a standardized way to measure the generalization capacity of image classifiers, as we can image the same classes of objects under increasingly divergent, but controlled, factors of variation. We created a public dataset of 132,209 images of mouse cells, COOS-7 (Cells Out Of Sample 7-Class). COOS-7 provides a classification setting in which four test datasets exhibit increasing degrees of covariate shift: some images are random subsets of the training data, while others come from experiments reproduced months later and imaged on different instruments. We benchmarked a range of classification models using different representations, including transferred neural network features, end-to-end classification with a supervised deep CNN, and features from a self-supervised CNN. While most classifiers perform well on test datasets similar to the training dataset, all classifiers failed to generalize to the datasets with greater covariate shift. These baselines highlight the challenges covariate shift poses for image data, and establish metrics for improving the generalization capacity of image classifiers.
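The evaluation protocol described above (one training set, several test sets with progressively larger covariate shift) can be sketched in miniature. This is an illustrative toy, not the paper's pipeline: synthetic Gaussian feature blobs stand in for COOS-7 image features, a global offset per test set stands in for instrument/batch effects, and a nearest-centroid classifier stands in for the benchmarked models.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, dim, n_per = 7, 16, 200  # 7 classes, toy feature dimension
means = rng.normal(0.0, 3.0, size=(n_classes, dim))

def sample(shift):
    """Draw features for each class; `shift` models a batch/instrument effect."""
    X = np.vstack([m + shift + rng.normal(0.0, 1.0, (n_per, dim)) for m in means])
    y = np.repeat(np.arange(n_classes), n_per)
    return X, y

X_tr, y_tr = sample(0.0)
# Nearest-centroid classifier as a minimal stand-in for the paper's models
centroids = np.vstack([X_tr[y_tr == c].mean(axis=0) for c in range(n_classes)])

def accuracy(X, y):
    pred = np.argmin(((X[:, None, :] - centroids) ** 2).sum(-1), axis=1)
    return (pred == y).mean()

# Four test sets with increasing covariate shift, mimicking the COOS-7 setup
accs = []
for scale in (0.0, 0.5, 1.0, 2.0):
    X_te, y_te = sample(rng.normal(0.0, scale, size=dim))
    accs.append(accuracy(X_te, y_te))
    print(f"shift scale {scale}: accuracy {accs[-1]:.3f}")
```

Accuracy on the unshifted test set is near-perfect, and degrades as the shift grows, reproducing in toy form the failure mode the benchmarks measure.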
