
UniverSeg: Universal Medical Image Segmentation
Victor Butoi · Jose Javier Gonzalez Ortiz · Tianyu Ma · John Guttag · Mert Sabuncu · Adrian Dalca

While deep learning models are widely used in medical image segmentation, they are typically not designed to generalize to unseen segmentation tasks involving new anatomies, image modalities, or labels. Generally, given a new segmentation task, researchers will design and train a new model or fine-tune existing models. This is time-consuming, even for machine learning researchers, and poses a substantial barrier for clinical researchers, who often lack the resources or expertise to train new models. In this paper, we present a model that can solve new unseen medical segmentation tasks in a single forward pass at inference without retraining or fine-tuning. Our task-amortization model, UniverSeg, can segment a wide range of datasets as well as generalize to new ones. A UniverSeg network takes as input the target image to be segmented and a small set of example images and label maps representing the desired task and outputs a segmentation map. We train the proposed model on a large collection of over 85 medical imaging datasets with varying anatomies and modalities. This encourages the model to be task-agnostic and instead learn to transfer the relevant information from the example set to the target image, enabling segmentation even in tasks unseen during training. In preliminary experiments, we find that using only one trained UniverSeg model to segment previously unseen tasks can achieve performance close to that of models specifically trained on those new tasks.
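To make the described interface concrete: a UniverSeg-style model maps a target image plus a small support set of example image/label pairs to a segmentation map, with the support set alone defining the task. The sketch below illustrates only that input/output contract — the similarity-weighted voting inside `universeg_predict` is a hypothetical stand-in for the trained network described in the paper, not the actual method.

```python
import numpy as np

def universeg_predict(target, support_images, support_labels):
    """Toy stand-in for a UniverSeg-style forward pass.

    target:         (H, W) image to segment
    support_images: (S, H, W) example images defining the task
    support_labels: (S, H, W) binary label maps for those examples

    The real model is a learned network; here we merely weight each
    support label map by its image's similarity to the target, to
    illustrate the task-conditioned interface (hypothetical heuristic).
    """
    # per-example mean squared difference from the target image
    diffs = np.mean((support_images - target[None]) ** 2, axis=(1, 2))
    weights = np.exp(-diffs)
    weights /= weights.sum()
    # similarity-weighted vote over support labels -> soft prediction
    soft = np.tensordot(weights, support_labels, axes=1)
    return (soft > 0.5).astype(np.uint8)  # (H, W) binary mask

# Usage: one "unseen task" defined by S=4 example pairs
rng = np.random.default_rng(0)
support_x = rng.random((4, 32, 32))
support_y = (support_x > 0.5).astype(np.uint8)
target = rng.random((32, 32))
mask = universeg_predict(target, support_x, support_y)
```

Note that no task identity is passed in: swapping the support set for examples of a different anatomy or modality changes the task without retraining, which is the amortization property the abstract describes.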

Author Information

Victor Butoi (MIT)
Jose Javier Gonzalez Ortiz (MIT)
Tianyu Ma (Cornell University)
John Guttag (Massachusetts Institute of Technology)
Mert Sabuncu (Cornell)
Adrian Dalca (MIT, HMS)
