Affinity Workshop: Women in Machine Learning

Probabilistic Interactive Segmentation for Medical Images

Hallee Wong · John Guttag · Adrian Dalca


Deep learning models have been very successful at medical imaging tasks such as segmentation and registration. However, training these models requires substantial amounts of labeled data, most often annotated manually. Segmenting new medical images to create labeled training data is a tedious and time-consuming process for human annotators, particularly for 3D modalities consisting of many image slices. Existing frameworks for interactive segmentation have focused on minimizing initial user interaction and on training domain-specific models with limited generalizability. Most interactive segmentation systems have two stages: first, the user provides initial input to seed a rough predicted segmentation, and then they provide additional feedback to refine the segmentation over multiple iterations. When segmenting objects or modalities not seen in the training data, these systems may require the user to make many corrections to clarify their target, for example when the initial predicted segmentation focuses on the wrong object or when the user wishes to segment multiple noncontiguous objects. We propose a probabilistic interactive segmentation system to help human annotators quickly and accurately segment new medical images. At each iteration, the system takes as input the image and the partial segmentation completed so far, and probabilistically predicts a next step for the segmentation, i.e., a larger partial segmentation. We focus on predicting several possible segmentations, enabling the user to quickly choose the correct next step in ambiguous situations.
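The iterative loop described above can be sketched minimally as follows. This is a hypothetical illustration, not the authors' implementation: `propose_next_steps` stands in for the probabilistic model (here it simply labels a few random unlabeled pixels per candidate, whereas a real model would score plausible extensions given the image), and `choose` stands in for the user selecting among candidates.

```python
import random

def propose_next_steps(partial_seg, k=3, step=3, rng=None):
    # Hypothetical stand-in for the probabilistic model: each of the k
    # candidates extends the current partial segmentation by labeling a
    # few more still-unlabeled pixels. A real model would propose
    # plausible extensions conditioned on the input image.
    rng = rng or random.Random(0)
    unlabeled = [i for i, v in enumerate(partial_seg) if v == 0]
    candidates = []
    for _ in range(k):
        cand = list(partial_seg)
        for i in rng.sample(unlabeled, min(step, len(unlabeled))):
            cand[i] = 1
        candidates.append(cand)
    return candidates

def interactive_loop(n_pixels, choose, k=3, max_iters=50):
    # Start from an empty segmentation; at each iteration present k
    # candidate next steps and let the user-supplied `choose` callback
    # pick one, until every pixel has been labeled.
    seg = [0] * n_pixels
    for _ in range(max_iters):
        if all(seg):
            break
        seg = choose(propose_next_steps(seg, k=k))
    return seg
```

For example, `interactive_loop(10, lambda cands: cands[0])` simulates a user who always accepts the first candidate; in an annotation tool, the callback would instead reflect a human click. Presenting several candidates per iteration is what lets the user disambiguate the target quickly when the model's top prediction focuses on the wrong object.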
