Modern 3D medical image segmentation is typically performed with a sliding-window approach due to GPU memory constraints. This creates a trade-off between the amount of global context the network sees at once and the proportion of foreground voxels available in each training sample. It is already known that U-Nets perform worse with limited global context, but enlarging the context comes at the cost of severe class imbalance during training between the background (typically very large) and the foreground (much smaller). In this abstract, we analyze the behavior of Transformer-based (UNETR) and attention-gated (Attention U-Net) models, along with vanilla U-Nets, across this trade-off. We demonstrate our results on a synthetic dataset and a subset of the spleen segmentation dataset from the Medical Segmentation Decathlon. Beyond showing that all three network types prefer more global context over larger foreground-to-background ratios, we find that UNETR and Attention U-Net appear less robust than the vanilla U-Net to drifts between training and test foreground ratios.
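The trade-off described above can be sketched numerically: as the training patch (sliding window) grows, the foreground-to-background ratio shrinks rapidly. Below is a minimal NumPy illustration on a synthetic label volume; the volume size, the centered cubic "organ", and the `foreground_ratio` helper are assumptions for illustration, not the paper's actual setup.

```python
import numpy as np

# Synthetic 64^3 label volume with a small cubic "organ" as foreground
# (8^3 = 512 foreground voxels, centered in the volume).
volume = np.zeros((64, 64, 64), dtype=np.uint8)
volume[28:36, 28:36, 28:36] = 1

def foreground_ratio(label_volume, patch_size, center=(32, 32, 32)):
    """Fraction of foreground voxels in a cubic patch of the given
    edge length centered on `center` (clipped to the volume bounds)."""
    half = patch_size // 2
    slices = tuple(
        slice(max(c - half, 0), min(c + half, s))
        for c, s in zip(center, label_volume.shape)
    )
    return label_volume[slices].mean()

# Larger patches give more global context but a far smaller
# foreground fraction per training sample.
for size in (8, 16, 32, 64):
    print(size, foreground_ratio(volume, size))
# 8  -> 1.0
# 16 -> 0.125
# 32 -> 0.015625
# 64 -> ~0.002
```

Even in this toy case the foreground fraction falls by roughly the cube of the patch-size ratio, which is the class-imbalance cost the abstract refers to.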
Amith Kamath (University of Bern)
My name is Amith (meaning ‘infinite’ in Sanskrit) and I enjoy investigating problems in image analysis and building tools to solve them. I like mathematics as applied to gaining a better understanding of what we see (naturally, or otherwise). I am currently a doctoral student with Prof. Mauricio Reyes in the Medical Image Analysis lab at the ARTORG Center, learning more about pixel-level segmentation using deep learning models and its robustness in clinical settings as applied to radiotherapy planning. Earlier, I learnt vision and robotics at Georgia Tech remotely, and wrote a master’s dissertation at Minnesota focusing on reducing MRI acquisition times while maintaining accurate orientation measurement of white matter fibers in our brains. Along the way, I wrote code for image/vision at MathWorks, built technical content for undergraduate courses, and ran interactive workshops/seminars across India, all in the broad areas of computer science, biomedical engineering, and mathematics.
Yannick Suter (ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland)
Suhang You (ARTORG, University of Bern)
Michael Mueller (ARTORG Center for Biomedical Engineering Research, University of Bern)
Jonas Willmann (University Hospital Zurich, University of Zurich)
Nicolaus Andratschke (University Hospital Zurich, University of Zurich)
Mauricio Reyes (University of Bern)