

Poster
in
Workshop: Medical Imaging meets NeurIPS

Enhancing Annotator Efficiency: Automated Partitioning of a Lung Ultrasound Dataset by View

Bennett VanBerlo · Delaney Smith · Jared Tschirhart · Blake VanBerlo · Derek Wu · Alex Ford · Joseph McCauley · Benjamin Wu · Rushil Chaudhary · Chintan Dave · Jordan Ho · Jason Deglint · Brian Li · Robert Arntfield


Abstract:

Annotating large medical imaging datasets is an arduous and expensive task, especially when distinct labels must be applied to disjoint subsets of a dataset. When collecting lung ultrasound (LUS) data, the transducer probe is placed in one of two locations on the chest, resulting in clips from two distinct views. Each of these views interrogates different anatomic areas of the lungs and must be annotated for separate and distinct pathological features. In this work, we propose a method that exploits this implicit hierarchical organization to optimize annotator efficiency. Specifically, we trained a machine learning model to accurately distinguish between LUS views using an annotated training set of 2908 clips. In a downstream expository view-specific annotation task, we found that automatically partitioning a 780-clip dataset by view saved 42 minutes of manual annotation time and yielded 55 ± 6 additional relevant labels per hour. We propose that this method can be applied to other hierarchical annotation schemes.
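The partitioning step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the view names (`view_a`, `view_b`), clip identifiers, and the confidence threshold for deferring uncertain clips to manual review are all assumptions, and the per-clip probabilities would in practice come from the trained view classifier.

```python
def partition_by_view(predictions, threshold=0.9):
    """Route each clip into a view-specific annotation queue.

    `predictions` is an iterable of (clip_id, p_view_a) pairs, where
    p_view_a is the classifier's probability that the clip shows the
    first view. Clips the model is unsure about (probability between
    1 - threshold and threshold) are deferred to a manual-review queue
    rather than risking a mispartitioned annotation task.
    """
    queues = {"view_a": [], "view_b": [], "review": []}
    for clip_id, p_view_a in predictions:
        if p_view_a >= threshold:
            queues["view_a"].append(clip_id)
        elif p_view_a <= 1 - threshold:
            queues["view_b"].append(clip_id)
        else:
            queues["review"].append(clip_id)
    return queues


# Hypothetical classifier outputs for three clips.
preds = [("clip_001", 0.98), ("clip_002", 0.03), ("clip_003", 0.55)]
queues = partition_by_view(preds)
```

With this routing in place, each annotator receives only clips from the view relevant to their labelling task, which is the source of the reported time savings.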
