
OLIVES Dataset: Ophthalmic Labels for Investigating Visual Eye Semantics
Mohit Prabhushankar · Kiran Kokilepersaud · Yash-yee Logan · Stephanie Trejo Corona · Ghassan AlRegib · Charles Wykoff

Thu Dec 01 02:00 PM -- 04:00 PM (PST) @ Hall J #1019

Clinical diagnosis of the eye is performed over multifarious data modalities including scalar clinical labels, vectorized biomarkers, two-dimensional fundus images, and three-dimensional Optical Coherence Tomography (OCT) scans. Clinical practitioners use all available data modalities for diagnosing and treating eye diseases like Diabetic Retinopathy (DR) or Diabetic Macular Edema (DME). Enabling the use of machine learning algorithms within the ophthalmic medical domain requires research into the relationships and interactions between all relevant data over a treatment period. Existing datasets are limited in that they neither provide all of these modalities nor consider explicit relationship modeling between them. In this paper, we introduce the Ophthalmic Labels for Investigating Visual Eye Semantics (OLIVES) dataset, which addresses the above limitation. This is the first OCT and near-IR fundus dataset that includes clinical labels, biomarker labels, disease labels, and time-series patient treatment information from associated clinical trials. The dataset consists of 1268 near-IR fundus images, each with at least 49 OCT scans and 16 biomarkers, along with 4 clinical labels and a disease diagnosis of DR or DME. In total, there are data from 96 eyes collected over a period of at least two years, with each eye treated for an average of 66 weeks and 7 injections. We demonstrate the utility of the OLIVES dataset for ophthalmic data and provide benchmarks and concrete research directions for core and emerging machine learning paradigms within medical image analysis.
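The per-eye, per-visit structure described in the abstract could be modeled with a record like the following minimal sketch. All field names and types here are assumptions chosen for illustration, not the dataset's actual schema or API:

```python
from dataclasses import dataclass
from typing import Dict, List


# Hypothetical per-visit record for one eye in OLIVES, reflecting only the
# counts stated in the abstract: one near-IR fundus image, at least 49 OCT
# scans, 16 biomarker labels, 4 clinical labels, and a DR/DME diagnosis.
@dataclass
class OlivesVisit:
    fundus_image: str            # path to the near-IR fundus image
    oct_scans: List[str]         # paths to the >= 49 OCT scans for this visit
    biomarkers: List[int]        # 16 biomarker labels
    clinical_labels: Dict[str, float]  # 4 scalar clinical labels
    diagnosis: str               # "DR" or "DME"
    week: int                    # treatment week, for time-series modeling

    def validate(self) -> bool:
        """Check this record against the per-visit counts in the abstract."""
        return (
            len(self.oct_scans) >= 49
            and len(self.biomarkers) == 16
            and len(self.clinical_labels) == 4
            and self.diagnosis in ("DR", "DME")
        )
```

Grouping a list of such records by eye and sorting by `week` would recover the longitudinal treatment trajectory the paper emphasizes.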

Author Information

Mohit Prabhushankar (Georgia Institute of Technology)

Mohit Prabhushankar received his Ph.D. degree in electrical engineering from the Georgia Institute of Technology (Georgia Tech), Atlanta, Georgia, 30332, USA, in 2021. He is currently a Postdoctoral Research Fellow in the School of Electrical and Computer Engineering at the Georgia Institute of Technology in the Omni Lab for Intelligent Visual Engineering and Science (OLIVES). His research spans machine learning, active learning, and robust and explainable AI. He received the Best Paper Award at ICIP 2019 and the Top Viewed Special Session Paper Award at ICIP 2020. He is also the recipient of the ECE Outstanding Graduate Teaching Award, the CSIP Research Award, and the Roger P. Webb ECE Graduate Research Assistant Excellence Award, all in 2022.

Kiran Kokilepersaud (Georgia Institute of Technology)
Yash-yee Logan (Georgia Institute of Technology)
Yash-yee Logan

My PhD research primarily focuses on human-in-the-loop, multi-modal deep learning applications applied to image and video processing. Specifically, my work integrates expert insights in the form of metadata into the deep learning framework to guide the decision-making of neural networks for medical and autonomous vehicle applications. I am driven and hardworking, and I complete all projects on time to an excellent standard. I also have excellent interpersonal and communication skills. You can find out more about the work my lab and I do here: https://ghassanalregib.info/

Stephanie Trejo Corona (Rice University)
Ghassan AlRegib (Georgia Institute of Technology)
Charles Wykoff
