A major challenge faced during the pandemic has been the prediction of survival and the risk of additional injuries in individual patients, which requires significant clinical expertise and additional resources to avoid further complications. In this study we propose COVID-Net Biochem, an explainability-driven framework for building machine learning models that predict patient survival and the chance of developing kidney injury during hospitalization from clinical and biochemistry data in a transparent and systematic manner. In the first "clinician-guided initial design" phase, we prepared a benchmark dataset of carefully selected clinical and biochemistry data based on clinician assessment, curated from a cohort of 1366 patients at Stony Brook University. A collection of machine learning models spanning gradient-based boosting tree architectures and deep transformer architectures was designed and trained specifically for survival and kidney injury prediction based on the carefully selected clinical and biochemical markers. In the second "explainability-driven design refinement" phase, we harnessed explainability methods not only to gain a deeper understanding of the decision-making process of the individual models, but also to quantify the overall impact of the individual clinical and biochemical markers and identify potential biases. These explainability outcomes were further analyzed by a clinician with over eight years of experience to assess the clinical validity of the decisions made. The insights gained, alongside the associated clinical feedback, were then leveraged to guide and revise the training policies and architectural design in an iterative manner, improving not only prediction performance but also the clinical validity and trustworthiness of the final machine learning models. Using the proposed explainability-driven framework, we achieved 97.4% accuracy in survival prediction and 96.7% accuracy in predicting kidney injury complications, with the models made available in an open-source manner. While not a production-ready solution, the ultimate goal of this study is to act as a catalyst for clinical scientists, machine learning researchers, and citizen scientists to develop innovative and trustworthy clinical decision support solutions that help clinicians around the world manage the continuing pandemic.
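The two-phase workflow described in the abstract can be sketched in a minimal form: train a gradient-boosted tree model on tabular clinical/biochemical markers, then use a model-agnostic explainability method (here, permutation importance as a stand-in for the paper's explainability methods) to rank which markers drive predictions. Note that the marker names and synthetic data below are illustrative assumptions, not the actual COVID-Net Biochem benchmark or model implementation.

```python
# Sketch of the two-phase design process with scikit-learn.
# Assumptions: placeholder marker names and synthetic labels;
# the real study uses curated clinical data from 1366 patients.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical clinical/biochemical markers (names are placeholders).
feature_names = ["age", "creatinine", "d_dimer", "lymphocyte_count"]
X = rng.normal(size=(n, len(feature_names)))
# Synthetic survival label driven mainly by the first two markers.
y = (0.8 * X[:, 0] + 1.2 * X[:, 1]
     + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Phase 1 ("clinician-guided initial design"): train a
# gradient-boosted tree model on the selected tabular markers.
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
acc = model.score(X_te, y_te)

# Phase 2 ("explainability-driven design refinement"): quantify
# each marker's impact on held-out predictions, so a clinician can
# check whether the model relies on clinically valid signals.
imp = permutation_importance(model, X_te, y_te,
                             n_repeats=10, random_state=0)
ranked = sorted(zip(feature_names, imp.importances_mean),
                key=lambda t: -t[1])
print(f"test accuracy: {acc:.2f}")
for name, score in ranked:
    print(f"{name}: {score:.3f}")
```

In the full framework these importance rankings, together with clinician feedback, would feed back into revised training policies and architecture choices in an iterative loop.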
Author Information
Hossein Aboutalebi (University of Waterloo)
Maya Pavlova (University of Waterloo)
Mohammad Javad Shafiee (University of Waterloo)
Adrian Florea
Andrew Hryniowski (DarwinAI / University of Waterloo)
Alexander Wong (University of Waterloo)
More from the Same Authors
- 2020: COVIDNet-S: SARS-CoV-2 lung disease severity grading of chest X-rays using deep convolutional neural networks (Alexander Wong)
- 2021: COVID-Net Clinical ICU: Enhanced Prediction of ICU Admission for COVID-19 Patients via Explainability and Trust Quantification (Audrey Chung · Mahmoud Famouri · Andrew Hryniowski · Alexander Wong)
- 2021: Graph Convolutional Networks for Multi-modality Movie Scene Segmentation (Yaoxin Li · Alexander Wong · Mohammad Javad Shafiee)
- 2021: MAPLE: Microprocessor A Priori for Latency Estimation (Saad Abbasi · Alexander Wong · Mohammad Javad Shafiee)
- 2022: Faster Attention Is What You Need: A Fast Self-Attention Neural Network Backbone Architecture for the Edge via Double-Condensing Attention Condensers (Alexander Wong · Mohammad Javad Shafiee · Saad Abbasi · Saeejith Nair · Mahmoud Famouri)
- 2022: A Fair Loss Function for Network Pruning (Robbie Meyer · Alexander Wong)
- 2022: COVIDx CT-3: A Large-scale, Multinational, Open-Source Benchmark Dataset for Computer-aided COVID-19 Screening from Chest CT Images (Hayden Gunraj · Tia Tuinstra · Alexander Wong)
- 2022: COVIDx CXR-3: A Large-Scale, Open-Source Benchmark Dataset of Chest X-ray Images for Computer-Aided COVID-19 Diagnostics (Maya Pavlova · Tia Tuinstra · Hossein Aboutalebi · Andy Zhao · Hayden Gunraj · Alexander Wong)
- 2021: Live Q&A session: MAPLE: Microprocessor A Priori for Latency Estimation (Saad Abbasi · Alexander Wong · Mohammad Javad Shafiee)
- 2021: Contributed Talk (Oral): MAPLE: Microprocessor A Priori for Latency Estimation (Saad Abbasi · Alexander Wong · Mohammad Javad Shafiee)
- 2020: Lightning Talk 1: Insights into Fairness through Trust: Multi-scale Trust Quantification for Financial Deep Learning (Alexander Wong · Andrew Hryniowski · Xiao Yu Wang)
- 2019: Coffee Break & Poster Session 2 (Juho Lee · Yoonho Lee · Yee Whye Teh · Raymond A. Yeh · Yuan-Ting Hu · Alex Schwing · Sara Ahmadian · Alessandro Epasto · Marina Knittel · Ravi Kumar · Mohammad Mahdian · Christian Bueno · Aditya Sanghi · Pradeep Kumar Jayaraman · Ignacio Arroyo-Fernández · Andrew Hryniowski · Vinayak Mathur · Sanjay Singh · Shahrzad Haddadan · Vasco Portilheiro · Luna Zhang · Mert Yuksekgonul · Jhosimar Arias Figueroa · Deepak Maurya · Balaraman Ravindran · Frank NIELSEN · Philip Pham · Justin Payan · Andrew McCallum · Jinesh Mehta · Ke SUN)
- 2018: Poster presentations (Simon Wiedemann · Huan Wang · Ivan Zhang · Chong Wang · Mohammad Javad Shafiee · Rachel Manzelli · Wenbing Huang · Tassilo Klein · Lifu Zhang · Ashutosh Adhikari · Faisal Qureshi · Giuseppe Castiglione)