Poster
DeepPINK: reproducible feature selection in deep neural networks
Yang Lu · Yingying Fan · Jinchi Lv · William Stafford Noble

Tue Dec 04 07:45 AM -- 09:45 AM (PST) @ Room 210 #81

Deep learning has become increasingly popular in both supervised and unsupervised machine learning thanks to its outstanding empirical performance. However, because of their intrinsic complexity, most deep learning methods are largely treated as black-box tools with little interpretability. Even though recent attempts have been made to facilitate the interpretability of deep neural networks (DNNs), existing methods are susceptible to noise and lack robustness. Therefore, scientists are justifiably cautious about the reproducibility of discoveries, which is often related to the interpretability of the underlying statistical models. In this paper, we describe a method to increase the interpretability and reproducibility of DNNs by incorporating the idea of feature selection with a controlled error rate. By designing a new DNN architecture and integrating it with the recently proposed knockoffs framework, we perform feature selection with a controlled error rate while maintaining high power. This new method, DeepPINK (Deep feature selection using Paired-Input Nonlinear Knockoffs), is applied to both simulated and real data sets to demonstrate its empirical utility.
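The abstract does not spell out the architecture, so the following is only a rough illustrative sketch of the general idea of pairing each feature with its knockoff copy before a shared network, followed by the standard knockoff+ filter for error-rate control. The class name PairedInputKnockoffNet, the per-feature two-weight filter, and the simple importance statistic W_j = |w_orig_j| - |w_knock_j| are assumptions made for illustration, not the paper's exact construction; only the thresholding step follows the well-known knockoffs selection rule.

```python
import torch
import torch.nn as nn

class PairedInputKnockoffNet(nn.Module):
    """Illustrative paired-input design (assumption, not the paper's exact
    architecture): each feature x_j and its knockoff xk_j share a per-feature
    two-weight filter whose learned weights are later compared to form
    knockoff statistics W_j."""

    def __init__(self, p, hidden=64):
        super().__init__()
        # One weight for the original and one for the knockoff of each feature.
        self.w_orig = nn.Parameter(0.1 * torch.randn(p))
        self.w_knock = nn.Parameter(0.1 * torch.randn(p))
        self.mlp = nn.Sequential(
            nn.Linear(p, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, x_knockoff):
        # Combine each feature with its knockoff before the shared MLP.
        z = self.w_orig * x + self.w_knock * x_knockoff
        return self.mlp(z)


def knockoff_select(w_stats, q=0.1):
    """Standard knockoff+ filter: select features whose statistic exceeds the
    smallest threshold t with estimated FDP <= q."""
    ts = torch.sort(torch.abs(w_stats[w_stats != 0])).values
    for t in ts:
        fdp = (1 + (w_stats <= -t).sum()) / max((w_stats >= t).sum().item(), 1)
        if fdp <= q:
            return (w_stats >= t).nonzero(as_tuple=True)[0]
    return torch.tensor([], dtype=torch.long)


# After training, a simple (illustrative) statistic compares the paired weights:
# model = PairedInputKnockoffNet(p); ... train on (X, X_knockoff, y) ...
# w_stats = model.w_orig.abs() - model.w_knock.abs()
# selected = knockoff_select(w_stats.detach(), q=0.1)
```

Intuitively, a knockoff feature carries no signal beyond what the originals provide, so a feature whose original weight clearly dominates its knockoff weight (large positive W_j) is evidence of a real effect, and the knockoff+ threshold keeps the proportion of false selections below the target level q.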

Author Information

Yang Lu (University of Washington)
Yingying Fan (University of Southern California)
Jinchi Lv (University of Southern California)
William Stafford Noble (University of Washington)
