
Neural Networks with Recurrent Generative Feedback
Yujia Huang · James Gornet · Sihui Dai · Zhiding Yu · Tan Nguyen · Doris Tsao · Anima Anandkumar

Wed Dec 09 09:00 AM -- 11:00 AM (PST) @ Poster Session 3 #937

Neural networks are vulnerable to input perturbations such as additive noise and adversarial attacks. In contrast, human perception is much more robust to such perturbations. The Bayesian brain hypothesis states that human brains use an internal generative model to update the posterior beliefs of the sensory input. This mechanism can be interpreted as a form of self-consistency between the maximum a posteriori (MAP) estimation of an internal generative model and the external environment. Inspired by this hypothesis, we enforce self-consistency in neural networks by incorporating generative recurrent feedback. We instantiate this design on convolutional neural networks (CNNs). The proposed framework, termed Convolutional Neural Networks with Feedback (CNN-F), introduces generative feedback with latent variables into existing CNN architectures, where consistent predictions are made through alternating MAP inference under a Bayesian framework. In our experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
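The alternating-inference idea in the abstract can be illustrated with a deliberately simplified sketch. The toy model below is not the paper's CNN-F architecture: it replaces the convolutional encoder and the generative decoder with a single shared linear map, and the names (`feedforward`, `generative_feedback`, `cnn_f_inference`, the `step` mixing weight) are hypothetical. It only shows the loop structure: a bottom-up pass infers a latent code, a top-down pass reconstructs the input that code implies, and the current input estimate is nudged toward that reconstruction while staying anchored to the observation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shared weights: a linear "encoder" whose transpose acts as the
# generative "decoder" (a stand-in for the CNN / generative-feedback pair).
W = rng.standard_normal((4, 8)) * 0.1  # maps 8-d input to 4-d latent code

def feedforward(x):
    # Bottom-up pass: infer a latent code from the (possibly corrupted) input.
    return W @ x

def generative_feedback(z):
    # Top-down pass: reconstruct the input the latent code "expects" to see.
    return W.T @ z

def cnn_f_inference(x_obs, n_iters=10, step=0.5):
    # Alternate bottom-up and top-down passes, moving the input estimate
    # toward the generative reconstruction while staying anchored to the
    # observation -- a crude analogue of MAP self-consistency.
    x = x_obs.copy()
    for _ in range(n_iters):
        z = feedforward(x)                      # infer latent code
        x_hat = generative_feedback(z)          # reconstruct input
        x = (1 - step) * x_obs + step * x_hat   # blend with observation
    return feedforward(x)                       # final latent prediction

x_clean = rng.standard_normal(8)
x_noisy = x_clean + 0.3 * rng.standard_normal(8)
z = cnn_f_inference(x_noisy)
print(z.shape)  # latent prediction, shape (4,)
```

In the actual paper the top-down pass is a structured generative model over CNN latent variables and the alternating updates are derived as MAP inference; this sketch only conveys the feedforward/feedback loop.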

Author Information

Yujia Huang (Caltech)
James Gornet (Caltech)

While molecular and genetic mechanisms involved in many neurological phenomena—such as learning and memory—have been largely uncovered, a thorough, comprehensive theory of how these mechanisms interact to create specific brain functions—such as vision and spatial navigation—has been elusive. This can be partially attributed to the lack of technology necessary to study and understand how microscopic aspects such as synaptic dynamics and neuronal organization impact macroscopic functions. Nevertheless, an understanding of these underlying mechanisms is becoming increasingly important for understanding the nervous system and designing targeted treatments for diseases such as depression, Alzheimer's disease, and Parkinson's disease. While single-neuron models have been elucidated, a large-scale perspective may be required to understand certain aspects of neural dynamics and oscillations. James Gornet is interested in using his background in biomedical engineering and chemistry to gain a deeper understanding of how macroscopic patterns in neural activity and organization can be explained by fundamental relationships between microscopic neuronal parameters such as synaptic dynamics, neuronal morphology, and genetic expression. More specifically, he draws from experience in machine learning, chemical physics, and molecular biology to tackle these challenges.

Sihui Dai (California Institute of Technology)
Zhiding Yu (NVIDIA)
Tan Nguyen (Rice University/UCLA)

I am currently a postdoctoral scholar in the Department of Mathematics at the University of California, Los Angeles, working with Dr. Stanley J. Osher. I obtained my Ph.D. in Machine Learning from Rice University, where I was advised by Dr. Richard G. Baraniuk. My research is focused on the intersection of Deep Learning, Probabilistic Modeling, Optimization, and ODEs/PDEs. I gave an invited talk in the Deep Learning Theory Workshop at NeurIPS 2018 and organized the 1st Workshop on Integration of Deep Neural Models and Differential Equations at ICLR 2020. I also had two long internships with Amazon AI and NVIDIA Research, during which I worked with Dr. Anima Anandkumar. I am the recipient of the Computing Innovation Postdoctoral Fellowship (CIFellows) from the Computing Research Association (CRA), the NSF Graduate Research Fellowship, and the IGERT Neuroengineering Traineeship. I received my MSEE and BSEE from Rice in May 2018 and May 2014, respectively.

Doris Tsao (Caltech)
Anima Anandkumar (NVIDIA / Caltech)

Anima Anandkumar is a Bren professor in the CMS department at Caltech and a director of machine learning research at NVIDIA. Her research spans both theoretical and practical aspects of large-scale machine learning. In particular, she has spearheaded research in tensor-algebraic methods, non-convex optimization, probabilistic models, and deep learning. Anima is the recipient of several awards and honors, such as the Bren named chair professorship at Caltech, the Alfred P. Sloan Fellowship, Young Investigator Awards from the Air Force and Army research offices, faculty fellowships from Microsoft, Google, and Adobe, and several best paper awards. Anima received her B.Tech in Electrical Engineering from IIT Madras in 2004 and her PhD from Cornell University in 2009. She was a postdoctoral researcher at MIT from 2009 to 2010, a visiting researcher at Microsoft Research New England in 2012 and 2014, an assistant professor at U.C. Irvine between 2010 and 2016, an associate professor at U.C. Irvine between 2016 and 2017, and a principal scientist at Amazon Web Services between 2016 and 2018.
