Workshop

Scientific Methods for Understanding Neural Networks: Discovering, Validating, and Falsifying Theories of Deep Learning with Experiments

Zahra Kadkhodaie · Florentin Guth · Sanae Lotfi · Davis Brown · Micah Goldblum · Valentin De Bortoli · Andrew Saxe

Meeting 205 - 207

Sun 15 Dec, 8:15 a.m. PST

While deep learning continues to achieve impressive results on an ever-growing range of tasks, our understanding of the principles underlying these successes remains largely limited. This problem is usually tackled from a mathematical point of view, aiming to prove rigorous theorems about the optimization or generalization errors of standard algorithms, but so far such results have been limited to overly simplified settings. The main goal of this workshop is to promote a complementary approach centered on the scientific method, which forms hypotheses and designs controlled experiments to test them. More specifically, it focuses on empirical analyses of deep networks that can validate or falsify existing theories and assumptions, or answer questions about the success or failure of these models. This approach has been largely underexplored, but it has great potential to further our understanding of deep learning and to lead to significant progress in both theory and practice. The secondary goal of this workshop is to build a community of researchers, currently scattered across several subfields, around the common goal of understanding deep learning through a scientific lens.