The superiority of neural networks over classical linear classifiers stems from their ability to slice image space into complex class regions. Although neural network training is far from fully understood, existing theory focuses primarily on the geometry of loss landscapes; considerably less is known about the geometry of class boundaries. The geometry of these class regions depends strongly on the inductive bias of the model, which we do not currently have the tools to analyze rigorously. In this study, we use empirical tools to study the geometry of class regions and ask two questions: Do neural networks produce decision boundaries that are consistent across random initializations? Do different neural architectures exhibit measurable differences in inductive bias?
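As a concrete illustration of the kind of measurement involved, the sketch below shows one way to quantify decision-boundary consistency empirically. It is not the paper's exact protocol: the toy dataset, the `make_mlp` architecture, and the `plane_points` helper are all illustrative assumptions. Two identically configured networks are trained from different random seeds, and we report the fraction of points on a 2D plane through input space where their predicted classes agree.

```python
# A minimal sketch, assuming a toy 2D dataset and a small MLP.
# Higher agreement across seeds suggests more reproducible boundaries.
import torch
import torch.nn as nn


def make_mlp(in_dim: int, n_classes: int) -> nn.Module:
    """A small MLP classifier; the architecture is an illustrative choice."""
    return nn.Sequential(
        nn.Linear(in_dim, 128), nn.ReLU(),
        nn.Linear(128, 128), nn.ReLU(),
        nn.Linear(128, n_classes),
    )


def train(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
          epochs: int = 200, lr: float = 1e-2) -> nn.Module:
    """Full-batch training with Adam; hyperparameters are arbitrary."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model


def plane_points(a: torch.Tensor, b: torch.Tensor, c: torch.Tensor,
                 steps: int = 100) -> torch.Tensor:
    """Sample a grid on the 2D plane spanned by three anchor inputs."""
    u, v = b - a, c - a
    ts = torch.linspace(0, 1, steps)
    return torch.stack([a + s * u + t * v for s in ts for t in ts])


@torch.no_grad()
def boundary_agreement(m1: nn.Module, m2: nn.Module,
                       points: torch.Tensor) -> float:
    """Fraction of plane points on which the two networks predict the same class."""
    return (m1(points).argmax(dim=1) == m2(points).argmax(dim=1)).float().mean().item()


if __name__ == "__main__":
    # Toy 3-class problem in 2D, purely for illustration.
    x = torch.randn(600, 2)
    y = (x[:, 0] > 0).long() + (x[:, 1] > 0).long()  # labels in {0, 1, 2}

    # Same data and architecture, different random initializations.
    torch.manual_seed(0)
    m1 = train(make_mlp(2, 3), x, y)
    torch.manual_seed(1)
    m2 = train(make_mlp(2, 3), x, y)

    plane = plane_points(x[0], x[1], x[2])
    print(f"agreement on plane: {boundary_agreement(m1, m2, plane):.3f}")
```

On a real image dataset, the plane would be spanned by three input images and the classifier would be a deeper network; comparing agreement across seeds and across architectures is what would reveal differences in inductive bias.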
Author Information
Gowthami Somepalli (University of Maryland, College Park)
More from the Same Authors
- 2022: Investigating Reproducibility from the Decision Boundary Perspective
  Gowthami Somepalli · Arpit Bansal · Liam Fowl · Ping-yeh Chiang · Yehuda Dar · Richard Baraniuk · Micah Goldblum · Tom Goldstein
- 2022: SAINT: Improved Neural Networks for Tabular Data via Row Attention and Contrastive Pre-Training
  Gowthami Somepalli · Avi Schwarzschild · Micah Goldblum · C. Bayan Bruss · Tom Goldstein
- 2021 Poster: PatchGame: Learning to Signal Mid-level Patches in Referential Games
  Kamal Gupta · Gowthami Somepalli · Anubhav Anubhav · Vinoj Yasanga Jayasundara Magalle Hewa · Matthias Zwicker · Abhinav Shrivastava