Deep neural networks can approximate functions on different types of data, from images to graphs, each with its own underlying structure. This underlying structure can be viewed as the geometry of the data manifold. By extending recent advances in the theoretical understanding of neural networks, we study how a randomly initialized neural network with piecewise linear activations splits the data manifold into regions where the network behaves as a linear function. We derive bounds on the density of the boundaries of linear regions and on the distance to these boundaries on the data manifold. This yields insights into the expressivity of randomly initialized deep neural networks on non-Euclidean data sets. We empirically corroborate our theoretical results on a toy supervised learning problem. Our experiments demonstrate that the number of linear regions varies across manifolds and that the results hold across different neural network architectures. Using the MetFaces dataset, we further demonstrate that the complexity of linear regions on the low-dimensional manifold of images differs from that in the ambient Euclidean space.
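To make the central quantity concrete, here is a minimal sketch of counting linear regions along a one-dimensional manifold: a randomly initialized ReLU network assigns each input an activation pattern that is constant within a linear region, so counting pattern changes along a densely sampled curve counts boundary crossings. The two-hidden-layer MLP, the unit-circle manifold, and the sampling resolution below are illustrative assumptions, not the paper's experimental setup.

```python
import math
import torch
import torch.nn as nn

torch.manual_seed(0)

# Randomly initialized MLP with piecewise linear (ReLU) activations.
net = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),
)

def activation_pattern(x: torch.Tensor) -> torch.Tensor:
    """Binary on/off pattern of every ReLU unit; constant within a linear region."""
    pattern = []
    h = x
    for layer in net:
        h = layer(h)
        if isinstance(layer, nn.ReLU):
            pattern.append(h > 0)
    return torch.cat(pattern)

# Sample densely along the unit circle, a 1-D manifold embedded in R^2.
t = torch.linspace(0.0, 2.0 * math.pi, 10_000)
points = torch.stack([torch.cos(t), torch.sin(t)], dim=1)

with torch.no_grad():
    patterns = torch.stack([activation_pattern(p) for p in points])

# Each change of pattern between consecutive samples marks a boundary crossing,
# i.e. the curve passes from one linear region into another.
crossings = (patterns[1:] != patterns[:-1]).any(dim=1).sum().item()
print(f"linear-region boundaries crossed along the circle: {crossings}")
```

Sampling more densely (or refining adaptively) tightens the count, since a coarse grid can step over several boundaries between consecutive samples.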
Author Information
Saket Tiwari (Brown University)
I am a fourth-year PhD student working on the theory of deep learning and deep reinforcement learning. I study DNNs through the lens of data geometry.
George Konidaris (Brown University)
More from the Same Authors
- 2021: Bayesian Exploration for Lifelong Reinforcement Learning
  Haotian Fu · Shangqun Yu · Michael Littman · George Konidaris
- 2022 Spotlight: Evaluation beyond Task Performance: Analyzing Concepts in AlphaZero in Hex
  Charles Lovering · Jessica Forde · George Konidaris · Ellie Pavlick · Michael Littman
- 2022 Poster: Evaluation beyond Task Performance: Analyzing Concepts in AlphaZero in Hex
  Charles Lovering · Jessica Forde · George Konidaris · Ellie Pavlick · Michael Littman
- 2022 Poster: Model-based Lifelong Reinforcement Learning with Bayesian Exploration
  Haotian Fu · Shangqun Yu · Michael Littman · George Konidaris
- 2021: George Konidaris Talk Q&A
  George Konidaris
- 2021: Invited Talk: George Konidaris - Signal to Symbol (via Skills)
  George Konidaris
- 2021 Poster: Learning Markov State Abstractions for Deep Reinforcement Learning
  Cameron Allen · Neev Parikh · Omer Gottesman · George Konidaris
- 2020: Panel Discussions
  Grace Lindsay · George Konidaris · Shakir Mohamed · Kimberly Stachenfeld · Peter Dayan · Yael Niv · Doina Precup · Catherine Hartley · Ishita Dasgupta
- 2020: Invited Talk #4 QnA - George Konidaris
  George Konidaris · Raymond Chua · Feryal Behbahani
- 2020: Invited Talk #4: George Konidaris - Signal to Symbol (via Skills)
  George Konidaris
- 2017 Poster: Active Exploration for Learning Symbolic Representations
  Garrett Andersen · George Konidaris
- 2017 Poster: Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes
  Taylor Killian · Samuel Daulton · Finale Doshi-Velez · George Konidaris
- 2017 Oral: Robust and Efficient Transfer Learning with Hidden Parameter Markov Decision Processes
  Taylor Killian · Samuel Daulton · Finale Doshi-Velez · George Konidaris
- 2015 Poster: Policy Evaluation Using the Ω-Return
  Philip Thomas · Scott Niekum · Georgios Theocharous · George Konidaris
- 2011 Poster: TD_gamma: Re-evaluating Complex Backups in Temporal Difference Learning
  George Konidaris · Scott Niekum · Philip Thomas
- 2010 Poster: Constructing Skill Trees for Reinforcement Learning Agents from Demonstration Trajectories
  George Konidaris · Scott R Kuindersma · Andrew G Barto · Roderic A Grupen
- 2009 Poster: Skill Discovery in Continuous Reinforcement Learning Domains using Skill Chaining
  George Konidaris · Andrew G Barto
- 2009 Spotlight: Skill Discovery in Continuous Reinforcement Learning Domains using Skill Chaining
  George Konidaris · Andrew G Barto