Deep learning has exhibited superior performance on a variety of tasks, especially on high-dimensional datasets such as images. To understand this property, we investigate the approximation and estimation ability of deep learning on anisotropic Besov spaces. The anisotropic Besov space is characterized by direction-dependent smoothness and includes several function classes that have been investigated thus far. We demonstrate that the approximation and estimation errors of deep learning depend only on an average of the smoothness parameters across all directions. Consequently, the curse of dimensionality can be avoided when the smoothness of the target function is highly anisotropic. Unlike existing studies, our analysis does not require a low-dimensional structure of the input data. We also investigate the minimax optimality of deep learning and compare its performance with that of the kernel method (more generally, linear estimators). The results show that deep learning has better dependence on the input dimensionality when the target function possesses anisotropic smoothness, and that it achieves an adaptive rate for functions with spatially inhomogeneous smoothness.
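To make the "average smoothness" statement concrete, the LaTeX sketch below shows the flavor of the estimation rate in question. The notation, direction-wise smoothness parameters \alpha_1, ..., \alpha_d and a harmonic-mean-type effective smoothness \tilde{\alpha}, is an illustrative assumption on our part, not a verbatim quotation from the paper:

```latex
% Illustrative sketch only: the symbols \alpha_i and \tilde{\alpha} are
% assumed notation for the direction-wise and effective smoothness,
% not copied verbatim from the paper.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
For a target $f^{\circ}$ in an anisotropic Besov space with direction-wise
smoothness $\alpha_1,\dots,\alpha_d$, define the effective smoothness as the
harmonic-mean-type aggregate
\[
  \tilde{\alpha} \;:=\; \Bigl(\textstyle\sum_{i=1}^{d} \alpha_i^{-1}\Bigr)^{-1}.
\]
A deep neural network estimator $\hat{f}$ trained on $n$ samples then attains,
up to logarithmic factors, an estimation error of order
\[
  \mathbb{E}\,\bigl\|\hat{f} - f^{\circ}\bigr\|_{L^2}^{2}
  \;=\; \tilde{O}\Bigl(n^{-\frac{2\tilde{\alpha}}{2\tilde{\alpha}+1}}\Bigr),
\]
which depends on the dimension $d$ only through $\tilde{\alpha}$.
\end{document}
```

As a sanity check on this sketch: in the isotropic case \alpha_i \equiv \alpha one gets \tilde{\alpha} = \alpha/d, and the exponent 2\tilde{\alpha}/(2\tilde{\alpha}+1) reduces to the classical 2\alpha/(2\alpha+d). When only a few directions are non-smooth, \tilde{\alpha} stays bounded away from zero even for large d, which is how highly anisotropic smoothness lets the rate escape the curse of dimensionality.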
Author Information
Taiji Suzuki (The University of Tokyo/RIKEN-AIP)
Atsushi Nitanda (Kyushu Institute of Technology / RIKEN)
Related Events (a corresponding poster, oral, or spotlight)
- 2021 Poster: Deep learning is adaptive to intrinsic dimensionality of model smoothness in anisotropic Besov space
  Wed. Dec 8th, 08:30 -- 10:00 AM
More from the Same Authors
- 2022 Poster: Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD for Communication Efficient Nonconvex Distributed Learning
  Tomoya Murata · Taiji Suzuki
- 2022: Reducing Communication in Nonconvex Federated Learning with a Novel Single-Loop Variance Reduction Method
  Kazusato Oko · Shunta Akiyama · Tomoya Murata · Taiji Suzuki
- 2023 Poster: Feature learning via mean-field Langevin dynamics: classifying sparse parities and beyond
  Taiji Suzuki · Denny Wu · Kazusato Oko · Atsushi Nitanda
- 2023 Poster: Learning in the Presence of Low-dimensional Structure: A Spiked Random Matrix Perspective
  Jimmy Ba · Murat Erdogdu · Taiji Suzuki · Zhichao Wang · Denny Wu
- 2023 Poster: Gradient-Based Feature Learning under Structured Data
  Alireza Mousavi-Hosseini · Denny Wu · Taiji Suzuki · Murat Erdogdu
- 2023 Poster: Mean-field Langevin dynamics: Time-space discretization, stochastic gradient, and variance reduction
  Taiji Suzuki · Denny Wu · Atsushi Nitanda
- 2022 Spotlight: Lightning Talks 4A-2
  Barakeel Fanseu Kamhoua · Hualin Zhang · Taiki Miyagawa · Tomoya Murata · Xin Lyu · Yan Dai · Elena Grigorescu · Zhipeng Tu · Lijun Zhang · Taiji Suzuki · Wei Jiang · Haipeng Luo · Lin Zhang · Xi Wang · Young-San Lin · Huan Xiong · Liyu Chen · Bin Gu · Jinfeng Yi · Yongqiang Chen · Sandeep Silwal · Yiguang Hong · Maoyuan Song · Lei Wang · Tianbao Yang · Han Yang · MA Kaili · Samson Zhou · Deming Yuan · Bo Han · Guodong Shi · Bo Li · James Cheng
- 2022 Spotlight: Escaping Saddle Points with Bias-Variance Reduced Local Perturbed SGD for Communication Efficient Nonconvex Distributed Learning
  Tomoya Murata · Taiji Suzuki
- 2022 Poster: High-dimensional Asymptotics of Feature Learning: How One Gradient Step Improves the Representation
  Jimmy Ba · Murat Erdogdu · Taiji Suzuki · Zhichao Wang · Denny Wu · Greg Yang
- 2022 Poster: Two-layer neural network on infinite dimensional data: global optimization guarantee in the mean-field regime
  Naoki Nishikawa · Taiji Suzuki · Atsushi Nitanda · Denny Wu
- 2022 Poster: Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with Variance Reduction and its Application to Optimization
  Yuri Kinoshita · Taiji Suzuki
- 2021 Poster: Differentiable Multiple Shooting Layers
  Stefano Massaroli · Michael Poli · Sho Sonoda · Taiji Suzuki · Jinkyoo Park · Atsushi Yamashita · Hajime Asama
- 2021 Poster: Particle Dual Averaging: Optimization of Mean Field Neural Network with Global Convergence Rate Analysis
  Atsushi Nitanda · Denny Wu · Taiji Suzuki
- 2021 Poster: Generalization Bounds for Graph Embedding Using Negative Sampling: Linear vs Hyperbolic
  Atsushi Suzuki · Atsushi Nitanda · jing wang · Linchuan Xu · Kenji Yamanishi · Marc Cavazza
- 2019 Poster: Data Cleansing for Models Trained with SGD
  Satoshi Hara · Atsushi Nitanda · Takanori Maehara
- 2017 Poster: Doubly Accelerated Stochastic Variance Reduced Dual Averaging Method for Regularized Empirical Risk Minimization
  Tomoya Murata · Taiji Suzuki
- 2017 Poster: Trimmed Density Ratio Estimation
  Song Liu · Akiko Takeda · Taiji Suzuki · Kenji Fukumizu
- 2014 Poster: Stochastic Proximal Gradient Descent with Acceleration Techniques
  Atsushi Nitanda