Poster
Neural Networks Fail to Learn Periodic Functions and How to Fix It
Liu Ziyin · Tilman Hartwig · Masahito Ueda

Thu Dec 10 09:00 PM -- 11:00 PM (PST) @ Poster Session 6 #1870
Previous literature offers limited clues on how to learn a periodic function using modern neural networks. We start with a study of the extrapolation properties of neural networks; we prove and demonstrate experimentally that standard activation functions, such as ReLU, tanh, and sigmoid, along with their variants, all fail to learn to extrapolate simple periodic functions. We hypothesize that this is due to their lack of a "periodic" inductive bias. To fix this problem, we propose a new activation, namely $x + \sin^2(x)$, which provides the desired periodic inductive bias needed to learn a periodic function while maintaining the favorable optimization properties of ReLU-based activations. Experimentally, we apply the proposed method to temperature and financial data prediction.
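For illustration, the following is a minimal sketch of the proposed $x + \sin^2(x)$ activation as a drop-in PyTorch module. The class name, the surrounding MLP, and all hyperparameters here are hypothetical and chosen for demonstration; the authors' released implementation may differ (for example, by including a trainable frequency parameter).

    import torch
    import torch.nn as nn

    class PeriodicActivation(nn.Module):
        # Activation described in the abstract: f(x) = x + sin^2(x).
        # The monotone x term keeps the easy-to-optimize behavior of
        # ReLU-like activations, while sin^2(x) adds the periodic
        # inductive bias needed for extrapolating periodic functions.
        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return x + torch.sin(x) ** 2

    # Hypothetical usage: a small MLP for a 1-D extrapolation experiment.
    model = nn.Sequential(
        nn.Linear(1, 64),
        PeriodicActivation(),
        nn.Linear(64, 64),
        PeriodicActivation(),
        nn.Linear(64, 1),
    )

    x = torch.linspace(-10, 10, 200).unsqueeze(-1)
    y = model(x)  # shape (200, 1)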

Author Information

Liu Ziyin (University of Tokyo)
Tilman Hartwig (University of Tokyo)
Masahito Ueda (University of Tokyo)
