Breaking the Activation Function Bottleneck through Adaptive Parameterization
Sebastian Flennerhag · Hujun Yin · John Keane · Mark Elliot

Tue Dec 04 07:45 AM -- 09:45 AM (PST) @ Room 210 #91

Standard neural network architectures are non-linear only by virtue of a simple element-wise activation function, making them both brittle and excessively large. In this paper, we consider methods for making the feed-forward layer more flexible while preserving its basic structure. We develop simple drop-in replacements that learn to adapt their parameterization conditional on the input, thereby increasing statistical efficiency significantly. We present an adaptive LSTM that advances the state of the art for the Penn Treebank and Wikitext-2 word-modeling tasks while using fewer parameters and converging in half as many iterations.
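The idea of a drop-in feed-forward replacement that adapts its parameterization conditional on the input can be illustrated with a minimal sketch. The sigmoid gating form, the function name `adaptive_linear`, and all parameter names below are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adaptive_linear(x, W, U, b):
    """Hypothetical input-adaptive feed-forward layer (a sketch, not
    the authors' method): the standard affine map W @ x + b is
    modulated per-unit by a gate computed from the same input x."""
    gate = sigmoid(U @ x)          # input-conditioned scaling in (0, 1)
    return (W @ x) * gate + b      # adapted output

# Usage: with U = 0 the gate is constant at 0.5, so the layer
# reduces to a rescaled ordinary linear map.
x = np.ones(4)
W = np.eye(4)
U = np.zeros((4, 4))
b = np.zeros(4)
y = adaptive_linear(x, W, U, b)
```

Because the gate depends on `x`, the effective weight matrix differs per input, which is one simple way a layer can gain flexibility without abandoning the basic feed-forward structure.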

Author Information

Sebastian Flennerhag (Alan Turing Institute)

Ph.D. candidate in deep learning, focusing on network adaptation in transfer learning, meta-learning, and sequence learning.

Hujun Yin (University of Manchester)
John Keane (University of Manchester)
Mark Elliot (University of Manchester)
