

Poster

Learning long-range spatial dependencies with horizontal gated recurrent units

Drew Linsley · Junkyung Kim · Vijay Veerabadran · Charles Windolf · Thomas Serre

Room 210 #82

Keywords: [ CNN Architectures ] [ Visual Perception ] [ Cognitive Science ] [ Neuroscience ] [ Biologically Plausible Deep Networks ] [ Perception ] [ Computer Vision ] [ Deep Autoencoders ] [ Supervised Deep Networks ] [ Recurrent Networks ]


Abstract:

Progress in deep learning has spawned great successes in many engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural network, are now approaching -- and sometimes even surpassing -- human accuracy on a variety of visual recognition tasks. Here, however, we show that these neural networks and their recent extensions struggle in recognition tasks where co-dependent visual features must be detected over long spatial ranges. We introduce a visual challenge, Pathfinder, and describe a novel recurrent neural network architecture called the horizontal gated recurrent unit (hGRU) to learn intrinsic horizontal connections -- both within and across feature columns. We demonstrate that a single hGRU layer matches or outperforms all tested feedforward hierarchical baselines, including state-of-the-art architectures with orders of magnitude more parameters.
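The abstract describes the hGRU as a gated recurrent layer whose hidden state is updated through spatially local "horizontal" connections within and across feature channels. The sketch below is a minimal, hedged illustration of that idea as a convolutional GRU in PyTorch; the class name `HorizontalGRUSketch`, the gate structure, kernel size, and number of timesteps are assumptions for illustration, not the authors' released hGRU implementation.

```python
# Minimal convolutional-GRU-style sketch of recurrent "horizontal" connections.
# Illustrative approximation only: names, gate structure, and hyperparameters
# are assumptions, not the paper's hGRU.
import torch
import torch.nn as nn


class HorizontalGRUSketch(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 7):
        super().__init__()
        padding = kernel_size // 2
        # Gates and the candidate state are computed with spatial convolutions,
        # so each unit is modulated by its neighbors -- connections both within
        # and across feature channels.
        self.update_gate = nn.Conv2d(2 * channels, channels, kernel_size, padding=padding)
        self.reset_gate = nn.Conv2d(2 * channels, channels, kernel_size, padding=padding)
        self.candidate = nn.Conv2d(2 * channels, channels, kernel_size, padding=padding)

    def forward(self, x: torch.Tensor, timesteps: int = 8) -> torch.Tensor:
        # x: feedforward drive of shape (batch, channels, height, width).
        h = torch.zeros_like(x)
        for _ in range(timesteps):
            xh = torch.cat([x, h], dim=1)
            z = torch.sigmoid(self.update_gate(xh))  # how strongly to update
            r = torch.sigmoid(self.reset_gate(xh))   # how much history to reuse
            h_tilde = torch.tanh(self.candidate(torch.cat([x, r * h], dim=1)))
            h = (1 - z) * h + z * h_tilde
        return h


if __name__ == "__main__":
    layer = HorizontalGRUSketch(channels=25)
    features = torch.randn(2, 25, 64, 64)  # e.g., oriented feature maps
    out = layer(features)
    print(out.shape)  # torch.Size([2, 25, 64, 64])
```

Iterating such a layer lets evidence propagate across the image over timesteps, which is the intuition behind tracing long paths in the Pathfinder challenge with a single recurrent layer rather than a deep feedforward stack.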
