

Poster

A Similarity-preserving Network Trained on Transformed Images Recapitulates Salient Features of the Fly Motion Detection Circuit

Yanis Bahroun · Dmitri Chklovskii · Anirvan Sengupta

East Exhibition Hall B + C #188

Keywords: [ Plasticity and Adaptation; Neuroscience ] [ Neuroscience and Cognitive Science -> Neuroscience ]


Abstract:

Learning to detect content-independent transformations from data is one of the central problems in biological and artificial intelligence. An example of such a problem is unsupervised learning of a visual motion detector from pairs of consecutive video frames. Rao and Ruderman formulated this problem in terms of learning infinitesimal transformation operators (Lie group generators) by minimizing image reconstruction error. Unfortunately, it is difficult to map their model onto a biologically plausible neural network (NN) with local learning rules. Here we propose a biologically plausible model of motion detection. We also adopt the transformation-operator approach but, instead of reconstruction-error minimization, start from a similarity-preserving objective function. An online algorithm that optimizes such an objective function naturally maps onto an NN with biologically plausible learning rules. The trained NN recapitulates major features of the well-studied motion detector in the fly. In particular, it is consistent with the experimental observation that local motion detectors combine information from at least three adjacent pixels, a feature that contradicts the celebrated Hassenstein-Reichardt model.
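To make the abstract's key idea concrete, here is a minimal sketch of a similarity-matching network with local Hebbian/anti-Hebbian updates, applied to inputs formed by concatenating pairs of consecutive frames. This is an illustrative schematic under assumed update rules and a fixed learning rate, not the authors' exact algorithm or normalization; all function and variable names are hypothetical.

```python
import numpy as np

def similarity_matching_online(frame_pairs, n_out, lr=1e-3, seed=0):
    """Schematic online similarity-matching network (assumed form).

    frame_pairs : array of shape (n_samples, n_in), each row a pair of
                  consecutive frames concatenated, so learned features can
                  reflect the transformation (motion) between frames.
    n_out       : number of output neurons.
    Returns feedforward weights W and lateral weights M.
    """
    rng = np.random.default_rng(seed)
    n_in = frame_pairs.shape[1]
    W = 0.1 * rng.standard_normal((n_out, n_in))  # feedforward weights
    M = np.eye(n_out)                             # lateral weights (incl. diagonal)

    for x in frame_pairs:
        # Neural dynamics at their fixed point: M y = W x.
        y = np.linalg.solve(M, W @ x)
        # Local plasticity: Hebbian update of W, anti-Hebbian update of M.
        # Each synapse uses only its own pre- and post-synaptic activities.
        W += lr * (np.outer(y, x) - (y ** 2)[:, None] * W)
        M += lr * (np.outer(y, y) - (y ** 2)[:, None] * M)
    return W, M
```

The point of the sketch is the locality of the rules: each weight update depends only on the activities of the two neurons it connects, which is what allows an online optimizer of a similarity-preserving objective to map onto a biologically plausible circuit, in contrast to reconstruction-error minimization.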
