
Workshop: Gaze meets ML

Learning gaze control, external attention, and internal attention since 1990-91

Jürgen Schmidhuber


First I’ll discuss our early work of 1990 on attentive neural networks that learn to steer foveas, and on learning internal spotlights of attention in Transformer-like systems since 1991. Then I’ll mention what happened in the subsequent three decades in terms of representing percepts and action plans in hierarchical neural networks, at multiple levels of abstraction and multiple time scales. In preparation for this workshop, I made two overview websites:

1. End-to-End Differentiable Sequential Neural Attention 1990-93
2. Learning internal spotlights of attention with what’s now called "Transformers with linearized self-attention," which are formally equivalent to the 1991 Fast Weight Programmers.
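The stated equivalence between linearized self-attention and Fast Weight Programmers can be illustrated with a minimal NumPy sketch (the toy dimensions, variable names, and the omission of any feature map or normalization are my own simplifications): the unnormalized linear-attention output for a query equals the product of an additively updated outer-product "fast weight" matrix with that query.

```python
import numpy as np

# Toy setup: T key/value pairs of dimension d, plus one query (illustrative only).
rng = np.random.default_rng(0)
T, d = 5, 4
K = rng.normal(size=(T, d))  # keys
V = rng.normal(size=(T, d))  # values
q = rng.normal(size=(d,))    # query

# View 1 - linearized (unnormalized) self-attention:
#   y = sum_i (q . k_i) v_i
y_attn = sum((q @ K[i]) * V[i] for i in range(T))

# View 2 - Fast Weight Programmer:
#   build fast weights W = sum_i v_i k_i^T via additive outer-product updates,
#   then apply them to the query: y = W q
W = np.zeros((d, d))
for i in range(T):
    W += np.outer(V[i], K[i])  # outer-product fast weight update
y_fwp = W @ q

# Both views produce the same output vector.
assert np.allclose(y_attn, y_fwp)
```

Because W q = Σᵢ vᵢ (kᵢ · q), accumulating the outer products once and reusing W gives the same result as summing over all key/value pairs for every query.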
