

Poster

Gradient Descent for Spiking Neural Networks

Dongsung Huh · Terrence Sejnowski

Room 210 #69

Keywords: [ Spike Train Generation ] [ Neural Coding ] [ Biologically Plausible Deep Networks ] [ Supervised Deep Networks ] [ Recurrent Networks ]


Abstract:

Most large-scale network models use neurons with static nonlinearities that produce analog output, despite the fact that information processing in the brain is predominantly carried out by dynamic neurons that produce discrete pulses called spikes. Research in spike-based computation has been impeded by the lack of an efficient supervised learning algorithm for spiking neural networks. Here, we present a gradient descent method for optimizing spiking network models by introducing a differentiable formulation of spiking dynamics and deriving the exact gradient calculation. For demonstration, we trained recurrent spiking networks on two dynamic tasks: one that requires optimizing fast (~ millisecond) spike-based interactions for efficient encoding of information, and a delayed-memory task over an extended duration (~ second). The results show that the gradient descent approach indeed optimizes network dynamics on the time scale of individual spikes as well as on behavioral time scales. In conclusion, our method yields a general-purpose supervised learning algorithm for spiking neural networks, which can facilitate further investigations of spike-based computation.
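To make the core idea concrete, below is a minimal sketch in JAX, not the paper's exact formulation: a recurrent network of leaky-integrator neurons whose hard spike threshold is replaced by a smooth sigmoid gate, so the loss gradient can be computed through the dynamics by ordinary backpropagation through time. All constants, parameter names, and the toy task here are assumptions made purely for illustration.

```python
# Illustrative sketch only (assumed formulation, not the authors' exact model):
# a differentiable recurrent spiking network trained by gradient descent.
import jax
import jax.numpy as jnp

N, T, dt = 50, 200, 1e-3           # neurons, time steps, step size (s)  -- assumed values
tau, v_th, beta = 20e-3, 1.0, 0.2  # membrane time constant, threshold, gate sharpness

def soft_spike(v):
    # Smooth surrogate for the spike nonlinearity; approaches a hard step as beta -> 0.
    return jax.nn.sigmoid((v - v_th) / beta)

def step(carry, x_t, params):
    # One Euler step of leaky membrane dynamics with recurrent and input drive.
    (v,) = carry
    W_rec, W_in, W_out = params
    s = soft_spike(v)                           # graded "spike" output
    dv = (-v + W_rec @ s + W_in @ x_t) / tau
    v = v + dt * dv - v_th * s                  # soft reset after a spike
    return (v,), (s, W_out @ s)

def loss(params, inputs, targets):
    # Unroll the dynamics over time and score the linear readout against the target.
    def f(carry, x_t):
        return step(carry, x_t, params)
    _, (spikes, readout) = jax.lax.scan(f, (jnp.zeros(N),), inputs)
    return jnp.mean((readout - targets) ** 2)

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = (0.1 * jax.random.normal(k1, (N, N)),   # recurrent weights
          0.5 * jax.random.normal(k2, (N, 1)),   # input weights
          0.1 * jax.random.normal(k3, (1, N)))   # readout weights
inputs  = jnp.sin(jnp.linspace(0, 8 * jnp.pi, T))[:, None]   # toy input signal
targets = jnp.cos(jnp.linspace(0, 8 * jnp.pi, T))[:, None]   # toy target signal

grads = jax.grad(loss)(params, inputs, targets)              # gradient through the dynamics
params = tuple(p - 0.1 * g for p, g in zip(params, grads))   # one gradient-descent step
```

In this sketch the sharpness parameter beta controls how closely the differentiable gate approximates a hard threshold; smaller values give more spike-like behavior at the cost of steeper gradients.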
