Session 8: Neuroscience I

Alan A Stocker

Thu 6 Dec. 8:30 - 8:50 PST

Inferring Elapsed Time from Stochastic Neural Processes

Misha B Ahrens · Maneesh Sahani

Many perceptual processes and neural computations, such as speech recognition, motor control and learning, depend on the ability to measure and mark the passage of time. However, the neural mechanisms that make such temporal judgements possible are unknown. A number of different hypotheses have been advanced, all of which depend on the known evolution of a neural or psychological state, possibly through oscillations or the gradual decay of a memory trace. We suggest a new model, which instead exploits the fact that neural and sensory processes, even when their precise evolution is unpredictable, exhibit statistically structured changes. We show that this structure can be exploited for timing, and that reliable timing estimators can be derived from the statistics of the processes. This framework of decoding time from stochastic processes allows for a much wider array of neural implementations of time estimation than has previously been considered, and can simultaneously emulate several different behavioral findings that so far have been understood only in psychological terms.
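
A minimal sketch of the decoding idea, under toy assumptions (a population of independent Wiener processes with a known diffusion coefficient, which is not the paper's model): although each trajectory is unpredictable, the population variance grows linearly with elapsed time, so a single snapshot supports a maximum-likelihood time estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (an illustrative assumption, not the paper's model):
# n_units independent Wiener processes x_i(t) with known diffusion
# coefficient sigma all start at 0. Each trajectory is unpredictable,
# but x_i(t) ~ N(0, sigma**2 * t), so the maximum-likelihood estimate
# of elapsed time from one snapshot is mean(x_i**2) / sigma**2.

sigma = 1.0      # diffusion coefficient, assumed known to the decoder
n_units = 200    # number of stochastic processes observed
t_true = 0.7     # elapsed time to be inferred (seconds)

# One snapshot of the population after t_true seconds of diffusion
x = rng.normal(loc=0.0, scale=sigma * np.sqrt(t_true), size=n_units)

t_hat = np.mean(x**2) / sigma**2
print(f"true t = {t_true:.3f}, decoded t = {t_hat:.3f}")
```

The estimator is reliable because it pools many units: its relative error falls off as sqrt(2 / n_units).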

Thu 6 Dec. 8:50 - 9:10 PST

A neural network implementing optimal state estimation based on dynamic spike train decoding

Omer Bobrowski · Ron Meir · Shy Shoham · Yonina Eldar

It is becoming increasingly evident that organisms acting in uncertain dynamical environments often employ exact or approximate Bayesian statistical calculations in order to continuously estimate the environmental state, integrate information from multiple sensory modalities, form predictions and choose actions. What is less clear is how these putative computations are implemented by cortical neural networks. An additional level of complexity is introduced because these networks observe the world through spike trains received from primary sensory afferents, rather than directly. A recent line of research has described mechanisms by which such computations can be implemented using a network of neurons whose activity directly represents a probability distribution across the possible "world states". Much of this work, however, uses various approximations, which severely restrict the domain of applicability of these implementations. Here we make use of rigorous mathematical results from the theory of continuous time point process filtering, and show how optimal real-time state estimation and prediction may be implemented in a general setting using linear neural networks. We demonstrate the applicability of the approach with several examples, and relate the required network properties to the statistical nature of the environment, thereby quantifying the compatibility of a given network with its environment.
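
A discrete-time sketch of the point-process filtering idea (the transition matrix and firing rates below are illustrative assumptions; the paper works in continuous time): a two-state hidden Markov environment is observed through a Poisson spiking neuron, and the unnormalized posterior obeys a recursion that is linear given the spike train, which is the property that permits a linear-network implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy discrete-time point-process filter (a sketch, not the paper's
# continuous-time derivation): a hidden two-state environment evolves
# as a Markov chain and is observed through a Poisson spiking neuron
# whose rate depends on the state. Given the spikes, the unnormalized
# posterior rho obeys a LINEAR recursion.

dt = 0.01
A = np.array([[0.999, 0.002],     # column-stochastic transition matrix
              [0.001, 0.998]])    # A[i, j] = P(next = i | current = j)
rates = np.array([5.0, 50.0])     # firing rate (Hz) in each state

T = 2000
states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(2, p=A[:, states[t - 1]])
spikes = rng.random(T) < rates[states] * dt

rho = np.array([0.5, 0.5])        # unnormalized posterior over states
posterior = np.zeros((T, 2))
for t in range(T):
    # likelihood of the observed bin under each hypothesized state
    lik = rates * dt if spikes[t] else 1.0 - rates * dt
    rho = lik * (A @ rho)         # linear in rho given the spike train
    rho /= rho.sum()              # rescaling only, for readout/stability
    posterior[t] = rho

accuracy = np.mean(posterior.argmax(axis=1) == states)
print(f"state decoded correctly in {accuracy:.1%} of time bins")
```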

Thu 6 Dec. 9:10 - 9:30 PST

Bayesian Inference for Spiking Neuron Models with a Sparsity Prior

Sebastian Gerwinn · Jakob H Macke · Matthias Seeger · Matthias Bethge

Generalized linear models are the most commonly used tools to describe the stimulus selectivity of sensory neurons. Here we present a Bayesian treatment of such models. Using the expectation propagation algorithm, we are able to approximate the full posterior distribution over all weights. In addition, we use a Laplacian prior to favor sparse solutions. Therefore, stimulus features that do not critically influence neural activity will be assigned zero weights and thus be effectively excluded by the model. This feature selection mechanism improves both the interpretability of the neuron model and its predictive performance. The posterior distribution can be used to obtain confidence intervals, which make it possible to assess the statistical significance of the solution. In neural data analysis, the available amount of experimental measurements is often limited, whereas the parameter space is large. In such a situation, both regularization by a sparsity prior and uncertainty estimates for the model parameters are essential. We apply our method to multi-electrode recordings of retinal ganglion cells and use our uncertainty estimates to test the statistical significance of functional couplings between neurons. Furthermore, we use the sparsity of the Laplacian prior to select those filters from a spike-triggered covariance analysis that are most informative about the neural response.
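
The EP posterior approximation is beyond a short sketch, but the sparsity mechanism can be illustrated through the MAP estimate: for a Bernoulli spiking GLM, MAP inference under a Laplacian prior is exactly L1-regularized logistic regression (here via scikit-learn; the simulated stimulus and true weights are hypothetical).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Sketch of the sparsity idea only (not the paper's EP algorithm,
# which approximates the FULL posterior): the MAP estimate of a
# Bernoulli spiking GLM under a Laplacian prior is L1-regularized
# logistic regression, so irrelevant stimulus features get weight 0.

n_samples, n_features = 2000, 40
w_true = np.zeros(n_features)
w_true[:5] = rng.normal(0, 2.0, 5)     # only 5 features drive the cell

X = rng.normal(size=(n_samples, n_features))   # white-noise stimulus
p_spike = 1.0 / (1.0 + np.exp(-(X @ w_true - 1.0)))
y = rng.random(n_samples) < p_spike            # spike/no-spike per bin

# C is the inverse prior scale: smaller C = stronger sparsity prior
model = LogisticRegression(penalty="l1", C=0.1, solver="liblinear")
model.fit(X, y)

n_nonzero = np.count_nonzero(model.coef_)
print(f"{n_nonzero} of {n_features} weights are nonzero")
```

With a Gaussian (L2) prior instead, all 40 weights would typically remain nonzero, which is why the Laplacian prior acts as a feature selector.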

Thu 6 Dec. 9:30 - 9:50 PST

Neural characterization in partially observed populations of spiking neurons

Jonathan W Pillow · Peter E Latham

Point process encoding models provide powerful statistical methods for understanding the responses of neurons to sensory stimuli. Although these models have been successfully applied to responses of neurons in the early sensory pathway, they have fared less well as models of responses in deeper brain areas, as they do not easily take into account multiple stages of processing. Here we introduce a new twist on this approach: we include unobserved as well as observed spike trains. This provides us with a more powerful model, and thus more flexibility in fitting data. More importantly, it allows us to estimate connectivity patterns among neurons (both observed and unobserved), and so should give insight into how networks process sensory input. We demonstrate the model on a simple toy network consisting of two neurons. The formalism, based on variational EM, can be easily extended to larger networks.
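
A toy version of the latent-spike-train idea (a sketch with hypothetical parameters, not the authors' variational-EM formalism): one observed neuron is driven by one hidden neuron, and because each hidden spike affects only the next observed bin, the E-step factorizes and exact EM recovers the hidden neuron's firing probability and its influence.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy two-neuron model: the HIDDEN neuron h spikes i.i.d. with
# probability q; the OBSERVED neuron y spikes in bin t with
# probability p1 if h spiked in bin t-1 and p0 otherwise. Each hidden
# spike affects only the next observed bin, so exact EM is tractable.

T = 20000
q_true, p0_true, p1_true = 0.3, 0.1, 0.7
h = rng.random(T) < q_true                 # hidden spike train
y = rng.random(T - 1) < np.where(h[:-1], p1_true, p0_true)

q, p0, p1 = 0.5, 0.2, 0.5                  # parameter initialization
for _ in range(100):
    # E-step: posterior probability that h spiked before each bin
    lik1 = q * np.where(y, p1, 1 - p1)
    lik0 = (1 - q) * np.where(y, p0, 1 - p0)
    gamma = lik1 / (lik1 + lik0)
    # M-step: re-estimate parameters from the soft assignments
    q = gamma.mean()
    p1 = (gamma * y).sum() / gamma.sum()
    p0 = ((1 - gamma) * y).sum() / (1 - gamma).sum()

print(f"estimated q={q:.2f}, p0={p0:.2f}, p1={p1:.2f}")
print(f"true      q={q_true:.2f}, p0={p0_true:.2f}, p1={p1_true:.2f}")
```

In the paper's setting the hidden neurons have their own dynamics and couplings, so the posterior no longer factorizes and a variational approximation is required.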

Thu 6 Dec. 9:50 - 10:10 PST

Learning to classify complex patterns using a VLSI network of spiking neurons

Srinjoy Mitra · Giacomo Indiveri · Stefano Fusi

Real-time classification of complex patterns of spike trains is a difficult and important computational problem. Here we propose a compact, low-power, fully analog neuromorphic device that can learn to classify complex patterns of mean firing rates. The chip implements a network of integrate-and-fire neurons connected by bistable plastic synapses. Learning is supervised by a teacher that simply provides an extra input to the output neurons during training. The synapses are modified only as long as the current generated by the plastic synapses does not match the output desired by the teacher (as in the perceptron learning rule). Our device is designed to learn linearly separable patterns, and we show in a series of tests that it can classify uncorrelated random spatial patterns of mean firing rates.
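
A software sketch of the learning scheme described above (the binary weights, threshold, and switching probability below are assumptions for illustration; this is not the chip's circuit): synapses switch state only when the output disagrees with the teacher, and only in the direction that reduces the error.

```python
import numpy as np

rng = np.random.default_rng(4)

# Sketch of perceptron-like learning with bistable (0/1) synapses.
# On each presentation the output is compared with the teacher signal;
# synapses are modified ONLY on a mismatch, with a small switching
# probability per synapse, in the error-reducing direction.

n_in, n_patterns = 100, 20
X = rng.choice([2.0, 50.0], size=(n_patterns, n_in))  # mean rates (Hz)
labels = rng.random(n_patterns) < 0.5                 # teacher labels

w = rng.integers(0, 2, n_in).astype(float)   # bistable synaptic weights
theta = 1300.0      # firing threshold, roughly the mean initial drive

p_switch = 0.05     # per-synapse switching probability on a mismatch
for epoch in range(500):
    errors = 0
    for x, target in zip(X, labels):
        out = (w @ x) > theta
        if out != target:
            errors += 1
            flip = rng.random(n_in) < p_switch
            active = x > 10.0                # high-rate (active) inputs
            if target:                       # should fire: potentiate
                w[flip & active] = 1.0
            else:                            # should be silent: depress
                w[flip & active] = 0.0
    if errors == 0:
        break

print(f"training stopped after {epoch + 1} epochs, {errors} errors")
```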