Session
Spotlights
Chris Williams
An in-silico Neural Model of Dynamic Routing through Neuronal Coherence
Devarajan Sridharan · Brian Percival · John Arthur · Kwabena A Boahen
We describe a neurobiologically plausible mechanism for dynamic routing based on the concept of neuronal communication through coherence. The model, implemented on a neuromorphic chip, incorporates a three-tier neural network architecture: the lowest tier comprises the raw input representations, the middle tier the routing neurons, and the topmost tier the invariant output representation. The correct mapping between input and output representations is realized by an appropriate alignment of the phases of their background oscillations by the routing neurons. We demonstrate that our method dramatically reduces the number of connections required, from $O(N^3)$ to $O(N^2)$.
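A minimal illustrative sketch of the routing-through-coherence idea (our simplification, not the chip implementation; the Poisson spiking model, frequency, phases, and rates below are all assumed). Inputs oscillate at a common frequency with different phases, and the output gate is phase-aligned with the selected input, so only that input drives the output effectively:

    import numpy as np

    # Toy model: three inputs oscillate at a common gamma-band frequency with
    # different phases; the output integrates input spikes gated by its own
    # oscillation, so whichever input the routing stage phase-aligns with the
    # gate dominates the output drive.
    rng = np.random.default_rng(0)
    f, dt = 40.0, 1e-3                      # oscillation frequency (Hz), time step (s)
    t = np.arange(0.0, 1.0, dt)

    def poisson_spikes(phase, rate=80.0):
        # Poisson spiking modulated by a background oscillation at `phase`.
        lam = rate * (1.0 + np.cos(2 * np.pi * f * t - phase)) / 2.0
        return rng.random(t.size) < lam * dt

    phases = [0.0, 2.0, 4.0]                # input populations' oscillation phases
    inputs = [poisson_spikes(p) for p in phases]
    route_to = 1                            # routing stage selects input 1 ...
    gate = (1.0 + np.cos(2 * np.pi * f * t - phases[route_to])) / 2.0  # ... by aligning the gate
    print([float(np.sum(s * gate)) for s in inputs])   # the phase-aligned input dominates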
Contraction Properties of VLSI Cooperative Competitive Neural Networks of Spiking Neurons
Emre Neftci · Elisabetta Chicca · Giacomo Indiveri · Jean-Jacques Slotine · Rodney J Douglas
A nonlinear dynamic system is called contracting if initial conditions are forgotten exponentially fast, so that all trajectories converge to a single trajectory which is the solution of the system. We use contraction theory to derive an upper bound for the strength of recurrent connections that guarantees contraction for complex neural networks. Specifically, we apply this theory to a special class of recurrent networks which are an abstract representation of the cooperative-competitive connectivity observed in cortex and often called Cooperative Competitive Networks (CCNs). This specific type of network is believed to play a major role in shaping cortical responses and selecting the relevant signal among distractors and noise. In this paper, we analyze contraction of combined CCNs of linear threshold units and verify the results of our analysis in a hybrid analog/digital VLSI CCN comprising spiking neurons and dynamic synapses.
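As a rough illustration of a contraction condition on recurrent weights (a generic sufficient bound, not the paper's sharper CCN-specific result; the toy weight matrix is our assumption):

    import numpy as np

    # For a recurrent network dx/dt = -x + W*f(x) + b with f a linear-threshold
    # (slope <= 1) nonlinearity, a standard sufficient condition for contraction
    # is that the spectral norm of W stay below 1.
    def is_contracting(W):
        return np.linalg.norm(W, 2) < 1.0

    N = 8
    # Toy cooperative-competitive weights: local self-excitation, global inhibition.
    W = 0.3 * np.eye(N) - 0.05 * np.ones((N, N))
    print(is_contracting(W))                # True: all trajectories converge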
Extending position/phase-shift tuning to motion energy neurons improves velocity discrimination
Stanley Yiu Man Lam · Bertram E Shi
We extend position and phase-shift tuning, concepts already well established in the disparity energy neuron literature, to motion energy neurons. We show that Reichardt-like detectors can be considered examples of position tuning, and that motion energy filters whose complex valued spatio-temporal receptive fields are space-time separable can be considered examples of phase tuning. By combining these two types of detectors, we obtain an architecture for constructing motion energy neurons whose center frequencies can be adjusted by both phase and position shifts. Similar to recently described neurons in the primary visual cortex, these new motion energy neurons exhibit tuning that is between purely space-time separable and purely speed tuned. We propose a functional role for this intermediate level of tuning by demonstrating that comparisons between pairs of these motion energy neurons can reliably discriminate between inputs whose velocities lie above or below a given reference velocity.
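A minimal sketch of a motion energy unit whose tuning can be adjusted by a position shift (dx) or a phase shift (dphi); a quadrature pair of space-time Gabor filters is squared and summed, and all parameter values are illustrative assumptions, not taken from the paper:

    import numpy as np

    x = np.linspace(-2, 2, 65)              # space (deg)
    t = np.linspace(0, 0.5, 50)             # time (s)
    X, T = np.meshgrid(x, t)
    fx, ft = 2.0, 4.0                       # spatial / temporal carrier frequencies

    def motion_energy(stim, dx=0.0, dphi=0.0):
        env = np.exp(-(X - dx) ** 2 / 0.5 - (T - 0.25) ** 2 / 0.02)
        phase = 2 * np.pi * (fx * (X - dx) + ft * T) + dphi
        even = np.sum(stim * env * np.cos(phase))     # quadrature pair
        odd = np.sum(stim * env * np.sin(phase))
        return even ** 2 + odd ** 2

    # A grating drifting at the unit's preferred velocity -ft/fx.
    stim = np.cos(2 * np.pi * fx * (X + (ft / fx) * T))
    print(motion_energy(stim), motion_energy(stim, dx=0.2), motion_energy(stim, dphi=np.pi / 2))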
Inferring Neural Firing Rates from Spike Trains Using Gaussian Processes
John P Cunningham · Byron M Yu · Krishna V Shenoy · Maneesh Sahani
Neural signals present challenges to analytical efforts due to their noisy, spiking nature. Many studies of neuroscientific and neural prosthetic importance rely on a smoothed, denoised neural signal considered to be the spike train's underlying firing rate. Current techniques to find time varying firing rates require ad hoc choices of parameters, offer no confidence intervals on their estimates, and can obscure potentially important single trial variability. We present a new method, based on a Gaussian Process prior, for inferring probabilistically optimal estimates of firing rate functions underlying single or multiple neural spike trains. We simulate spike trains to test the performance of the method and demonstrate significant average error improvement over standard smoothing techniques.
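A minimal sketch of the idea (not the authors' exact algorithm): place a Gaussian Process prior on the firing-rate function and regress binned spike counts on time, yielding a posterior mean rate with pointwise error bars. The RBF kernel and Gaussian observation noise are simplifying assumptions; the paper uses a proper point-process treatment:

    import numpy as np

    rng = np.random.default_rng(0)
    dt = 0.02
    t = np.arange(0.0, 2.0, dt)                       # 100 time bins
    true_rate = 20 + 15 * np.sin(2 * np.pi * t)       # synthetic rate (Hz)
    y = rng.poisson(true_rate * dt) / dt              # naive binned rate estimates

    def rbf(a, b, ell=0.15, sf=15.0):
        # Squared-exponential covariance over time points.
        return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

    K = rbf(t, t)
    noise_var = y.mean() / dt                         # Poisson variance of y is about rate/dt
    A = K + noise_var * np.eye(t.size)
    mean = y.mean() + K @ np.linalg.solve(A, y - y.mean())   # posterior mean rate (Hz)
    var = np.diag(K - K @ np.linalg.solve(A, K))             # pointwise posterior variance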
Invariant Common Spatial Patterns: Alleviating Nonstationarities in Brain-Computer Interfacing
Benjamin Blankertz · Motoaki Kawanabe · Ryota Tomioka · Friederike Hohlefeld · Vadim Nikulin · Klaus-Robert Müller
Brain-Computer Interfaces can suffer from large variance in subject conditions within and across sessions. For example, vigilance fluctuations, variable task involvement, workload, etc., alter the characteristics of EEG signals and thus challenge stable BCI operation. In the present work we aim to define features, based on a variant of the common spatial patterns (CSP) algorithm, that are constructed to be invariant with respect to such nonstationarities. We enforce invariance properties by adding terms, such as disturbance covariance matrices from fluctuations in visual processing, to the denominator of a Rayleigh coefficient representation of CSP. In this manner physiological prior knowledge can be used to shape the classification engine for BCI. As a proof of concept we present a BCI classifier that is robust to changes in the level of parietal alpha-activity; in other words, the EEG decoding still works when there are lapses in vigilance.
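A minimal sketch of the invariant-CSP idea: ordinary CSP maximizes a Rayleigh quotient via a generalized eigenvalue problem, and invariance is encouraged by adding a disturbance covariance to the denominator. The covariance matrices below are random placeholders and the trade-off parameter xi is our assumption:

    import numpy as np
    from scipy.linalg import eigh

    rng = np.random.default_rng(0)

    def random_cov(n):
        # Placeholder for a class / disturbance covariance estimated from EEG.
        A = rng.standard_normal((n, n))
        return A @ A.T / n

    n_ch = 16
    S1, S2, D = random_cov(n_ch), random_cov(n_ch), random_cov(n_ch)
    xi = 0.5                                 # weight on the disturbance term (assumed)

    # Generalized eigenvalue problem: S1 w = lambda (S1 + S2 + xi * D) w.
    evals, W = eigh(S1, S1 + S2 + xi * D)
    filters = W[:, np.argsort(evals)[::-1][:3]]   # top invariant spatial filters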
Measuring Neural Synchrony by Message Passing
Justin Dauwels · François Vialatte · Tomasz M Rutkowski · Andrzej S CICHOCKI
A novel approach to measure the interdependence of two time series is proposed, referred to as “stochastic event synchrony” (SES); it quantifies the alignment of two point processes by means of the following parameters: time delay, standard deviation of the timing jitter, the fraction of “spurious” events, and the average similarity of the events. In contrast to other measures, SES quantifies the synchrony of oscillatory events (rather than more conventional amplitude or phase synchrony). Pairwise alignment of the point processes is cast as a statistical inference problem, which is solved by applying the max-product algorithm on a graphical model. The SES parameters are determined from the resulting pairwise alignment by maximum a posteriori (MAP) estimation. The proposed interdependence measure is applied to the problem of detecting anomalies in EEG synchrony of Mild Cognitive Impairment (MCI) patients.
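A minimal sketch of the alignment step only. The paper solves it with max-product on a graphical model; here a simple dynamic program pairs the events of two point processes, charging a quadratic cost for timing jitter and a fixed cost for leaving an event unmatched (i.e., "spurious"). The cost values are assumptions, not from the paper:

    import numpy as np

    def align(s, t, jitter_cost=1.0, spurious_cost=0.5):
        # s, t: sorted event times of the two point processes.
        n, m = len(s), len(t)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, :] = spurious_cost * np.arange(m + 1)
        D[:, 0] = spurious_cost * np.arange(n + 1)
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                match = D[i - 1, j - 1] + jitter_cost * (s[i - 1] - t[j - 1]) ** 2
                D[i, j] = min(match,
                              D[i - 1, j] + spurious_cost,    # event in s is spurious
                              D[i, j - 1] + spurious_cost)    # event in t is spurious
        return D[n, m]

    s = np.array([0.10, 0.30, 0.55, 0.80])
    t = np.array([0.12, 0.33, 0.81, 0.95])
    print(align(s, t))        # low alignment cost = high synchrony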
Near-Maximum Entropy Models for Binary Neural Representations of Natural Images
Matthias Bethge · Philipp Berens
Maximum entropy analysis of binary variables provides an elegant way for studying the role of pairwise correlations in neural populations. Unfortunately, these approaches suffer from poor scalability to high dimensions. In sensory coding, however, high-dimensional data is ubiquitous. Here, we introduce a new approach based on a near-maximum entropy model that makes this type of analysis feasible for very high-dimensional data: the model parameters can be derived in closed form and sampling is easy. We demonstrate its usefulness by studying a simple neural representation model of natural images. For the first time, we are able to directly compare predictions from a pairwise maximum entropy model not only in small groups of neurons, but also in larger populations of more than a thousand units. Our results indicate that in such larger networks interactions exist that are not predicted by pairwise correlations, despite the fact that pairwise correlations explain the lower-dimensional marginal statistics extremely well up to the limit of dimensionality where estimation of the full joint distribution is feasible.
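One common near-maximum entropy construction with closed-form parameters and easy sampling is a dichotomized Gaussian: threshold latent Gaussian samples to obtain correlated binary "spike" patterns. A minimal sketch under assumed parameters (the latent covariance is set directly for illustration rather than solved from target binary correlations):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000                                  # population size: sampling stays cheap
    rho = 0.2                                 # latent pairwise correlation (assumed)
    cov = (1 - rho) * np.eye(n) + rho         # equicorrelated latent covariance
    L = np.linalg.cholesky(cov)
    z = L @ rng.standard_normal((n, 2000))    # latent Gaussian samples
    spikes = (z > 1.0).astype(int)            # threshold -> binary spike patterns
    print(spikes.mean(), np.corrcoef(spikes[:2])[0, 1])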
The rat as particle filter
Nathaniel D Daw · Aaron C Courville
The core tenet of Bayesian modeling is that subjects represent beliefs as distributions over possible hypotheses. Such models have fruitfully been applied to the study of learning in the context of animal conditioning experiments (and analogously designed human learning tasks), where they explain phenomena such as retrospective revaluation that seem to demonstrate that subjects entertain multiple hypotheses simultaneously. However, a recent quantitative analysis of individual subject records by Gallistel and colleagues cast doubt on a very broad family of conditioning models by showing that all of the key features the models capture about even simple learning curves are artifacts of averaging over subjects. Rather than showing smooth learning curves (which Bayesian models interpret as revealing the gradual tradeoff from prior to posterior as data accumulate), subjects acquire suddenly, and their predictions continue to fluctuate abruptly. These data demand revisiting the modeling of the individual versus the ensemble, and also raise the worry that more sophisticated behaviors thought to support Bayesian models might also emerge artifactually from averaging over the simpler behavior of individuals. We suggest that the suddenness of changes in subjects' beliefs (as expressed in conditioned behavior) can be modeled by assuming they are conducting inference using sequential Monte Carlo sampling with a small number of samples (one, in our simulations). Ensemble behavior resembles exact Bayesian models since, as in particle filters, it averages over many samples. Further, the model is capable of exhibiting sophisticated behaviors like retrospective revaluation at the ensemble level, even given minimally sophisticated individuals that do not track uncertainty from trial to trial. These results point to the need for more sophisticated experimental analyses to test Bayesian models, and refocus theorizing on the individual, while at the same time clarifying why the ensemble may be of interest.
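A minimal sketch of the single-sample idea (our illustrative construction, not the authors' model; the generative task, the resampling rule, and the threshold are all assumptions). One hypothesis about a latent reward probability is carried forward and abruptly replaced when the evidence drifts too far from it, producing step-like individual learning curves:

    import numpy as np

    rng = np.random.default_rng(2)
    true_p = np.r_[np.full(100, 0.2), np.full(100, 0.8)]   # reward prob jumps mid-way
    rewards = rng.random(200) < true_p

    particle = 0.5                 # the single hypothesis about the reward probability
    a, b = 1.0, 1.0                # pseudo-counts of evidence under that hypothesis
    beliefs = []
    for r in rewards:
        a, b = a + r, b + (1 - r)
        # Jump to a fresh hypothesis when the evidence drifts far from the current one.
        if abs(a / (a + b) - particle) > 0.25:
            particle = rng.beta(a, b)                     # propose a new hypothesis
            a, b = 2 * particle, 2 * (1 - particle)       # restart counts around it
        beliefs.append(particle)
    # `beliefs` changes in abrupt jumps, unlike a smoothly averaged learning curve;
    # averaging many such runs recovers the familiar gradual ensemble curve.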