

Select-and-Sample for Spike-and-Slab Sparse Coding

Abdul-Saboor Sheikh · Jörg Lücke

Area 5+6+7+8 #123

Keywords: [ (Other) Unsupervised Learning Methods ] [ (Other) Probabilistic Models and Methods ] [ Sparsity and Feature Selection ] [ Variational Inference ] [ (Other) Neuroscience ] [ (Cognitive/Neuroscience) Neural Coding ]


Probabilistic inference serves as a popular model for neural processing. It remains unclear, however, how approximate probabilistic inference can be both accurate and scalable to very high-dimensional continuous latent spaces, especially as typical posteriors for sensory data can be expected to exhibit complex latent dependencies, including multiple modes. Here, we study an approach that can be scaled efficiently while maintaining a richly structured posterior approximation under these conditions. As an example model we use spike-and-slab sparse coding for V1 processing, and combine latent subspace selection with Gibbs sampling (select-and-sample). Unlike factored variational approaches, the method can maintain large numbers of posterior modes and complex latent dependencies. Unlike pure sampling, the method scales to very high-dimensional latent spaces. Among all sparse coding approaches with non-trivial posterior approximations (beyond MAP or ICA-like models), we report the largest-scale results. In applications, we first verify the approach by showing competitiveness on standard denoising benchmarks. Second, we use its scalability to study, for the first time, highly overcomplete settings for V1 encoding using sophisticated posterior representations. More generally, our study shows that very accurate probabilistic inference for multi-modal posteriors with complex dependencies is tractable, functionally desirable, and consistent with models of neural inference.
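The select-and-sample idea described in the abstract — restrict inference to a data-driven latent subspace, then run Gibbs sampling only within it — can be sketched for a spike-and-slab generative model y ~ N(W(s ⊙ z), σ²I) with s_h ~ Bern(π) and z_h ~ N(0, σ_z²). The sketch below is illustrative only: the selection score (normalized basis-function match) and the collapsed Gibbs update (z_h integrated out of the spike odds) are one standard construction under these assumptions, not the paper's exact algorithm, and all names and parameters are hypothetical.

```python
import numpy as np

def select_and_sample(y, W, pi, sigma, sigma_z, n_select, n_sweeps, rng):
    """Approximate posterior samples of (s, z) for one data point y under a
    spike-and-slab sparse coding model (illustrative sketch, not the
    paper's implementation):
        y ~ N(W (s * z), sigma^2 I),  s_h ~ Bern(pi),  z_h ~ N(0, sigma_z^2).
    Selection: keep the n_select latents whose normalized basis functions
    best match y; Gibbs-sample only within that subspace (rest clamped to 0).
    """
    H = W.shape[1]
    # Selection step: cheap per-latent relevance score |w_h . y| / ||w_h||.
    scores = np.abs(W.T @ y) / (np.linalg.norm(W, axis=0) + 1e-12)
    selected = np.argsort(scores)[-n_select:]

    s = np.zeros(H)  # spike (binary) variables
    z = np.zeros(H)  # slab (continuous) variables
    samples = []
    for _ in range(n_sweeps):
        for h in selected:
            # Residual of y with unit h's current contribution removed.
            r = y - W @ (s * z) + W[:, h] * (s[h] * z[h])
            # Gaussian natural parameters of z_h | s_h = 1, rest.
            a = (W[:, h] @ W[:, h]) / sigma**2 + 1.0 / sigma_z**2
            b = (W[:, h] @ r) / sigma**2
            # Collapsed log-odds p(s_h=1 | rest) / p(s_h=0 | rest),
            # with z_h integrated out analytically.
            log_odds = (np.log(pi / (1.0 - pi))
                        - 0.5 * np.log(sigma_z**2 * a)
                        + b**2 / (2.0 * a))
            log_odds = np.clip(log_odds, -30.0, 30.0)
            s[h] = float(rng.random() < 1.0 / (1.0 + np.exp(-log_odds)))
            # Slab draw from its Gaussian conditional if the spike is on.
            z[h] = rng.normal(b / a, 1.0 / np.sqrt(a)) if s[h] else 0.0
        samples.append((s.copy(), z.copy()))
    return selected, samples
```

Because each Gibbs sweep touches only the `n_select` selected latents rather than all `H`, the per-datapoint cost is decoupled from the full latent dimensionality, while the sampler can still represent multiple posterior modes and dependencies inside the selected subspace.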
