Extraction of latent sources of complex stimuli is critical for making sense of the world. While the brain solves this blind source separation (BSS) problem continuously, its algorithms remain unknown. Previous work on biologically-plausible BSS algorithms assumed that observed signals are linear mixtures of statistically independent or uncorrelated sources, limiting the domain of applicability of these algorithms. To overcome this limitation, we propose novel biologically-plausible neural networks for the blind separation of potentially dependent/correlated sources. Differing from previous work, we assume some general geometric, not statistical, conditions on the source vectors allowing separation of potentially dependent/correlated sources. Concretely, we assume that the source vectors are sufficiently scattered in their domains which can be described by certain polytopes. Then, we consider recovery of these sources by the DetMax criterion, which maximizes the determinant of the output correlation matrix to enforce a similar spread for the source estimates. Starting from this normative principle, and using a weighted similarity matching approach that enables arbitrary linear transformations adaptable by local learning rules, we derive two-layer biologically-plausible neural network algorithms that can separate mixtures into sources coming from a variety of source domains. We demonstrate that our algorithms outperform other biologically-plausible BSS algorithms on correlated source separation problems.
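The DetMax criterion described above can be illustrated with a minimal numerical sketch. Under the assumption that the (possibly correlated) sources are scattered in the unit hypercube, a separator that keeps its outputs spread over all directions attains a larger determinant of the output correlation matrix than one that collapses them. This sketch only illustrates the objective, not the paper's two-layer network or its local learning rules; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, t = 3, 1000                      # number of sources, number of samples

# Hypothetical setup: sources scattered in the unit hypercube [0, 1]^n,
# mixed by an unknown full-rank matrix A.
S = rng.uniform(0.0, 1.0, (n, t))
A = rng.normal(size=(n, n))
X = A @ S                           # observed mixtures

def logdet_correlation(Y):
    """Log-determinant of the output correlation matrix R_y = Y Y^T / t."""
    R = Y @ Y.T / Y.shape[1]
    sign, logdet = np.linalg.slogdet(R)
    return logdet if sign > 0 else -np.inf

# A full-rank separator keeps the outputs spread over all n directions,
# while a rank-deficient one collapses them and drives the objective down.
W_good = np.linalg.inv(A)           # ideal unmixing (for illustration only)
W_bad = np.ones((n, n))             # collapses every output onto one signal

assert logdet_correlation(W_good @ X) > logdet_correlation(W_bad @ X)
```

The determinant vanishes whenever the output correlation matrix loses rank, so maximizing it rewards source estimates that fill their domain rather than clustering along fewer directions.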
Author Information
Bariscan Bozkurt (Koc University)
# Education
* M.Sc. Student, Electrical-Electronics Engineering, Koc University (2021)
* B.Sc., Electrical-Electronics Engineering, Koc University (2018-2021)
* B.A., Mathematics, Koc University (2015-2021)

# Research
* Koc University Advanced Signal Processing and Communication Group (February 2021)
* Koc University-Is Bank Artificial Intelligence Center (September 2021)

# Business Experience
* Machine Learning Engineer (Online), Hospital on Mobile, Silicon Valley (March 2021 - September 2021)
* Machine Learning Engineering Intern, P.I. Works Inc./Applied Research, Istanbul/TURKEY (July 2020 - November 2020)
* VHDL Design - Summer Internship, ASELSAN A.Ş., Ankara/TURKEY (July 2020 - August 2020)
Cengiz Pehlevan (Harvard University)
Alper Erdogan (Koç University)
Alper T. Erdogan (Senior Member, IEEE) was born in Ankara, Turkey, in 1971. He received the B.S. degree from the Middle East Technical University, Ankara, Turkey, in 1993, and the M.S. and Ph.D. degrees from Stanford University, Stanford, CA, USA, in 1995 and 1999, respectively. He was a Principal Research Engineer with the GlobespanVirata Corporation (formerly Excess Bandwidth and Virata Corporations) from September 1999 to November 2001. He joined the Electrical and Electronics Engineering Department, Koc University, Istanbul, Turkey, in January 2002, where he is currently a Professor. His research interests include adaptive signal processing, machine learning, physical layer communications, computational neuroscience, optimization, system theory and control, and information theory. Dr. Erdogan was the recipient of several awards including TUBITAK Career Award (2005), Werner Von Siemens Excellence Award (2007), TUBA GEBIP Outstanding Young Scientist Award (2008), TUBITAK Encouragement Award (2010), and Outstanding Teaching Award (2017). He was an Associate Editor for the IEEE Transactions on Signal Processing, and he was a member of IEEE Signal Processing Theory and Methods Technical Committee.
More from the Same Authors

2021 Spotlight: Exact marginal prior distributions of finite Bayesian neural networks »
Jacob Zavatone-Veth · Cengiz Pehlevan
2022 : Contrasting random and learned features in deep Bayesian linear regression »
Jacob Zavatone-Veth · William Tong · Cengiz Pehlevan
2022 : Dynamical Mean Field Theory of Kernel Evolution in Wide Neural Networks »
Blake Bordelon · Cengiz Pehlevan 
2022 : Capacity of Group-invariant Linear Readouts from Equivariant Representations: How Many Objects can be Linearly Classified Under All Possible Views? »
Matthew Farrell · Blake Bordelon · Shubhendu Trivedi · Cengiz Pehlevan 
2022 : Training shapes the curvature of shallow neural network representations »
Jacob Zavatone-Veth · Julian Rubinfien · Cengiz Pehlevan
2023 Poster: Learning Curves for Heterogeneous Feature-Subsampled Ridge Ensembles »
Ben Ruben · Cengiz Pehlevan 
2023 Poster: Learning Curves for Deep Structured Gaussian Feature Models »
Jacob Zavatone-Veth · Cengiz Pehlevan
2023 Poster: Dynamics of Finite Width Kernel and Prediction Fluctuations in Mean Field Neural Networks »
Blake Bordelon · Cengiz Pehlevan 
2023 Poster: Long Sequence Hopfield Memory »
Hamza Chaudhry · Jacob Zavatone-Veth · Dmitry Krotov · Cengiz Pehlevan
2023 Poster: Dynamics of Temporal Difference Reinforcement Learning »
Blake Bordelon · Paul Masset · Henry Kuo · Cengiz Pehlevan 
2023 Poster: Correlative Information Maximization: A Biologically Plausible Approach to Supervised Deep Neural Networks without Weight Symmetry »
Bariscan Bozkurt · Cengiz Pehlevan · Alper Erdogan 
2023 Poster: Feature-Learning Networks Are Consistent Across Widths At Realistic Scales »
Nikhil Vyas · Alexander Atanasov · Blake Bordelon · Depen Morwani · Sabarish Sainathan · Cengiz Pehlevan 
2023 Poster: Neural Circuits for Fast Poisson Compressed Sensing in the Olfactory Bulb »
Jacob Zavatone-Veth · Paul Masset · William Tong · Joseph Zak · Venkatesh Murthy · Cengiz Pehlevan
2023 Workshop: Associative Memory & Hopfield Networks in 2023 »
Parikshit Ram · Hilde Kuehne · Daniel Lee · Cengiz Pehlevan · Mohammed Zaki · Lenka Zdeborová 
2022 Panel: Panel 3C-5: Biologically-Plausible Determinant Maximization… & What's the Harm? ... »
Bariscan Bozkurt · Nathan Kallus 
2022 Poster: Self-Supervised Learning with an Information Maximization Criterion »
Serdar Ozsoy · Shadi Hamdan · Sercan Arik · Deniz Yuret · Alper Erdogan 
2022 Poster: Self-Consistent Dynamical Field Theory of Kernel Evolution in Wide Neural Networks »
Blake Bordelon · Cengiz Pehlevan 
2022 Poster: Natural gradient enables fast sampling in spiking neural networks »
Paul Masset · Jacob Zavatone-Veth · J. Patrick Connor · Venkatesh Murthy · Cengiz Pehlevan
2021 Poster: Asymptotics of representation learning in finite Bayesian neural networks »
Jacob Zavatone-Veth · Abdulkadir Canatar · Ben Ruben · Cengiz Pehlevan
2021 Poster: Exact marginal prior distributions of finite Bayesian neural networks »
Jacob Zavatone-Veth · Cengiz Pehlevan
2021 Poster: Out-of-Distribution Generalization in Kernel Regression »
Abdulkadir Canatar · Blake Bordelon · Cengiz Pehlevan 
2021 Poster: Attention Approximates Sparse Distributed Memory »
Trenton Bricken · Cengiz Pehlevan 
2020 Poster: Minimax Dynamics of Optimally Balanced Spiking Networks of Excitatory and Inhibitory Neurons »
Qianyi Li · Cengiz Pehlevan 
2019 Poster: Structured and Deep Similarity Matching via Structured and Deep Hebbian Networks »
Dina Obeid · Hugo Ramambason · Cengiz Pehlevan 
2018 Poster: Manifold-tiling Localized Receptive Fields are Optimal in Similarity-preserving Neural Networks »
Anirvan Sengupta · Cengiz Pehlevan · Mariano Tepper · Alexander Genkin · Dmitri Chklovskii 
2015 Poster: A Normative Theory of Adaptive Dimensionality Reduction in Neural Networks »
Cengiz Pehlevan · Dmitri Chklovskii