

Poster in Workshop: Shared Visual Representations in Human and Machine Intelligence (SVRHM)

Local Geometry Constraints in V1 with Deep Recurrent Autoencoders

Jonathan Huml · Demba Ba


Abstract: The classical sparse coding model represents visual stimuli as a linear combination of a handful of learned basis functions that are Gabor-like when trained on natural image data. However, the Gabor-like filters learned by classical sparse coding far overpredict well-tuned simple cell receptive field (SCRF) profiles. To address this problem, we use an autoencoder trained with a weighted-$\ell_1$ (WL) penalty that encourages self-similarity of basis function usage; the architecture maintains a natural hierarchical structure when paired with a discriminative loss. The weighted-$\ell_1$ constraint matches the spatial phase symmetry of recent contrastive objectives while preserving core ideas of the sparse coding framework, and it offers a promising path toward describing the differentiation of receptive fields in terms of this discriminative hierarchy in future work.
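To make the weighted-$\ell_1$ idea concrete, below is a minimal NumPy sketch of sparse encoding by iterative soft-thresholding (ISTA) in which each coefficient carries its own penalty weight, so the threshold applied to each basis function can differ. The function name, uniform-weight example, and unrolling depth are illustrative assumptions for this sketch, not the authors' architecture or weighting scheme.

```python
import numpy as np

def weighted_ista_encode(x, D, w, lam=0.1, n_iter=50):
    """Encode stimulus x as a sparse code a under dictionary D by iterative
    soft-thresholding, with per-coefficient weights w implementing a
    weighted-l1 penalty lam * sum_i w_i |a_i|.

    Illustrative sketch only; the paper's recurrent autoencoder and its
    exact weighting scheme may differ.
    """
    n_atoms = D.shape[1]
    a = np.zeros(n_atoms)
    L = np.linalg.norm(D, ord=2) ** 2        # Lipschitz constant of the data-term gradient
    step = 1.0 / L
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)             # gradient of 0.5 * ||x - D a||^2
        z = a - step * grad
        thresh = step * lam * w              # per-atom threshold set by the weights
        a = np.sign(z) * np.maximum(np.abs(z) - thresh, 0.0)  # soft threshold
    return a

# Toy usage: with uniform weights the penalty reduces to the classical l1 case.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)               # unit-norm basis functions
x = rng.standard_normal(64)
w = np.ones(128)                             # uniform weights -> ordinary sparse coding
a = weighted_ista_encode(x, D, w)
x_hat = D @ a                                # decoder: linear reconstruction
```

In this reading, a non-uniform weight vector lowers the threshold for basis functions that should be used together and raises it elsewhere, which is one way to encourage the self-similarity of basis function usage described in the abstract.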
