Poster

Soft Tensor Product Representations for Fully Continuous, Compositional Visual Representations

Bethia Sun · Maurice Pagnucco · Yang Song

East Exhibit Hall A-C #3606
Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Since the inception of the classicalist vs. connectionist debate, it has been argued that the ability to systematically combine symbol-like entities into compositional representations is crucial for human intelligence. In connectionist systems, the field of disentanglement has emerged to address this need by producing representations with explicitly separated factors of variation (FoV). By treating the overall representation as a string-like concatenation of the inferred FoVs, however, disentanglement provides a fundamentally symbolic treatment of compositional structure, one inherently at odds with the underlying continuity of deep learning vector spaces. We hypothesise that this symbolic-continuous mismatch produces broadly suboptimal performance in deep learning models that learn or use such representations. To fully align compositional representations with continuous vector spaces, we extend Smolensky's Tensor Product Representation (TPR) and propose a new type of inherently continuous compositional representation, Soft TPR, along with a theoretically-principled architecture, Soft TPR Autoencoder, designed specifically for learning Soft TPRs. In the visual representation learning domain, our Soft TPR confers broad benefits over symbolic compositional representations: state-of-the-art disentanglement and improved representation learner convergence, along with enhanced sample efficiency and superior low-sample regime performance for downstream models, empirically affirming the value of our inherently continuous compositional representation learning framework.
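To ground the discussion, here is a minimal NumPy sketch of Smolensky's classical Tensor Product Representation that the paper's Soft TPR generalises: each filler (factor-of-variation value) is bound to a role vector via an outer product, and the overall representation is the superposition of the bindings. This is an illustration of the well-known classical construction only; all names and dimensions are assumptions for the example, and the paper's Soft TPR relaxes this scheme rather than reproducing it.

```python
import numpy as np

# Illustrative sketch of a classical TPR (Smolensky 1990), which Soft TPR
# extends. Dimensions and variable names here are hypothetical.
rng = np.random.default_rng(0)

d_filler, d_role, n_factors = 4, 3, 3

# One filler vector per factor of variation (e.g. colour, shape, size).
fillers = rng.normal(size=(n_factors, d_filler))

# Orthonormal role vectors (rows of an orthogonal matrix), so that each
# filler can be recovered exactly by unbinding.
roles, _ = np.linalg.qr(rng.normal(size=(d_role, d_role)))
roles = roles[:n_factors]

# Bind each filler to its role via an outer product and superpose.
tpr = sum(np.outer(f, r) for f, r in zip(fillers, roles))  # (d_filler, d_role)

# Unbinding: with orthonormal roles, tpr @ r_i recovers filler i, because
# tpr @ r_i = sum_j f_j (r_j . r_i) = f_i.
recovered = tpr @ roles[0]
assert np.allclose(recovered, fillers[0])
```

Contrast this with the "string-like concatenation" the abstract critiques: a disentangled representation would simply stack the fillers end to end, whereas the tensor-product binding distributes every factor across the whole representation, which is the property Soft TPR carries into a fully continuous setting.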
