

Oral presentation at the Workshop: Shared Visual Representations in Human and Machine Intelligence (SVRHM)

The bandwidth of perceptual awareness is constrained by specific high-level visual features

Michael Cohen · Kirsten Lydic · N Apurva Ratan Murty


Abstract:

When observers glance at a natural scene, which aspects of that scene ultimately reach perceptual awareness? To answer this question, we showed observers images of scenes that had been altered in numerous ways in the periphery (e.g., scrambling, rotating, or filtering) and measured how often these different alterations were noticed in an inattentional blindness paradigm. We then screened a wide range of deep convolutional neural network architectures and asked which layers and features best predict the rates at which observers noticed these alterations. We found that features in the higher (but not earlier) layers predicted how often observers noticed different alterations with extremely high accuracy (at the estimated noise ceiling). Surprisingly, the models' prediction accuracy was driven by a very small fraction of features that were both necessary and sufficient to predict the observed behavior, and that we could easily visualize. Together, these results indicate that human perceptual awareness is limited by high-level visual features that we can estimate using computational methods.
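The modeling approach described above (predicting behavioral noticing rates from DNN layer features with a cross-validated linear readout) can be sketched roughly as follows. This is a minimal illustrative sketch using synthetic stand-in data, not the authors' code: the feature matrix, the number of alterations, the ridge penalty, and the sparse ground-truth weights are all assumptions for demonstration. In the actual study, the features would be activations from the higher layers of pretrained CNNs, and the targets would be measured noticing rates.

```python
# Hedged sketch: a cross-validated linear readout from (synthetic) high-level
# feature activations to noticing rates, with only a small fraction of
# features carrying signal, loosely mirroring the abstract's finding.
import numpy as np

rng = np.random.default_rng(0)

n_alterations, n_features = 60, 20      # assumed sizes, for illustration only
features = rng.normal(size=(n_alterations, n_features))

# Assume only a handful of features drive behavior (sparse true weights).
true_weights = np.zeros(n_features)
true_weights[:5] = rng.normal(size=5)
noticing_rate = features @ true_weights + 0.1 * rng.normal(size=n_alterations)

# Leave-one-out cross-validation of a ridge-regularized linear readout.
lam = 1.0                               # assumed regularization strength
preds = np.empty(n_alterations)
for i in range(n_alterations):
    mask = np.arange(n_alterations) != i
    X, y = features[mask], noticing_rate[mask]
    w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)
    preds[i] = features[i] @ w

# Held-out prediction accuracy (correlation between predicted and observed).
r = np.corrcoef(preds, noticing_rate)[0, 1]
print(f"cross-validated correlation: {r:.2f}")
```

With well-conditioned synthetic data like this, the cross-validated correlation is high; in the study, the analogous accuracy reached the estimated noise ceiling for higher-layer features.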
