Poster
Untangling in Invariant Speech Recognition
Cory Stephenson · Jenelle Feather · Suchismita Padhy · Oguz Elibol · Hanlin Tang · Josh McDermott · SueYeon Chung
East Exhibition Hall B, C #241
Keywords: [ Statistical Physics of Learning ] [ Theory ] [ Neural Coding ] [ Neuroscience and Cognitive Science -> Auditory Perception ]
Encouraged by the success of deep convolutional neural networks on a variety of visual tasks, much theoretical and experimental work has been aimed at understanding and interpreting how vision networks operate. At the same time, deep neural networks have also achieved impressive performance in audio processing applications, both as sub-components of larger systems and as complete end-to-end systems on their own. Despite these empirical successes, comparatively little is understood about how audio models accomplish these tasks. In this work, we employ a recently developed statistical mechanical theory that connects the geometric properties of network representations to the separability of classes, and use it to probe how information is untangled within neural networks trained to recognize speech. We observe that speaker-specific nuisance variations are discarded by the network's hierarchy, whereas task-relevant properties such as words and phonemes are untangled in later layers. Higher-level concepts such as parts of speech and context dependence also emerge in the later layers of the network. Finally, we find that the deep representations carry out significant temporal untangling by efficiently extracting task-relevant features at each time step of the computation. Taken together, these findings shed light on how deep auditory models process their time-dependent input signals to achieve invariant speech recognition, and show how different concepts emerge through the layers of the network.
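To make the layerwise "untangling" measurement concrete, the sketch below probes how linearly separable class manifolds are at each depth of a network. This is not the authors' analysis pipeline (the paper computes a mean-field manifold capacity from the statistical mechanical theory); a linear probe per layer is only a crude proxy for that quantity, and all layer names, sizes, and data here are fabricated stand-ins. In the paper's setting, `features[layer]` would hold a trained speech network's activations for labeled utterance frames, and `labels` would be word, phoneme, or speaker identity.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

# Toy stand-in for per-layer network activations: fabricated layers whose
# class signal grows with depth, so the probe has structure to find.
rng = np.random.default_rng(0)
n_samples, n_units, n_classes, n_layers = 600, 128, 10, 5
labels = rng.integers(0, n_classes, size=n_samples)
class_means = rng.normal(size=(n_classes, n_units))
features = {
    f"layer_{d}": rng.normal(size=(n_samples, n_units)) + 0.25 * d * class_means[labels]
    for d in range(n_layers)
}

# Linear separability of the class manifolds at each depth, as a rough
# proxy for manifold capacity. Accuracy rising with depth suggests the
# probed variable (e.g., word identity) is being untangled; flat or
# falling accuracy (e.g., for speaker identity) suggests the hierarchy
# is discarding that variation.
for name, x in features.items():
    acc = cross_val_score(LinearSVC(max_iter=5000), x, labels, cv=5).mean()
    print(f"{name}: 5-fold linear probe accuracy = {acc:.2f}")
```

Running the same probe with speaker labels in place of word labels would, per the abstract's findings, show the opposite trend across depth.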