Poster

Unsupervised Learning of Spoken Language with Visual Context

David Harwath · Antonio Torralba · James Glass

Area 5+6+7+8 #97

Keywords: [ (Application) Natural Language and Text Processing ] [ (Application) Information Retrieval ] [ (Application) Signal and Speech Processing ] [ (Application) Object and Pattern Recognition ] [ (Application) Computer Vision ] [ Deep Learning or Neural Networks ]


Abstract:

Humans learn to speak before they can read or write, so why can’t computers do the same? In this paper, we present a deep neural network model capable of rudimentary spoken language acquisition using untranscribed audio training data, whose only supervision comes in the form of contextually relevant visual images. We describe our data collection effort, which comprises over 120,000 spoken audio captions for the Places image dataset, and evaluate our model on an image search and annotation task. We also provide visualizations which suggest that our model is learning to recognize meaningful words within the caption spectrograms.
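The abstract describes a model that is supervised only by pairing spoken captions with their images. A common way to realize this kind of setup is a two-branch embedding network: an image branch and a spectrogram branch map their inputs into a shared space, and matched image/caption pairs are trained to score higher than mismatched ones. The PyTorch sketch below is an illustrative assumption, not the authors' released code; the layer sizes, the use of precomputed image features, and the margin ranking loss are all hypothetical choices made for clarity.

```python
# Minimal sketch of a two-branch audio-visual embedding model.
# All architectural details here are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AudioVisualEmbedder(nn.Module):
    def __init__(self, embed_dim=512):
        super().__init__()
        # Image branch: assume a precomputed 4096-d CNN feature per image,
        # projected into the shared embedding space.
        self.image_proj = nn.Linear(4096, embed_dim)
        # Audio branch: a small CNN over log-mel spectrograms shaped
        # (batch, 1, n_mels=40, n_frames), pooled over time into one vector.
        self.audio_cnn = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=(40, 5), padding=(0, 2)),   # collapse mel axis
            nn.ReLU(),
            nn.Conv2d(64, embed_dim, kernel_size=(1, 11), padding=(0, 5)),
            nn.ReLU(),
        )

    def embed_image(self, image_feats):
        return F.normalize(self.image_proj(image_feats), dim=-1)

    def embed_audio(self, spectrograms):
        h = self.audio_cnn(spectrograms)      # (batch, embed_dim, 1, n_frames)
        h = h.mean(dim=(2, 3))                # average-pool over time
        return F.normalize(h, dim=-1)


def ranking_loss(image_emb, audio_emb, margin=1.0):
    """Hinge loss pushing matched pairs above mismatched ones within a batch."""
    sims = image_emb @ audio_emb.t()          # (batch, batch) similarity matrix
    pos = sims.diag().unsqueeze(1)            # matched image/caption scores
    cost_audio = (margin + sims - pos).clamp(min=0)      # wrong captions for an image
    cost_image = (margin + sims - pos.t()).clamp(min=0)  # wrong images for a caption
    mask = torch.eye(sims.size(0), dtype=torch.bool)
    cost_audio = cost_audio.masked_fill(mask, 0)
    cost_image = cost_image.masked_fill(mask, 0)
    return cost_audio.mean() + cost_image.mean()
```

Under this kind of setup, the image search task from the abstract would amount to ranking images by similarity to a spoken query's embedding, and annotation to ranking spoken captions by similarity to an image's embedding.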
