Tutorial

Deep Learning in Natural Language Processing

Ronan Collobert · Jason E Weston

Regency E/F

Abstract:

This tutorial will describe recent advances in deep learning techniques for Natural Language Processing (NLP). Traditional NLP approaches favour shallow systems, possibly cascaded, with adequate hand-crafted features. In contrast, we are interested in end-to-end architectures: these systems include several feature layers, with increasing abstraction at each layer. Compared to shallow systems, these feature layers are learnt for the task of interest and do not require any feature engineering. We will show how neural networks are naturally well suited for end-to-end learning in NLP tasks. We will cover multi-task learning across different NLP tasks and new semi-supervised learning techniques adapted to these deep architectures, and review end-to-end structured output learning. Finally, we will highlight how some of these advances can be applied to other fields of research, such as computer vision.
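To make the idea of stacked, learnt feature layers concrete, below is a minimal, hypothetical sketch (in NumPy, not taken from the tutorial materials) of a window-based tagging network of the kind discussed: a word lookup table feeding a hidden layer and a linear scoring layer, all of whose parameters would be trained jointly by backpropagation. All dimensions, names, and values are illustrative assumptions.

```python
# Illustrative sketch (not from the tutorial): a window-based tagger whose
# feature layers are learnt end-to-end. Sizes and indices are made up.
import numpy as np

rng = np.random.default_rng(0)

vocab_size, embed_dim = 10_000, 50     # word lookup table (first feature layer)
window, hidden_dim, n_tags = 5, 300, 10

# Parameters that would all be trained jointly on the task of interest.
E  = rng.normal(scale=0.1, size=(vocab_size, embed_dim))          # embeddings
W1 = rng.normal(scale=0.1, size=(window * embed_dim, hidden_dim))
b1 = np.zeros(hidden_dim)
W2 = rng.normal(scale=0.1, size=(hidden_dim, n_tags))
b2 = np.zeros(n_tags)

def hard_tanh(x):
    return np.clip(x, -1.0, 1.0)

def tag_scores(word_ids):
    """Score every tag for the centre word of a window of word indices."""
    x = E[word_ids].reshape(-1)          # layer 1: learnt word features
    h = hard_tanh(x @ W1 + b1)           # layer 2: higher-level features
    return h @ W2 + b2                   # layer 3: per-tag scores

# Example: a 5-word window of (arbitrary) word indices around a centre word.
print(tag_scores(np.array([12, 845, 3, 77, 901])))
```

In this sketch the only hand-designed choice is the window size; the word features and the hidden representation are both learnt, which is the contrast with hand-crafted-feature pipelines drawn in the abstract.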