Poster

Integrated perception with recurrent multi-task neural networks

Hakan Bilen · Andrea Vedaldi

Area 5+6+7+8 #155

Keywords: [ Multi-task and Transfer Learning ] [ (Application) Object and Pattern Recognition ] [ (Application) Computer Vision ] [ Deep Learning or Neural Networks ]


Abstract:

Modern discriminative predictors have been shown to match natural intelligences in specific perceptual tasks such as image classification, object and part detection, and boundary extraction. However, a major advantage that natural intelligences still have is that they work well for all perceptual problems together, solving them efficiently and coherently in an integrated manner. In order to capture some of these advantages in machine perception, we ask two questions: whether deep neural networks can learn universal image representations, useful not only for a single task but for all of them, and how the solutions to the different tasks can be integrated in this framework. We answer by proposing a new architecture, which we call multinet, in which not only are deep image features shared between tasks, but tasks can interact in a recurrent manner by encoding the results of their analysis in a common shared representation of the data. In this manner, we show that the performance of individual tasks on standard benchmarks can be improved first by sharing features between them and then, more significantly, by integrating their solutions in the common representation.
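To make the recurrent multi-task idea concrete, below is a minimal sketch of the kind of architecture the abstract describes: a shared encoder produces a common image representation, task-specific heads read from it, and each task's output is re-encoded back into the shared representation so that tasks can inform one another on the next iteration. All module names, layer sizes, the choice of tasks, and the number of refinement iterations are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of a recurrent multi-task ("multinet"-style) loop.
import torch
import torch.nn as nn

class MultiTaskRecurrentNet(nn.Module):
    def __init__(self, num_classes=20, feat_dim=64, iterations=2):
        super().__init__()
        self.iterations = iterations
        # Shared image representation ("universal" features).
        self.trunk = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        # Task-specific heads reading from the shared representation
        # (illustrative tasks: a class score map and a boundary map).
        self.cls_head = nn.Conv2d(feat_dim, num_classes, 1)
        self.bnd_head = nn.Conv2d(feat_dim, 1, 1)
        # Feedback encoders: map each task's output back into the shared
        # feature space so the tasks can interact recurrently.
        self.cls_feedback = nn.Conv2d(num_classes, feat_dim, 1)
        self.bnd_feedback = nn.Conv2d(1, feat_dim, 1)

    def forward(self, image):
        shared = self.trunk(image)
        outputs = []
        for _ in range(self.iterations):
            cls_out = self.cls_head(shared)
            bnd_out = self.bnd_head(shared)
            outputs.append((cls_out, bnd_out))
            # Integrate the tasks' current solutions into the shared representation.
            shared = shared + self.cls_feedback(cls_out) + self.bnd_feedback(bnd_out)
        return outputs  # predictions from every refinement iteration

# Usage: each iteration's predictions could be supervised with per-task losses.
net = MultiTaskRecurrentNet()
preds = net(torch.randn(1, 3, 32, 32))
```

The key design point illustrated here is the feedback path: rather than only sharing features, each task writes its current solution back into the common representation, which is what allows a later pass to refine one task using the others' results.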
