

Poster

A Neural Compositional Paradigm for Image Captioning

Bo Dai · Sanja Fidler · Dahua Lin

Room 517 AB #121

Keywords: [ Natural Language Processing ] [ Generative Models ] [ Computer Vision ]


Abstract:

Mainstream captioning models often follow a sequential structure to generate captions, leading to issues such as the introduction of irrelevant semantics, a lack of diversity in the generated captions, and inadequate generalization performance. In this paper, we present an alternative paradigm for image captioning that factorizes the captioning procedure into two stages: (1) extracting an explicit semantic representation from the given image; and (2) constructing the caption through a recursive compositional procedure in a bottom-up manner. Compared to conventional models, our paradigm better preserves semantic content through an explicit factorization of semantics and syntax. With the compositional generation procedure, caption construction follows a recursive structure, which naturally fits the properties of human language. Moreover, the proposed compositional procedure requires less training data, generalizes better, and yields more diverse captions.
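
The two-stage factorization can be pictured as follows. The sketch below is a minimal, hypothetical illustration of the paradigm's control flow only, not the authors' implementation: `extract_semantics`, `compose_pair`, and the greedy merge order are all assumed placeholders standing in for learned modules.

```python
def extract_semantics(image):
    """Stage 1: produce an explicit semantic representation of the
    image, e.g. a set of noun-phrase fragments. A real system would
    use a learned detector / phrase extractor here."""
    return ["a black dog", "a red frisbee", "green grass"]


def compose_pair(left, right):
    """Hypothetical connecting module: join two partial phrases into
    a longer phrase. In the paper this role is played by a learned
    compositional module, not string concatenation."""
    return f"{left} with {right}"


def generate_caption(image):
    """Stage 2: recursively merge fragments bottom-up until a single
    caption remains, mirroring the recursive structure of language."""
    fragments = extract_semantics(image)
    while len(fragments) > 1:
        # Greedily merge the first two fragments; a trained model
        # would instead score candidate pairs and pick the best merge.
        merged = compose_pair(fragments[0], fragments[1])
        fragments = [merged] + fragments[2:]
    return fragments[0]


print(generate_caption(image=None))
# -> "a black dog with a red frisbee with green grass"
```

Because semantics (the extracted fragments) and syntax (the merge procedure) are separated, each component can in principle be trained and evaluated independently, which is the source of the data-efficiency and diversity claims above.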
