

Poster

Mapping Images to Scene Graphs with Permutation-Invariant Structured Prediction

Roei Herzig · Moshiko Raboh · Gal Chechik · Jonathan Berant · Amir Globerson

Room 517 AB #140

Keywords: [ Visual Scene Analysis and Interpretation ] [ Structured Prediction ]


Abstract:

Machine understanding of complex images is a key goal of artificial intelligence. One challenge underlying this task is that visual scenes contain multiple inter-related objects, and that global context plays an important role in interpreting the scene. A natural modeling framework for capturing such effects is structured prediction, which optimizes over complex labels while modeling within-label interactions. However, it is unclear what principles should guide the design of a structured prediction model that utilizes the power of deep learning components. Here we propose a design principle for such architectures that follows from a natural requirement of permutation invariance. We prove a necessary and sufficient characterization for architectures that satisfy this invariance, and discuss its implications for model design. Finally, we show that the resulting model achieves new state-of-the-art results on the Visual Genome scene graph labeling benchmark, outperforming all recent approaches.
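The following is a minimal sketch of the permutation-invariance requirement the abstract refers to; it is not the authors' architecture, and the names (`invariant_readout`, `W_phi`, `W_rho`) and the specific sum-pooling design are illustrative assumptions. It shows why aggregating per-object features with a symmetric operation makes the output independent of the order in which the objects are listed.

```python
# Illustrative sketch (NOT the paper's model): a readout over per-object
# features that is invariant to permuting the objects, because the only
# cross-object interaction is a symmetric sum.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-object feature matrix: n objects, d features each.
n, d, h = 5, 8, 16
Z = rng.normal(size=(n, d))

# Toy weights: phi encodes each object, rho decodes the pooled summary.
W_phi = rng.normal(size=(d, h))
W_rho = rng.normal(size=(h, 4))

def invariant_readout(Z):
    """phi -> sum over objects -> rho; invariant to row permutations of Z."""
    encoded = np.tanh(Z @ W_phi)      # per-object encoding, shape (n, h)
    pooled = encoded.sum(axis=0)      # symmetric aggregation, shape (h,)
    return np.tanh(pooled @ W_rho)    # global prediction, shape (4,)

perm = rng.permutation(n)
print(np.allclose(invariant_readout(Z), invariant_readout(Z[perm])))  # True
```

Reordering the objects leaves the prediction unchanged, which is the property the paper turns into a design principle and characterizes exactly for scene-graph prediction architectures.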
