Poster
in
Workshop: INTERPOLATE — First Workshop on Interpolation Regularizers and Beyond

On Data Augmentation and Consistency-based Semi-supervised Relation Extraction

Komal Teru

Keywords: [ Relation Extraction ] [ consistency-training ] [ NLP ] [ Semi-Supervised Learning ] [ Manifold Mixup ]


Abstract:

To improve the sample efficiency of relation extraction (RE) models, semi-supervised learning (SSL) methods aim to leverage unlabelled data in addition to the limited labelled data. Recently, strong data augmentation combined with consistency-based semi-supervised learning has advanced the state of the art on several SSL tasks. However, adapting these methods to RE has been challenging because data augmentation is difficult for this task. In this work, we leverage recent advances in controlled text generation to perform high-quality data augmentation for RE. We further introduce small but significant changes to the model architecture that allow more training data to be generated by interpolating different data points in their latent space. These data augmentations, combined with consistency training, yield very competitive results for semi-supervised relation extraction on four benchmark datasets.
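The latent-space interpolation described above follows the general Manifold Mixup recipe: intermediate representations of two examples are mixed with a coefficient drawn from a Beta distribution, and their label distributions are mixed with the same coefficient. A minimal sketch of that mixing step is below; the function name, the NumPy formulation, and the assumption that the encoder's hidden states and soft labels are available as arrays are illustrative, not the paper's actual implementation.

```python
import numpy as np

def manifold_mixup(h1, h2, y1, y2, alpha=0.2, rng=None):
    """Mix two latent representations and their relation-label distributions.

    h1, h2: hidden vectors from an intermediate layer of a (hypothetical) encoder.
    y1, y2: soft or one-hot relation-label distributions for the two examples.
    alpha:  Beta-distribution concentration; small values keep mixes near an endpoint.
    """
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # mixing coefficient in [0, 1]
    h_mix = lam * h1 + (1 - lam) * h2     # interpolated latent point
    y_mix = lam * y1 + (1 - lam) * y2     # matching interpolated label target
    return h_mix, y_mix

# Usage: mix two examples' hidden states and labels into one synthetic pair.
h1, h2 = np.zeros(4), np.ones(4)
y1, y2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
h_mix, y_mix = manifold_mixup(h1, h2, y1, y2)
```

The mixed pair `(h_mix, y_mix)` is then fed through the remaining layers and trained with an ordinary classification loss, which is how interpolation produces extra training signal without any new annotated data.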
