We present Cross-lingual Open-Retrieval Answer Generation (CORA), the first unified many-to-many question answering (QA) model that can answer questions across many languages, even for ones without language-specific annotated data or knowledge sources. We introduce a new dense passage retrieval algorithm that is trained to retrieve documents across languages for a question. Combined with a multilingual autoregressive generation model, CORA answers directly in the target language without any translation or in-language retrieval modules as used in prior work. We propose an iterative training method that automatically extends annotated data available only in high-resource languages to low-resource ones. Our results show that CORA substantially outperforms the previous state of the art on multilingual open QA benchmarks across 26 languages, 9 of which are unseen during training. Our analyses show the significance of cross-lingual retrieval and generation in many languages, particularly under low-resource settings.
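The sketch below is a rough illustration of the pipeline the abstract describes: a multilingual dense retriever scores passages across languages for a question, and a multilingual generator then answers directly in the target language. The specific checkpoints (bert-base-multilingual-cased, google/mt5-small), the CLS pooling, and the prompt format are illustrative assumptions, not the authors' released implementation or trained models.

```python
# Minimal sketch of a CORA-style retrieve-then-generate pipeline.
# Assumptions: generic multilingual checkpoints, CLS pooling, and a simple
# "question: ... context: ..." prompt; these are not the paper's components.
import torch
from transformers import AutoTokenizer, AutoModel, MT5ForConditionalGeneration

# Multilingual dense retriever (mBERT-style encoder).
enc_tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")

# Multilingual answer generator (mT5-style, small variant for the sketch).
gen_tok = AutoTokenizer.from_pretrained("google/mt5-small")
generator = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")


def embed(texts):
    """Encode texts into dense vectors using the [CLS] hidden state."""
    batch = enc_tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        return encoder(**batch).last_hidden_state[:, 0]  # (n, hidden_dim)


def answer(question, passages, top_k=2):
    """Retrieve passages (possibly in other languages) and generate an answer."""
    q_vec = embed([question])                      # (1, d)
    p_vecs = embed(passages)                       # (n, d)
    scores = (q_vec @ p_vecs.T).squeeze(0)         # inner-product retrieval scores
    top = scores.topk(min(top_k, len(passages))).indices.tolist()
    context = " ".join(passages[i] for i in top)
    prompt = f"question: {question} context: {context}"
    ids = gen_tok(prompt, return_tensors="pt", truncation=True).input_ids
    out = generator.generate(ids, max_new_tokens=32)
    return gen_tok.decode(out[0], skip_special_tokens=True)
```

Note that an untuned mT5 checkpoint will not produce reliable answers without the paper's training procedure (including the iterative data extension); the sketch only shows the cross-lingual data flow from question to retrieved passages to a generated answer.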
Author Information
Akari Asai (University of Washington)
Xinyan Yu (Department of Computer Science, University of Washington)
Jungo Kasai (Paul G. Allen School of Computer Science & Engineering, University of Washington)
Hanna Hajishirzi (University of Washington)