
WebQA Competition + Q&A
Yingshan Chang · Yonatan Bisk · Mridu Narang · Levi Melnick · Jianfeng Gao · Hisami Suzuki · Guihong Cao

Tue Dec 07 11:25 AM -- 11:45 AM (PST)
Event URL: https://webqna.github.io

WebQA is a new benchmark for multimodal, multihop reasoning in which systems are presented with the same style of data humans encounter when searching the web: text snippets and images. Given a question, a system must first identify which candidates in a pool potentially inform the answer, then aggregate information from the selected candidates and reason over it to generate an answer in natural language. Each datum is a question paired with a series of potentially long snippets or images that serve as "knowledge carriers" over which to reason. Systems are evaluated on both supporting-fact retrieval and answer generation, measuring correctness and interpretability. To demonstrate multihop, multimodal reasoning ability, models should be able to 1) understand and represent knowledge from different modalities, 2) identify and aggregate relevant knowledge fragments scattered across multiple sources, and 3) make inferences and generate answers in natural language.
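The two-stage pipeline described above (retrieve supporting candidates, then generate an answer) can be sketched as follows. This is a minimal illustrative skeleton, not the official baseline: the function names, the candidate dictionary layout, and the toy word-overlap relevance scorer are all assumptions, and a real system would use learned multimodal retrieval and generation models in their place.

```python
# Illustrative sketch of a WebQA-style two-stage pipeline.
# The word-overlap scorer is a toy stand-in for a learned retriever.

def retrieve_sources(question, candidates, threshold=1):
    """Stage 1: select candidates that potentially inform the answer.
    Toy relevance score: word overlap between question and candidate text."""
    q_words = set(question.lower().split())
    selected = []
    for cand in candidates:
        overlap = len(q_words & set(cand["text"].lower().split()))
        if overlap >= threshold:
            selected.append(cand)
    return selected

def generate_answer(question, sources):
    """Stage 2: aggregate the selected sources into a natural-language answer.
    A real system would run a multimodal generator here."""
    evidence = " ".join(s["text"] for s in sources)
    return f"Answer based on {len(sources)} source(s): {evidence}"

# Each candidate is a "knowledge carrier": an image caption or a text snippet.
candidates = [
    {"id": "img-1", "modality": "image", "text": "the Eiffel Tower at night"},
    {"id": "txt-1", "modality": "text", "text": "the tower is 330 metres tall"},
    {"id": "txt-2", "modality": "text", "text": "penguins live in Antarctica"},
]

question = "How tall is the Eiffel Tower?"
sources = retrieve_sources(question, candidates)   # keeps img-1 and txt-1
answer = generate_answer(question, sources)
```

Evaluating both stages separately, as the benchmark does, rewards systems that answer correctly *and* cite the right evidence, rather than guessing from spurious candidates.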

Author Information

Yingshan Chang (Carnegie Mellon University)
Yonatan Bisk (Carnegie Mellon University)
Mridu Narang (Microsoft Corporation)
Levi Melnick (Microsoft)
Jianfeng Gao (Microsoft Research, Redmond, WA)
Hisami Suzuki (Microsoft Corporation)
Guihong Cao (Microsoft Corporation)
