
Poster

Generating Informative and Diverse Conversational Responses via Adversarial Information Maximization

Yizhe Zhang · Michel Galley · Jianfeng Gao · Zhe Gan · Xiujun Li · Chris Brockett · Bill Dolan

Room 210 #94

Keywords: [ Dialog- or Communication-Based Learning ] [ Natural Language Processing ]


Abstract:

Responses generated by neural conversational models tend to lack informativeness and diversity. We present Adversarial Information Maximization (AIM), an adversarial learning framework that addresses these two related but distinct problems. To foster response diversity, we leverage adversarial training that allows distributional matching of synthetic and real responses. To improve informativeness, our framework explicitly optimizes a variational lower bound on pairwise mutual information between query and response. Empirical results from automatic and human evaluations demonstrate that our methods significantly boost informativeness and diversity.
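The mutual-information objective mentioned above can be illustrated with a toy discrete example. The sketch below uses the classic Barber–Agakov variational lower bound, I(Q;R) ≥ H(R) + E[log q(R|Q)], which is one standard way to lower-bound query–response mutual information; it is an assumption for illustration, not the paper's actual implementation, and all variable names here are hypothetical.

```python
import numpy as np

# Toy illustration (not the AIM codebase) of a variational lower bound on
# mutual information between a query Q and a response R:
#   I(Q; R) >= H(R) + E_{p(q,r)}[log q_dec(r | q)]   (Barber-Agakov bound)
# Both variables are discrete so everything can be computed exactly.

rng = np.random.default_rng(0)

# A small joint distribution p(q, r) over 3 queries x 4 responses.
p_joint = rng.random((3, 4))
p_joint /= p_joint.sum()

p_q = p_joint.sum(axis=1, keepdims=True)   # marginal p(q), shape (3, 1)
p_r = p_joint.sum(axis=0)                  # marginal p(r), shape (4,)
p_r_given_q = p_joint / p_q                # true conditional p(r | q)

# Exact mutual information I(Q; R) and response entropy H(R).
mi = np.sum(p_joint * np.log(p_joint / (p_q * p_r[None, :])))
h_r = -np.sum(p_r * np.log(p_r))

def variational_bound(q_dec):
    """H(R) + E_{p(q,r)}[log q_dec(r|q)] -- a lower bound on I(Q;R).

    The gap equals E_q[KL(p(r|q) || q_dec(r|q))] >= 0, so any decoder
    gives a valid lower bound, and maximizing it tightens the estimate.
    """
    return h_r + np.sum(p_joint * np.log(q_dec))

# An imperfect decoder that ignores the query: the bound holds with slack.
q_uniform = np.full((3, 4), 0.25)
assert variational_bound(q_uniform) <= mi + 1e-9

# The bound is tight when the decoder equals the true conditional.
assert np.isclose(variational_bound(p_r_given_q), mi)
```

In a neural setting the decoder `q_dec` would be a learned backward model, and maximizing the bound pushes generated responses to carry information about their queries, which is the informativeness effect the abstract describes.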