Workshop
Let's Discuss: Learning Methods for Dialogue
Hal Daumé III · Paul Mineiro · Amanda Stent · Jason E Weston

Fri Dec 09 11:00 PM -- 09:30 AM (PST) @ Hilton Diag. Mar, Blrm. C
Event URL: http://letsdiscussnips2016.weebly.com/

Humans conversing naturally with machines is a staple of science fiction. Building agents capable of mutually coordinating their states and actions via communication, in conjunction with human agents, would be one of the greatest engineering feats of human history. In addition to the tremendous economic potential of this technology, the ability to converse appears intimately related to the overall goal of AI.

Although dialogue has been an active area within the linguistics and NLP communities for decades, the recent wave of optimism in the machine learning community has inspired increased interest from researchers, companies, and foundations. The NLP community has enthusiastically embraced and built upon neural information processing systems, resulting in substantial relevant activity published outside of NIPS. A forum for increased interaction (dialogue!) with these communities at NIPS will accelerate creativity and progress.

We plan to focus on the following issues:

1. How to be data-driven
a. What are tractable and useful intermediate tasks on the path to truly conversant machines? How can we leverage existing benchmark tasks and competitions? What design criteria would we like to see for the next set of benchmark tasks and competitions?
b. How do we assess performance? What can and cannot be done with offline evaluation on fixed data sets? How can we facilitate development of these offline evaluation tasks in the public domain? What is the role of online evaluation as a benchmark, and how would we make it accessible to the general community? Is there a role for simulated environments, or tasks where machines communicate solely with each other?
2. How to build applications
a. What unexpected problem aspects arise in situated systems? human-hybrid systems? systems learning from adversarial inputs?
b. Can we divide and conquer? Do we need an irreducible end-to-end system, or can we define modules with abstractions that do not leak?
c. How do we ease the burden on the human designer of specifying or bootstrapping the system?
3. Architectural and algorithmic innovation
a. What are the associated requisite capabilities for learning architectures, and where are the deficiencies in our current architectures? How can we leverage recent advances in reasoning, attention, and memory architectures? How can we beneficially incorporate linguistic knowledge into our architectures?
b. How far can we get with current optimization techniques? To learn requisite competencies, do we need advances in discrete optimization? curriculum learning? (inverse) reinforcement learning?

Fri 11:20 p.m. - 11:30 p.m.
Opening (Overview)
Fri 11:25 p.m. - 11:25 p.m.

This set of talks and panel session is organized around the theme of building end-to-end dialog systems.

Fri 11:30 p.m. - 12:10 a.m.
Evolvable Dialogue Systems (Invited Talk)
Milica Gasic
Sat 12:10 a.m. - 12:50 a.m.
The Missing Pieces for a Full-Fledged Dialog Agent (Invited Talk)
Marco Baroni
Sat 12:50 a.m. - 1:30 a.m.
Authoring End-to-End Dialog Systems (Invited Talk)
Jason Williams
Sat 1:30 a.m. - 2:00 a.m.
Coffee Break (Break)

Workshop coffee break.

Sat 2:00 a.m. - 2:20 a.m.
Panel Session 1 (Panel Session)
Sat 2:20 a.m. - 4:00 a.m.
Lunch (Break)
Sat 3:55 a.m. - 3:55 a.m.

This set of talks and panel session is organized around the theme of leveraging linguistics to build, improve, and understand dialog systems.

Sat 4:00 a.m. - 4:40 a.m.
Coordination and Learning in Human Dialogue (Invited Talk)
Raquel Fernández
Sat 4:40 a.m. - 5:20 a.m.
Domain Adaptation using Linguistic Knowledge (Invited Talk)
Nina Dethlefs
Sat 5:20 a.m. - 5:40 a.m.
Bootstrapping Incremental Dialogue Systems: Using Linguistic Knowledge to Learn from Minimal Data (Contributed Talk)
Dimitris Kalatzis, Arash Eshghi
Sat 5:40 a.m. - 6:00 a.m.
Multi-Agent Communication and the Emergence of (Natural) Language (Contributed Talk)
Angeliki Lazaridou
Sat 6:00 a.m. - 6:30 a.m.
Coffee Break (Break)

Workshop coffee break.

Sat 6:30 a.m. - 6:50 a.m.
Panel Session 2 (Panel Session)
Sat 6:45 a.m. - 6:45 a.m.

This set of talks and panel session is organized around the theme of modeling dialogue using machine learning techniques: in particular, what architectures to use, and how to evaluate performance.

Sat 6:50 a.m. - 7:30 a.m.

Traditional dialog systems used in goal-oriented applications require a lot of domain-specific handcrafting, which hinders scaling up to new domains. End-to-end dialog systems, in which all components are trained from the dialogs themselves, escape this limitation. But the encouraging successes recently obtained in chit-chat dialog may not carry over to goal-oriented settings. In this talk, we will discuss how to evaluate end-to-end goal-oriented dialog systems in a robust and reproducible manner. We will also present a new testbed designed to that end. On this new dataset, we show that an end-to-end dialog system based on Memory Networks can reach promising, yet imperfect, performance and learn to perform non-trivial operations. We confirm those results by comparing our system to a hand-crafted slot-filling baseline on data from the second Dialog State Tracking Challenge (Henderson et al., 2014a) and show similar result patterns on data extracted from an online concierge service.

Antoine Bordes
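
The evaluation question raised above (robust, reproducible offline evaluation of goal-oriented dialog) is often operationalized as per-response and per-dialog accuracy over a fixed candidate set. The sketch below is a rough illustration of that protocol only, not the testbed's released code; the model.rank(context, candidates) interface and the data layout are assumptions made for the example.

    # Illustrative sketch of offline evaluation for goal-oriented dialog:
    # per-response accuracy (correct next utterance picked from candidates)
    # and per-dialog accuracy (every turn in a dialog answered correctly).
    # The model interface and data layout are assumed, not prescribed.
    def evaluate(model, dialogs):
        resp_correct = resp_total = 0
        dialog_correct = 0
        for dialog in dialogs:                            # dialog: list of turns
            all_ok = True
            for context, candidates, gold_idx in dialog:
                pred_idx = model.rank(context, candidates)  # assumed interface
                resp_total += 1
                if pred_idx == gold_idx:
                    resp_correct += 1
                else:
                    all_ok = False
            dialog_correct += int(all_ok)
        return resp_correct / resp_total, dialog_correct / len(dialogs)

Reporting both numbers separates near-misses from dialogs the system completes end to end, which is what makes such offline benchmarks comparable across papers.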
Sat 7:30 a.m. - 8:10 a.m.

Non-goal-oriented dialogue systems can provide users with key information, support decision making, facilitate user action, or simply chat for the sake of having some company. The key questions are: how do we define what it means to be an engaging conversationalist, and how can we measure this automatically? This talk will take a multidisciplinary approach to evaluating data-driven, non-goal-oriented dialogue systems, taking insights from the gaming industry, HCI, psychology and cognitive theory.

Helen Hastie
Sat 8:10 a.m. - 8:30 a.m.

In this paper, we introduce a novel memory network model using an end-to-end differentiable memory access regulation mechanism. It is inspired by the current progress on the connection short-cutting principle in the field of computer vision. We name it Gated End-to-End Memory Network (GMemN2N). From the machine learning perspective, this new capability is learned in an end-to-end fashion without the use of any additional supervision signal, which is, as far as our knowledge goes, the first of its kind. Our experiments show improvements on all of the Dialog bAbI tasks, particularly on the real human-bot conversation-based Dialog State Tracking Challenge (DSTC2) dataset. This method does not require the use of any domain knowledge. Our model sets a new state of the art for end-to-end trainable dialog systems on this dataset.

Julien Perez
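
As a concrete reading of the memory access regulation mechanism described above: in a standard end-to-end memory network, each hop adds the memory read o to the controller state u; the gated variant instead interpolates between them with a learned sigmoid gate. The following is a minimal sketch under assumed parameter names and shapes (A, C, W_T, b_T), not the authors' implementation.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def gated_memory_hop(u, memories, A, C, W_T, b_T):
        # One gated memory hop (sketch).
        #   u        : (d,)     current controller state
        #   memories : (n, k)   raw memory slots (e.g. embedded utterances)
        #   A, C     : (d, k)   input/output memory embedding matrices
        #   W_T, b_T : (d, d), (d,)  gate parameters
        m = memories @ A.T                 # input memory representations, (n, d)
        c = memories @ C.T                 # output memory representations, (n, d)
        scores = m @ u
        p = np.exp(scores - scores.max())  # softmax attention over memories
        p /= p.sum()
        o = p @ c                          # memory read, (d,)
        t = sigmoid(W_T @ u + b_T)         # gate in [0, 1]^d
        return o * t + u * (1.0 - t)       # gated update instead of plain u + o

The gate plays the same role as the transform gate in highway networks: when it is near zero the hop leaves the state untouched, so the model can learn how much memory access a given dialog actually needs.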
Sat 8:30 a.m. - 8:50 a.m.

Researchers have recently started investigating deep neural networks for dialogue applications. In particular, generative sequence-to-sequence (Seq2Seq) models have shown promising results for unstructured tasks, such as word-level dialogue response generation. The hope is that such models will be able to leverage massive amounts of data to learn meaningful natural language representations and response generation strategies, while requiring a minimum amount of domain knowledge and hand-crafting. We review recently proposed models based on generative encoder-decoder neural network architectures, and show that these models are better able to incorporate long-term dialogue history, to model uncertainty and ambiguity in dialogue, and to generate responses with high-level compositional structure.

Iulian Vlad Serban
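
For readers less familiar with the encoder-decoder family reviewed above, a minimal word-level Seq2Seq skeleton is sketched below, assuming PyTorch; the vocabulary size, dimensions, and teacher forcing are illustrative choices, and the hierarchical and latent-variable models discussed in the talk add structure on top of this skeleton rather than replacing it.

    import torch
    import torch.nn as nn

    class Seq2Seq(nn.Module):
        # Minimal GRU encoder-decoder for word-level response generation (sketch).
        def __init__(self, vocab_size=10000, emb_dim=128, hid_dim=256):
            super().__init__()
            self.emb = nn.Embedding(vocab_size, emb_dim)
            self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
            self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
            self.out = nn.Linear(hid_dim, vocab_size)

        def forward(self, context_ids, response_ids):
            # Encode the dialogue context into a final hidden state.
            _, h = self.encoder(self.emb(context_ids))
            # Decode the response conditioned on that state (teacher forcing).
            dec_out, _ = self.decoder(self.emb(response_ids), h)
            return self.out(dec_out)  # per-step logits over the vocabulary

Training minimizes cross-entropy between these logits and the next-word targets; the long-term history and uncertainty modeling mentioned in the abstract come from richer encoders and latent variables layered on this basic shape.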
Sat 8:50 a.m. - 9:00 a.m.
Mini Break (Break)
Sat 9:00 a.m. - 9:20 a.m.
Panel Session 3 (Panel Session)
Sat 9:20 a.m. - 9:20 a.m.
La Fin (Closing)

Author Information

Hal Daumé III (Microsoft Research & University of Maryland)

Hal Daumé III wields a professor appointment in Computer Science and Language Science at the University of Maryland, and spends time as a principal researcher in the machine learning group and fairness group at Microsoft Research in New York City. He and his wonderful advisees study questions related to how to get machines to become more adept at human language, by developing models and algorithms that allow them to learn from data. The two major questions that really drive their research these days are: (1) how can we get computers to learn language through natural interaction with people/users? and (2) how can we do this in a way that promotes fairness, transparency and explainability in the learned models?

Paul Mineiro (Microsoft)
Amanda Stent (Yahoo, Inc.)
Jason E Weston (Facebook AI Research)

Jason Weston received a PhD (2000) from Royal Holloway, University of London, under the supervision of Vladimir Vapnik. From 2000 to 2002, he was a researcher at Biowulf Technologies, New York, applying machine learning to bioinformatics. From 2002 to 2003 he was a research scientist at the Max Planck Institute for Biological Cybernetics, Tuebingen, Germany. From 2004 to June 2009 he was a research staff member at NEC Labs America, Princeton. From July 2009 onwards he has been a research scientist at Google, New York. Jason Weston's current research focuses on various aspects of statistical machine learning and its applications, particularly in text and images.
