

Poster in Affinity Workshop: WiML Workshop 1

Syntax-enhanced Dialogue Summarization using Syntax-aware information

Seolhwa Lee · Kisu Yang · Chanjun Park · Heuiseok Lim


Abstract:

During the COVID-19 pandemic, virtual conversation tools such as Zoom have become indispensable. With this surge in usage, dialogue summarization has emerged as a means of condensing conversations. Dialogue summarization poses two main challenges: first, multiple speakers with different textual styles participate in a dialogue, and second, dialogue structures are informal (e.g., slang and colloquial expressions). To address these challenges, we investigated the relationship between textual styles and representative attributes of utterances. [1] proposed that the types of sentences produced by speakers (e.g., the intent or role of a speaker) are associated with different syntactic structures, such as part-of-speech (POS) tags. This follows from the observation that different speaker roles are characterized by different syntactic structures. In essence, each speaker's uttered text has a unique representation, much like a voiceprint (i.e., identity information derived from the human voice [2]). Based on this prior research, we began our study with the assumption that, because syntactic structures tend to be representative of the sentences a speaker utters, they can help distinguish the different styles of utterances. In this work, we propose a novel abstractive dialogue summarization model for daily conversation settings, which are characterized by an informal style of text; the model employs multi-task learning to learn linguistic information and dialogue summarization simultaneously.
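To make the multi-task setup concrete, the sketch below shows one common way such a model can be structured: a shared encoder over the dialogue, a decoder head for abstractive summarization, and a token-level POS-tagging head trained jointly with a weighted sum of losses. This is a minimal illustrative sketch, not the authors' implementation; all module names, dimensions, and the loss weighting are assumptions.

```python
# Hypothetical sketch of multi-task learning: dialogue summarization + POS tagging.
import torch
import torch.nn as nn

class SyntaxAwareSummarizer(nn.Module):
    def __init__(self, vocab_size=32000, num_pos_tags=45, d_model=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=6)
        dec_layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(dec_layer, num_layers=6)
        self.summary_head = nn.Linear(d_model, vocab_size)  # token logits for the summary
        self.pos_head = nn.Linear(d_model, num_pos_tags)    # POS-tag logits per dialogue token

    def forward(self, src_ids, tgt_ids):
        src = self.encoder(self.embed(src_ids))              # shared dialogue representation
        dec = self.decoder(self.embed(tgt_ids), src)
        return self.summary_head(dec), self.pos_head(src)

model = SyntaxAwareSummarizer()
src = torch.randint(0, 32000, (2, 64))  # dialogue tokens
tgt = torch.randint(0, 32000, (2, 32))  # summary tokens (teacher forcing)
pos = torch.randint(0, 45, (2, 64))     # gold POS tags for the dialogue tokens

sum_logits, pos_logits = model(src, tgt)
loss = nn.functional.cross_entropy(sum_logits.transpose(1, 2), tgt) \
     + 0.5 * nn.functional.cross_entropy(pos_logits.transpose(1, 2), pos)  # weighted joint loss
loss.backward()
```

The key design point is that both heads share the encoder, so gradients from the POS-tagging objective shape the same representations used for summarization; the 0.5 weight on the auxiliary loss is an arbitrary choice for illustration.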
