Knowledge base question answering (KBQA) is an important task in Natural Language Processing. Existing approaches face significant challenges, including the need for complex question understanding and reasoning, and the lack of large end-to-end training datasets. In this work, we propose a semantic parsing and reasoning-based Deep Thinking Question Answering (DTQA) system that leverages (1) Abstract Meaning Representation (AMR) parses for task-independent question understanding; (2) a novel path-based approach that transforms AMR parses into candidate logical queries aligned to the KB; (3) a neuro-symbolic reasoner called Logical Neural Network (LNN) that executes logical queries and reasons over KB facts to provide an answer; and (4) a system-of-systems approach that integrates multiple reusable modules, each trained specifically for its individual task (e.g., semantic parsing, entity linking, and relation linking), with no need for end-to-end training data. DTQA achieves state-of-the-art performance on QALD-9 and LC-QuAD 1.0. DTQA's novelty lies in its modular neuro-symbolic architecture and its task-general approach to interpreting natural language questions.
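The modular pipeline described above can be illustrated with a minimal sketch. Everything here is a toy assumption for exposition, not the actual DTQA implementation: the module names (entity_linker, relation_linker), the tiny knowledge base, and the string-matching stubs all stand in for the real, independently trained components.

```python
# Toy sketch of a DTQA-style modular KBQA pipeline.
# Each stage is an independently maintained module; composing them
# requires no end-to-end training data. All names and the KB below
# are illustrative assumptions, not DTQA's real components.

# Toy KB of (subject, relation, object) facts.
TOY_KB = {
    ("Barack_Obama", "birthPlace", "Honolulu"),
    ("Honolulu", "country", "United_States"),
}

def entity_linker(question):
    """Link surface mentions to KB entities (stubbed lookup)."""
    mentions = {"Obama": "Barack_Obama", "Honolulu": "Honolulu"}
    return [ent for mention, ent in mentions.items() if mention in question]

def relation_linker(question):
    """Link question predicates to KB relations (stubbed lookup)."""
    patterns = {"born": "birthPlace", "country": "country"}
    return [rel for pat, rel in patterns.items() if pat in question]

def build_queries(entities, relations):
    """Candidate logical queries: (subject, relation, ?answer) patterns."""
    return [(e, r) for e in entities for r in relations]

def execute(queries, kb):
    """Symbolic execution: match candidate patterns against KB facts."""
    answers = set()
    for subj, rel in queries:
        for s, r, obj in kb:
            if s == subj and r == rel:
                answers.add(obj)
    return answers

def answer(question, kb=TOY_KB):
    """Compose the modules: link entities and relations, then query the KB."""
    entities = entity_linker(question)
    relations = relation_linker(question)
    return execute(build_queries(entities, relations), kb)
```

For example, `answer("Where was Obama born?")` links "Obama" to Barack_Obama and "born" to birthPlace, then matches the pattern against the KB to return `{"Honolulu"}`. In the real system, the stubbed lookups are replaced by learned modules, the candidate queries come from paths over the AMR parse, and execution is handled by the LNN reasoner.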
Salim Roukos (IBM)
Salim Roukos is an IBM Fellow working on multilingual NLP, using machine (and deep) learning models for language translation, information extraction, and language understanding.