Neuro-symbolic AI approaches have recently begun to generate significant interest, as urgency grows in the field around ideas for extending the strengths and successes of neural networks (or machine learning more broadly) with capabilities typically found in symbolic, or classical, AI, such as knowledge representation and reasoning. A general aim of this research is to create a new class of AI that is far more powerful than the sum of its parts, leveraging the best of both worlds while simultaneously addressing the shortcomings of each. Typical advantages sought include the ability to:
- Perform reasoning to solve more difficult problems
- Leverage explicit domain knowledge where available
- Learn with many fewer examples
- Provide understandable or verifiable decisions
These abilities are particularly relevant to the adoption of AI in a broader array of industrial and societal problems where data is scarce, the stakes are higher, and the scrutability of systems is important.
This research direction is at once a long-standing pursuit and a nascent one, and several perspectives will likely be needed to solve this grand challenge. In this workshop we explore several points of view, from both industry and academia, and highlight strong recent and emerging results that we believe provide new fundamental insights for the area and are beginning to demonstrate state-of-the-art results on both the theoretical and applied sides.
Sun 10:00 a.m. - 10:10 a.m. | Opening Remarks | David Cox
Sun 10:10 a.m. - 10:30 a.m. | Opening Remarks & Logical Neural Networks (Talk) | Alexander Gray
We introduce Logical Neural Networks, a new neuro-symbolic framework that creates a 1-to-1 correspondence between a modified form of the standard differentiable neuron and a logic gate in a weighted form of real-valued logic. The key modifications of the neuron model are (a) the ability to perform inference in the reverse direction, in order to carry out the equivalent of logical inference rules such as modus ponens within the message-passing paradigm of neural networks, and (b) learning with constraints on the weights to enforce logical behavior, plus a new kind of loss term, contradiction loss, which maximizes logical consistency in the face of imperfect and inconsistent knowledge. The result differs significantly from other neuro-symbolic ideas in that (1) the model is fully disentangled and understandable, since every neuron has a meaning; (2) the model can perform both classical logical deduction and its real-valued generalization (which allows for the representation and propagation of uncertainty) exactly, as special cases, rather than approximately as in nearly all other approaches; and (3) the model is compositional and modular, allowing fully reusable knowledge across tasks.
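The central idea above, a neuron acting as a logic gate in a weighted real-valued logic, can be illustrated in a few lines of code. The sketch below implements a weighted Łukasiewicz-style conjunction and its De Morgan dual; the function names, the beta offset, and the clamping are illustrative assumptions, not the exact parameterisation used in the LNN work.

```python
import numpy as np

def weighted_and(truths, weights, beta=1.0):
    """Weighted Lukasiewicz-style AND over real-valued truths in [0, 1].

    Each input's deviation from 'true' (1.0) is penalized in proportion
    to its weight, and the result is clipped back into [0, 1].
    """
    truths = np.asarray(truths, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(np.clip(beta - np.dot(weights, 1.0 - truths), 0.0, 1.0))

def weighted_or(truths, weights, beta=1.0):
    """Weighted OR obtained from the AND gate via De Morgan duality."""
    truths = np.asarray(truths, dtype=float)
    return 1.0 - weighted_and(1.0 - truths, weights, beta)

# An AND gate that trusts its first input more than its second.
print(weighted_and([0.9, 0.6], weights=[1.0, 0.5]))  # 0.7
print(weighted_or([0.9, 0.6], weights=[1.0, 0.5]))   # 1.0 (clipped)
```

Note that with unit weights and inputs restricted to exactly 0 or 1, this gate reduces to ordinary Boolean AND, which gives a flavor of the "classical deduction as an exact special case" property claimed in the abstract.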
Sun 10:30 a.m. - 10:45 a.m. | Real-valued reasoning (Talk) | Ronald Fagin
(Abstract shared with the Logical Neural Networks talk above.)
Sun 10:45 a.m. - 10:55 a.m. | Decision procedures for real-valued reasoning (Talk) | Ryan Riegel
(Abstract shared with the Logical Neural Networks talk above.)
Sun 10:55 a.m. - 11:00 a.m. | Q/A Session: Logical Neural Networks, Real-valued reasoning, Decision procedures for real-valued reasoning
Sun 11:00 a.m. - 11:13 a.m. | Project Deep Thinking: A Neuro-Symbolic Approach to Knowledge Base Question Answering (Talk) | Salim Roukos
Knowledge base question answering (KBQA) is an important task in natural language processing. Existing approaches face significant challenges, including complex question understanding, the necessity for reasoning, and the lack of large training datasets. In this work, we propose a semantic-parsing- and reasoning-based Deep Thinking Question Answering (DTQA) system that leverages (1) Abstract Meaning Representation (AMR) parses for task-independent question understanding; (2) a novel path-based approach to transform AMR parses into candidate logical queries that are aligned to the KB; (3) a neuro-symbolic reasoner called Logical Neural Network (LNN) that executes logical queries and reasons over KB facts to provide an answer; and (4) a system-of-systems approach, which integrates multiple reusable modules that are trained specifically for their individual tasks (e.g., semantic parsing, entity linking, and relationship linking) and do not require end-to-end training data. DTQA achieves state-of-the-art performance on QALD-9 and LC-QuAD 1.0. DTQA's novelty lies in its modular neuro-symbolic architecture and its task-general approach to interpreting natural language questions.
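The system-of-systems design described above can be read as a linear pipeline of independently built modules. The toy sketch below mocks that flow with hand-written stubs standing in for the trained components (AMR parser, entity/relation linkers, LNN reasoner); every function name, the toy knowledge base, and the query syntax are illustrative assumptions rather than DTQA's actual interfaces.

```python
def parse_to_amr(question: str) -> dict:
    """Stand-in for a task-independent AMR parser (hard-coded toy output)."""
    return {"root": "direct-01", "arg1": "film", "name": "Titanic"}

def link_entities_and_relations(amr: dict) -> dict:
    """Stand-in for entity and relation linking against the KB vocabulary."""
    return {"entity": "dbr:Titanic_(1997_film)", "relation": "dbo:director"}

def build_logical_query(linked: dict) -> str:
    """Path-based transformation of the parse into a candidate logical query."""
    return f"exists X . {linked['relation']}({linked['entity']}, X)"

def reason_over_kb(query: str, kb: dict) -> list:
    """Stand-in for the reasoner: here, a trivial lookup over toy facts."""
    return [obj for (rel, subj, obj) in kb["facts"]
            if query.startswith(f"exists X . {rel}({subj}")]

toy_kb = {"facts": [("dbo:director", "dbr:Titanic_(1997_film)", "dbr:James_Cameron")]}
amr = parse_to_amr("Who directed Titanic?")
linked = link_entities_and_relations(amr)
query = build_logical_query(linked)
print(reason_over_kb(query, toy_kb))  # ['dbr:James_Cameron']
```

The point of the modular arrangement is that each stub can be swapped for a separately trained model without retraining the rest of the pipeline end to end.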
Sun 11:13 a.m. - 11:26 a.m. | State-of-the-art Question Answering via a Neuro-symbolic Approach (Talk) | Pavan Kapanipathi
(Abstract shared with the Project Deep Thinking talk above.)
Sun 11:26 a.m. - 11:30 a.m. | Q/A Session: Project Deep Thinking and State-of-the-art Question Answering
Sun 11:30 a.m. - 12:05 p.m. | Doing for our robots what nature did for us (Talk) | Leslie Kaelbling
Sun 12:05 p.m. - 12:10 p.m. | Q/A Session: Doing for our robots what nature did for us
Sun 12:10 p.m. - 12:25 p.m. | Neuro-Symbolic Visual Concept Learning (Talk) | Jiajun Wu
Humans are capable of learning visual concepts by jointly understanding vision and language. Imagine that someone with no prior knowledge of colors is presented with images of red and green objects, paired with descriptions. They can easily identify the difference in the objects' visual appearance (in this case, color) and align it to the corresponding words. This intuition motivates the use of image-text pairs to facilitate automated visual concept learning and language acquisition. In this talk, I will present recent progress on neuro-symbolic models for visual concept learning and reasoning. These models learn visual concepts and their association with symbolic representations of language only by looking at images and reading paired natural language texts.
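As a loose illustration of learning concepts from paired images and words, the sketch below embeds toy "visual features" and concept words in a shared space and classifies new objects by nearest concept embedding. The feature vectors and concept names are invented for the example; real neuro-symbolic concept learners use learned image features and a symbolic program executor rather than this nearest-mean rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy visual features for objects that were described as "red" or "green".
red_objects = rng.normal(loc=[1.0, 0.0], scale=0.1, size=(20, 2))
green_objects = rng.normal(loc=[0.0, 1.0], scale=0.1, size=(20, 2))

# "Learn" each concept embedding as the mean feature of its paired images.
concepts = {"red": red_objects.mean(axis=0), "green": green_objects.mean(axis=0)}

def classify(features: np.ndarray) -> str:
    """Assign the concept whose embedding is nearest to the visual features."""
    return min(concepts, key=lambda c: np.linalg.norm(concepts[c] - features))

print(classify(np.array([0.9, 0.1])))  # 'red'
```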
Sun 12:25 p.m. - 12:30 p.m. | Q/A Session: Neuro-Symbolic Visual Concept Learning
Sun 12:30 p.m. - 12:45 p.m. | Combining Bayesian, neural network and symbolic approaches to intuitive physics (Talk) | Akash Srivastava
Humans are capable of reasoning about physical phenomena by inferring laws of physics from a very limited set of observations. The inferred laws can potentially depend on unobserved properties, such as mass, texture, or charge. This sample-efficient physical reasoning is considered a core domain of human common-sense knowledge and hints at the existence of a physics engine in the head. In this work, we propose a Bayesian symbolic framework for learning sample-efficient models of physical reasoning and prediction, which are of special interest in the field of intuitive physics. In our framework, the environment is represented by a top-down generative model over a collection of entities, with some known and some unknown properties treated as latent variables to capture uncertainty. The physics engine depends on physical laws, which are modeled as interpretable symbolic expressions and are assumed to be functions of the latent properties of the entities interacting under simple Newtonian physics. Learning the laws thus reduces to symbolic regression, and Bayesian inference methods are used to obtain the distribution of unobserved properties. These inference and regression steps are performed iteratively, following the expectation-maximization algorithm, to infer the unknown properties and use them to learn the laws from a very small set of observations. We demonstrate on three physics learning tasks that, compared to existing methods of learning physics, our proposed framework is more data-efficient, more accurate, and makes joint reasoning and learning possible.
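The alternating inference/regression loop described above can be sketched on a toy gravitational example: an E-like step re-estimates an unobserved mass given the current law, and an M-like step re-fits the law's constant and selects the law form from a small candidate set. The data, the two candidate expressions, and the closed-form updates below are illustrative assumptions, not the paper's actual Bayesian EM procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground truth: inverse-square attraction with one latent (unobserved) mass.
m_known, m_latent_true, G_true = 2.0, 5.0, 0.8
r = rng.uniform(1.0, 3.0, size=50)
force = G_true * m_known * m_latent_true / r**2 + rng.normal(0.0, 0.01, size=50)

# Tiny candidate set standing in for full symbolic regression.
candidates = {
    "G*m1*m2/r":    lambda G, m, r: G * m_known * m / r,
    "G*m1*m2/r**2": lambda G, m, r: G * m_known * m / r**2,
}

def fit_G(law, m):
    """M-like step: least-squares fit of the law's constant given the mass."""
    basis = law(1.0, m, r)
    return float(np.dot(basis, force) / np.dot(basis, basis))

def infer_mass(law, G):
    """E-like step: point estimate of the latent mass given the current law."""
    basis = law(G, 1.0, r)
    return float(np.dot(basis, force) / np.dot(basis, basis))

scores = {}
for name, law in candidates.items():
    m, G = 1.0, 1.0
    for _ in range(20):                      # alternate E-like and M-like steps
        G = fit_G(law, m)
        m = infer_mass(law, G)
    err = np.mean((law(G, m, r) - force) ** 2)
    scores[name] = err                       # only the product G*m is identifiable here
print(min(scores, key=scores.get))           # 'G*m1*m2/r**2'
```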
Sun 12:45 p.m. - 12:50 p.m. | Q/A Session: Combining Bayesian, neural network and symbolic approaches to intuitive physics
Sun 12:50 p.m. - 1:05 p.m. | Neurosymbolic Visual Reasoning (Talk) | Chuang Gan
In this talk, I will demonstrate how to combine the power of deep neural networks and classic symbolic AI to deal with challenges in video understanding. I will showcase the application of these methods to problems such as temporal and causal reasoning in videos and music generation from videos.
Sun 1:05 p.m. - 1:10 p.m. | Q/A Session: Neurosymbolic Visual Reasoning
Sun 1:10 p.m. - 1:25 p.m. | TRAIL: Reinforcement Learning Based Theorem Proving (Talk) | Achille Fokoue
Automated theorem provers have traditionally relied on manually tuned heuristics to guide how they perform proof search. Deep reinforcement learning has been proposed as a way to obviate the need for such heuristics; however, its deployment in automated theorem proving remains a challenge. We introduce TRAIL, a system that applies deep reinforcement learning to saturation-based theorem proving. TRAIL leverages (a) a novel neural representation of the state of a theorem prover and (b) a novel characterization of the inference selection process in terms of an attention-based action policy. We show through systematic analysis that these mechanisms allow TRAIL to significantly outperform previous reinforcement-learning-based theorem provers on two benchmark datasets for first-order logic automated theorem proving (proving around 15% more theorems).
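The learned inference selection idea above can be caricatured with a tiny policy-gradient loop: a softmax policy scores the currently available inferences and is nudged toward choices that lead to a quick "proof". The features, the fake proof environment, the reward, and the plain linear policy below are illustrative assumptions; TRAIL's actual state representation and attention-based policy are far richer.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_actions = 4, 6
weights = np.zeros(n_features)                 # linear policy parameters
action_feats = rng.normal(size=(n_actions, n_features))
good_action = 2                                # pretend this inference closes the proof

def pick_action(w):
    """Sample an available inference from a softmax over linear scores."""
    scores = action_feats @ w
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return rng.choice(n_actions, p=probs), probs

for episode in range(500):
    action, probs = pick_action(weights)
    reward = 1.0 if action == good_action else 0.0
    # REINFORCE gradient for a softmax policy: (one_hot - probs)^T features
    grad = (np.eye(n_actions)[action] - probs) @ action_feats
    weights += 0.5 * reward * grad

_, probs = pick_action(weights)
print(probs.round(2))   # probability mass should shift toward action 2
```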
Sun 1:25 p.m. - 1:30 p.m. | Q/A Session: TRAIL (Reinforcement Learning Based Theorem Proving)
Sun 1:30 p.m. - 1:45 p.m. | Challenges for Compositional Generalization (Talk) | Tim Klinger
Intuitively, compositional generalization is about combining things you know in new ways to solve a task, with little or no additional training. People, even young children, can do this very well, but neural networks struggle. This matters because the cost of generalizing without such a mechanism can be exponential in the number of variables in the task representation. Recent work evaluating neural networks for compositional generalization has mostly focused on natural language translation. In this presentation I'll review some of that work and discuss our experiments on composition in a purely geometric domain with no language; instead, concepts are specified in a first-order logical language, which has richer constraints than those imposed by context-free grammars. Our preliminary results indicate that neural nets do not generalize compositionally in this setting either.
Sun 1:45 p.m. - 1:50 p.m. | Q/A Session: Challenges for Compositional Generalization
Sun 1:50 p.m. - 1:55 p.m. | Closing Remarks | David Cox · Alexander Gray