

Poster in Workshop on neuro Causal and Symbolic AI (nCSI)

The Impact of Symbolic Representations on In-context Learning for Few-shot Reasoning

Hanlin Zhang · Yifan Zhang · Li Erran Li · Eric Xing


Abstract:

Pre-trained language models (LMs) have shown remarkable reasoning performance when given explanations for in-context learning. On the other hand, such reasoning tasks are usually presumed to be more approachable for symbolic programming. To make progress toward understanding in-context learning, we revisit neuro-symbolic approaches and design a model, LMLP, that learns from demonstrations containing logic rules and corresponding examples to iteratively reason over knowledge bases (KBs). This procedure establishes an explicit correspondence between the LM's outputs and predicates in the KB, recovering Prolog's backward-chaining algorithm. Comprehensive experiments systematically compare LMLP with natural-language counterparts such as "chain-of-thought" (CoT) prompting in deductive and inductive reasoning settings, demonstrating that LMLP achieves much better efficiency and length generalization across a variety of settings.
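To make the backward-chaining idea concrete, here is a minimal Python sketch of the loop the abstract describes: starting from a goal predicate, repeatedly pick a rule whose head matches the goal and recurse on its body until only KB facts remain. This is not the authors' implementation; the knowledge base, the `prove` routine, and the `select_rules` helper (which stands in for the step where LMLP's language model proposes the next predicate to expand) are all illustrative assumptions.

```python
# Toy KB: ground facts and a Horn-style rule (head :- body).
# Convention: uppercase single-letter strings are variables, lowercase are constants.
FACTS = {
    ("parent", "alice", "bob"),
    ("parent", "bob", "carol"),
}
RULES = [
    # grandparent(A, C) :- parent(A, B), parent(B, C).
    {"head": ("grandparent", "A", "C"),
     "body": [("parent", "A", "B"), ("parent", "B", "C")]},
]


def unify_fact(literal, fact, bindings):
    """Match a (possibly variable-containing) literal against a ground fact."""
    if len(literal) != len(fact) or literal[0] != fact[0]:
        return None
    new = dict(bindings)
    for term, const in zip(literal[1:], fact[1:]):
        if term.isupper():                      # variable: bind or check consistency
            if term in new and new[term] != const:
                return None
            new[term] = const
        elif term != const:                     # constant mismatch
            return None
    return new


def substitute(literal, bindings):
    """Apply current variable bindings to a literal."""
    return tuple(bindings.get(t, t) for t in literal)


def select_rules(goal):
    """Stand-in for the LM step: propose rules whose head could prove the goal."""
    return [r for r in RULES if r["head"][0] == goal[0]]


def prove(goals, bindings=None):
    """Backward chaining over a conjunction of goals; yields satisfying bindings."""
    bindings = bindings or {}
    if not goals:
        yield bindings
        return
    goal, rest = substitute(goals[0], bindings), list(goals[1:])
    # Case 1: the goal is directly supported by a KB fact.
    for fact in FACTS:
        b = unify_fact(goal, fact, bindings)
        if b is not None:
            yield from prove(rest, b)
    # Case 2: expand the goal with a rule and prove its body literals first.
    for rule in select_rules(goal):
        head_map = dict(zip(rule["head"][1:], goal[1:]))
        body = [substitute(lit, head_map) for lit in rule["body"]]
        yield from prove(body + rest, bindings)


if __name__ == "__main__":
    for answer in prove([("grandparent", "alice", "Z")]):
        print("grandparent(alice, %s)" % answer["Z"])   # -> grandparent(alice, carol)
```

In the sketch the rule selection is a trivial predicate-name match; in LMLP, as described above, that choice is made in-context by the prompted language model from demonstrations containing logic rules and examples.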
