

Poster
in
Workshop: New Frontiers in Graph Learning (GLFrontiers)

CuriousWalk: Enhancing Multi-Hop Reasoning in Graphs with Random Network Distillation

Varun Kausika · Saurabh Jha · Adya Jha · Amy Zhang · Michael Sury

Keywords: [ random network distillation ] [ Reinforcement Learning ] [ Multi-Hop Reasoning ]


Abstract:

Structured knowledge bases in the form of graphs often represent information incompletely and inaccurately. One popular method of densifying such graphs trains a reinforcement learning agent to traverse entities and relations sequentially, starting from a query entity and following a query relation until it reaches the desired answer entity. However, these agents are often limited by the environment's sparse reward structure and by their inability to find diverse paths from the question entity to the answer entity. In this paper, we address these issues by augmenting the agent with intrinsic rewards, which aid exploration and provide meaningful feedback at intermediate steps to push the agent in the right direction.
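The intrinsic-reward technique named in the title, Random Network Distillation (RND), can be sketched as follows: a fixed, randomly initialized target network maps each state to an embedding, and a predictor network is trained online to match that embedding; the predictor's error on a state is the novelty bonus, which shrinks for frequently visited states. A minimal NumPy sketch of this idea (network sizes, names, and the single-sample training loop are illustrative choices, not details from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_net(in_dim, hidden, out_dim, rng):
    # Two-layer MLP parameters (tanh hidden layer, linear output).
    return {
        "W1": rng.normal(0, 0.5, (in_dim, hidden)),
        "W2": rng.normal(0, 0.5, (hidden, out_dim)),
    }

def forward(net, x):
    h = np.tanh(x @ net["W1"])
    return h @ net["W2"]

def intrinsic_reward(target, predictor, state):
    # RND bonus: mean squared error between the frozen target
    # network's embedding and the trained predictor's embedding.
    err = forward(target, state) - forward(predictor, state)
    return float(np.mean(err ** 2))

def train_predictor(target, predictor, state, lr=0.1):
    # One gradient-descent step on the predictor's MSE toward
    # the (fixed) target network's output for this state.
    h = np.tanh(state @ predictor["W1"])
    pred = h @ predictor["W2"]
    tgt = forward(target, state)
    g_out = 2.0 * (pred - tgt) / pred.size          # dMSE/dpred
    g_h = (g_out @ predictor["W2"].T) * (1 - h**2)  # backprop through tanh
    predictor["W2"] -= lr * np.outer(h, g_out)
    predictor["W1"] -= lr * np.outer(state, g_h)

DIM, HID, OUT = 8, 16, 4
target = make_net(DIM, HID, OUT, rng)     # frozen random target network
predictor = make_net(DIM, HID, OUT, rng)  # predictor, trained online

state = rng.normal(size=DIM)
before = intrinsic_reward(target, predictor, state)
for _ in range(200):
    train_predictor(target, predictor, state)
after = intrinsic_reward(target, predictor, state)
# The bonus decays as a state is revisited, so unvisited
# (novel) states yield larger intrinsic rewards.
```

In a graph-walking agent, `state` would encode the current entity and path history, and the RND bonus would be added to the sparse extrinsic reward at each step to encourage diverse path exploration.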
