Tutorial
Generating Programmatic Solutions: Algorithms and Applications of Programmatic Reinforcement Learning and Code Generation
Levi Lelis · Xinyun Chen · Shao-Hua Sun
East Exhibition Hall A
In this tutorial, we will present recent advances in program synthesis that enable the generation of programmatic policies for reinforcement learning and of production software programs that satisfy user intent. The tutorial consists of two parts.

In the first part, we consider the reinforcement learning (RL) setting, where the goal is to learn a policy that observes an environment and acts optimally. Instead of representing policies with deep neural networks, programmatic RL (PRL) methods synthesize program policies structured in a human-readable domain-specific language. PRL reformulates the RL problem as learning to write a program that can be executed in an environment to maximize the return, potentially yielding improved interpretability and generalizability. We will cover different families of algorithms that rely on search and on learning, including those that use large language models to guide the search for programmatic policies.

In the second part, we consider code generation problems, where users provide their intent as input to a program synthesizer, which generates a program attempting to satisfy that intent. Driven by advances in deep learning, neural networks and large language models (LLMs), with their impressive ability to understand and reason over natural language and code, have revolutionized code generation. We will first discuss representative work on neural program synthesis, foundational techniques for developing LLMs for code generation, and emerging use cases of LLM-based coding agents. We will conclude this part of the tutorial with a discussion of the challenges and opportunities of LLMs for code generation.
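To make the PRL reformulation concrete, here is a minimal sketch, assuming a hypothetical one-dimensional grid world and a toy two-action DSL with a single if/else rule. The environment, DSL, and exhaustive search below are illustrative stand-ins for the richer languages and search- and learning-based methods the tutorial covers.

def make_policy(cond, then_act, else_act):
    """Compile the DSL rule 'if cond then then_act else else_act'."""
    def policy(pos, goal):
        return then_act if eval(cond) else else_act  # cond reads pos/goal
    return policy

def episode_return(policy, goal=5, steps=20):
    """Execute the program policy in the environment and sum its reward."""
    pos, total = 0, 0.0
    for _ in range(steps):
        pos += 1 if policy(pos, goal) == "right" else -1
        if pos == goal:
            total += 1.0
    return total

ACTIONS = ("left", "right")
CONDITIONS = ("pos < goal", "pos > goal")

# Exhaustive search over the tiny program space: a stand-in for the
# search- and learning-based PRL methods discussed in the tutorial.
best = max(
    ((c, a, b) for c in CONDITIONS for a in ACTIONS for b in ACTIONS),
    key=lambda prog: episode_return(make_policy(*prog)),
)
print("best program: if", best[0], "then", best[1], "else", best[2])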
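For the second part, here is a minimal sketch of intent-driven code generation. The generate function is a hypothetical stand-in for an LLM call, and the synthesizer accepts a candidate program only if it passes input/output tests derived from the user's intent; real LLM-based coding agents add richer prompting, execution feedback, and repair loops.

def generate(prompt):
    """Hypothetical stand-in for an LLM call; returns a fixed candidate."""
    return "def sort_desc(xs):\n    return sorted(xs, reverse=True)"

def synthesize(intent, tests):
    """Ask the model for a program; accept it only if it satisfies the
    user's intent, checked here via input/output examples."""
    code = generate(f"Write a Python function for: {intent}")
    namespace = {}
    exec(code, namespace)  # run the candidate program
    fn = next(v for k, v in namespace.items() if not k.startswith("__"))
    if all(fn(x) == y for x, y in tests):
        return code
    return None

program = synthesize(
    "sort a list in descending order",
    tests=[([3, 1, 2], [3, 2, 1]), ([], [])],
)
print(program)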