

Oral in Workshop: Foundation Models for Decision Making

RF-POLICY: Rectified Flows are Computation-Adaptive Decision Makers

Xixi Hu · Bo Liu · Xingchao Liu · Qiang Liu


Abstract:

Diffusion-based imitation learning improves on Behavioral Cloning (BC) in multi-modal decision-making, but at the cost of significantly slower inference due to the recursive sampling of the diffusion process. In real-world scenarios, however, states that require multi-modal decision-making are rare, and the heavy computation of diffusion models is unnecessary in most cases. This observation motivates us to design efficient policy generators that allocate computation wisely across different contexts. To address this challenge, we propose RF-POLICY (Rectified Flow-Policy), an imitation learning algorithm based on Rectified Flow, a recent advance in flow-based generative modeling (Liu et al., 2022). RF-POLICY adopts probability flow ordinary differential equations (ODEs) for diverse policy generation, with the learning principle of following straight trajectories as much as possible. We uncover and leverage a surprising advantage of these flow-based models over previous diffusion models: their training objective indicates the uncertainty of a given state, and when the state is uni-modal, they automatically reduce to one-step generators because the probability flows admit straight lines. RF-POLICY is therefore naturally an adaptive decision maker, offering rapid inference without sacrificing diversity. Our comprehensive empirical evaluation shows that RF-POLICY, to the best of our knowledge, is the first algorithm to achieve strong performance across all dimensions, including success rate, behavioral diversity, and inference speed.
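For context, the standard Rectified Flow training objective from the cited work (Liu et al., 2022) fits a velocity field v_\theta to the straight-line interpolation between a noise sample x_0 and a data sample x_1 (here, an expert action); the abstract does not spell out RF-POLICY's exact formulation, so the state-conditioned form below is an assumption for illustration:

\min_\theta \; \mathbb{E}_{x_0 \sim \pi_0,\; (s, x_1) \sim \mathcal{D},\; t \sim \mathrm{Unif}[0,1]} \Big[ \big\| (x_1 - x_0) - v_\theta(x_t, t \mid s) \big\|^2 \Big], \qquad x_t = t\, x_1 + (1 - t)\, x_0.

In the idealized case where the target action is effectively deterministic given the state s, the optimal velocity points straight from x_0 to x_1, so a single Euler step of the probability flow ODE already recovers x_1; multi-step integration is only needed where the flow must curve to separate multiple modes, which is consistent with the abstract's claim that the training objective signals per-state uncertainty.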
