Poster
First Order Constrained Optimization in Policy Space
Yiming Zhang · Quan Vuong · Keith Ross

Mon Dec 07 09:00 PM -- 11:00 PM (PST) @ Poster Session 0 #87

In reinforcement learning, an agent attempts to learn high-performing behaviors by interacting with the environment; such behaviors are typically quantified through a reward function. However, some aspects of behavior, such as those deemed unsafe and to be avoided, are best captured through constraints. We propose a novel approach called First Order Constrained Optimization in Policy Space (FOCOPS), which maximizes an agent's overall reward while ensuring the agent satisfies a set of cost constraints. Using data generated from the current policy, FOCOPS first finds the optimal update policy by solving a constrained optimization problem in the nonparameterized policy space. FOCOPS then projects the update policy back into the parametric policy space. Our approach has an approximate upper bound on worst-case constraint violation throughout training and is first-order in nature, and therefore simple to implement. We provide empirical evidence that our simple approach achieves better performance on a set of constrained robotic locomotion tasks.
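The two-step update described in the abstract, in which FOCOPS first solves for the optimal update policy in nonparameterized policy space and then projects it back onto the parametric policy class, can be sketched as a simple first-order loss. The code below is an illustrative sketch only, not the authors' implementation: the diagonal-Gaussian policy, the helper names, and the hyperparameter values (lam, nu, kl_clip) are assumptions chosen for readability.

```python
# Illustrative sketch of the FOCOPS projection step (not the authors' code).
# Assumes a diagonal-Gaussian policy; names and hyperparameters are placeholders.
import torch
from torch.distributions import Normal, kl_divergence


def focops_policy_loss(dist, old_dist, actions, adv_reward, adv_cost,
                       nu, lam=1.5, kl_clip=0.02):
    """First-order projection of the parametric policy toward the optimal
    nonparameterized update policy, using reward and cost advantages."""
    logp = dist.log_prob(actions).sum(-1)
    with torch.no_grad():
        logp_old = old_dist.log_prob(actions).sum(-1)
    ratio = torch.exp(logp - logp_old)               # pi_theta / pi_theta_k

    kl = kl_divergence(dist, old_dist).sum(-1)       # per-state KL(pi_theta || pi_theta_k)

    # The KL term keeps pi_theta close to the data-collecting policy; the
    # advantage-weighted ratio term pushes it toward higher reward and lower
    # cost. States whose KL already exceeds the trust region are masked out.
    per_state = kl - (1.0 / lam) * ratio * (adv_reward - nu * adv_cost)
    mask = (kl.detach() <= kl_clip).float()
    return (per_state * mask).mean()


# Minimal usage example with a single batch of synthetic data.
if __name__ == "__main__":
    mean = torch.zeros(4, 2, requires_grad=True)
    dist, old_dist = Normal(mean, 1.0), Normal(torch.zeros(4, 2), 1.0)
    actions = old_dist.sample()
    loss = focops_policy_loss(dist, old_dist, actions,
                              adv_reward=torch.randn(4),
                              adv_cost=torch.randn(4),
                              nu=0.1)
    loss.backward()
```

In this sketch, nu plays the role of the cost multiplier that would be updated separately from cost estimates, and the per-state KL mask stands in for the paper's trust-region clipping; both are shown only to convey the structure of the first-order update.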

Author Information

Yiming Zhang (New York University)
Quan Vuong (University of California, San Diego)
Keith Ross (NYU Shanghai)

More from the Same Authors

  • 2022 : CW-ERM: Improving Autonomous Driving Planning with Closed-loop Weighted Empirical Risk Minimization »
    Eesha Kumar · Yiming Zhang · Stefano Pini · Simon Stent · Ana Sofia Rufino Ferreira · Sergey Zagoruyko · Christian Perone
  • 2022 : Aggressive Q-Learning with Ensembles: Achieving Both High Sample Efficiency and High Asymptotic Performance »
    Yanqiu Wu · Xinyue Chen · Che Wang · Yiming Zhang · Keith Ross
  • 2022 Poster: VRL3: A Data-Driven Framework for Visual Deep Reinforcement Learning »
    Che Wang · Xufang Luo · Keith Ross · Dongsheng Li
  • 2020 Poster: BAIL: Best-Action Imitation Learning for Batch Deep Reinforcement Learning »
    Xinyue Chen · Zijian Zhou · Zheng Wang · Che Wang · Yanqiu Wu · Keith Ross
  • 2018 : Poster Session 1 + Coffee »
    Tom Van de Wiele · Rui Zhao · J. Fernando Hernandez-Garcia · Fabio Pardo · Xian Yeow Lee · Xiaolin Andy Li · Marcin Andrychowicz · Jie Tang · Suraj Nair · Juhyeon Lee · Cédric Colas · S. M. Ali Eslami · Yen-Chen Wu · Stephen McAleer · Ryan Julian · Yang Xue · Matthia Sabatelli · Pranav Shyam · Alexandros Kalousis · Giovanni Montana · Emanuele Pesce · Felix Leibfried · Zhanpeng He · Chunxiao Liu · Yanjun Li · Yoshihide Sawada · Alexander Pashevich · Tejas Kulkarni · Keiran Paster · Luca Rigazio · Quan Vuong · Hyunggon Park · Minhae Kwon · Rivindu Weerasekera · Shamane Siriwardhanaa · Rui Wang · Ozsel Kilinc · Keith Ross · Yizhou Wang · Simon Schmitt · Thomas Anthony · Evan Cater · Forest Agostinelli · Tegg Sung · Shirou Maruyama · Alexander Shmakov · Devin Schwab · Mohammad Firouzi · Glen Berseth · Denis Osipychev · Jesse Farebrother · Jianlan Luo · William Agnew · Peter Vrancx · Jonathan Heek · Catalin Ionescu · Haiyan Yin · Megumi Miyashita · Nathan Jay · Noga H. Rotman · Sam Leroux · Shaileshh Bojja Venkatakrishnan · Henri Schmidt · Jack Terwilliger · Ishan Durugkar · Jonathan Sauder · David Kas · Arash Tavakoli · Alain-Sam Cohen · Philip Bontrager · Adam Lerer · Thomas Paine · Ahmed Khalifa · Ruben Rodriguez · Avi Singh · Yiming Zhang