Search All 2022 Events

17 Results (Page 1 of 2)
Poster
Thu 9:00 MoCoDA: Model-based Counterfactual Data Augmentation
Silviu Pitis · Elliot Creager · Ajay Mandlekar · Animesh Garg
Poster
A Policy-Guided Imitation Approach for Offline Reinforcement Learning
Haoran Xu · Li Jiang · Jianxiong Li · Xianyuan Zhan
Poster
Wed 14:00 A Near-Optimal Primal-Dual Method for Off-Policy Learning in CMDP
Fan Chen · Junyu Zhang · Zaiwen Wen
Workshop
Offline Policy Evaluation for Reinforcement Learning with Adaptively Collected Data
Sunil Madhow · Dan Qiao · Yu-Xiang Wang
Workshop
AMORE: A Model-based Framework for Improving Arbitrary Baseline Policies with Offline Data
Tengyang Xie · Mohak Bhardwaj · Nan Jiang · Ching-An Cheng
Workshop
Diffusion Policies as an Expressive Policy Class for Offline Reinforcement Learning
Zhendong Wang · Jonathan J Hunt · Mingyuan Zhou
Poster
Tue 9:00 A Unified Framework for Alternating Offline Model Training and Policy Learning
Shentao Yang · Shujian Zhang · Yihao Feng · Mingyuan Zhou
Workshop
Keep Calm and Carry Offline: Policy refinement in offline reinforcement learning
Alex Beeson · Giovanni Montana
Poster
Wed 14:00 NeoRL: A Near Real-World Benchmark for Offline Reinforcement Learning
Rong-Jun Qin · Xingyuan Zhang · Songyi Gao · Xiong-Hui Chen · Zewen Li · Weinan Zhang · Yang Yu
Workshop
Efficient Offline Policy Optimization with a Learned Model
Zichen Liu · Siyi Li · Wee Sun Lee · Shuicheng Yan · Zhongwen Xu
Workshop
Fine-tuning Offline Policies with Optimistic Action Selection
Max Sobol Mark · Ali Ghadirzadeh · Xi Chen · Chelsea Finn
Workshop
Offline Reinforcement Learning with Closed-Form Policy Improvement Operators
Jiachen Li · Edwin Zhang · Ming Yin · Qinxun Bai · Yu-Xiang Wang · William Yang Wang