Workshop
Rewards Encoding Environment Dynamics Improves Preference-based Reinforcement Learning
Katherine Metcalf · Miguel Sarabia · Barry-John Theobald

Workshop
Relative Behavioral Attributes: Filling the Gap between Symbolic Goal Specification and Reward Learning from Human Preferences
Lin Guan · Karthik Valmeekam · Subbarao Kambhampati

Workshop
Symbol Guided Hindsight Priors for Reward Learning from Human Preferences
Mudit Verma · Katherine Metcalf

Workshop · Sat 12:35
Anca Dragan: Learning human preferences from language
Anca Dragan

Workshop
Efficient Preference-Based Reinforcement Learning Using Learned Dynamics Models
Yi Liu · Gaurav Datta · Ellen Novoseller · Daniel Brown

Poster · Wed 9:00
Explaining Preferences with Shapley Values
Robert Hu · Siu Lun Chau · Jaime Ferrando Huertas · Dino Sejdinovic

Workshop
Towards Customizable Reinforcement Learning Agents: Enabling Preference Specification through Online Vocabulary Expansion
Utkarsh Soni · Sarath Sreedharan · Mudit Verma · Lin Guan · Matthew Marquez · Subbarao Kambhampati

Poster · Tue 14:00
Diversified Recommendations for Agents with Adaptive Preferences
William Brown · Arpit Agarwal

Poster · Wed 14:00
How Would The Viewer Feel? Estimating Wellbeing From Video Scenarios
Mantas Mazeika · Eric Tang · Andy Zou · Steven Basart · Jun Shern Chan · Dawn Song · David Forsyth · Jacob Steinhardt · Dan Hendrycks

Poster · Tue 9:00
Learning from Stochastically Revealed Preference
John Birge · Xiaocheng Li · Chunlin Sun

Poster · Thu 9:00
Invariance Learning based on Label Hierarchy
Shoji Toyota · Kenji Fukumizu

Poster · Thu 9:00
Unsupervised Learning of Equivariant Structure from Sequences
Takeru Miyato · Masanori Koyama · Kenji Fukumizu