The growing literature on Federated Learning (FL) has recently inspired Federated Reinforcement Learning (FRL), in which multiple agents federatively build a better decision-making policy without sharing raw trajectories. Despite its promising applications, existing works on FRL fail to (I) provide a theoretical analysis of its convergence, and (II) account for random system failures and adversarial attacks. To this end, we propose the first FRL framework whose convergence is guaranteed and which is tolerant to fewer than half of the participating agents suffering random system failures or acting as adversarial attackers. We prove that the sample efficiency of the proposed framework is guaranteed to improve with the number of agents and is able to account for such potential failures or attacks. All theoretical results are empirically verified on various RL benchmark tasks.
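To make the fault-tolerance idea concrete, the minimal sketch below shows one standard way a federated server can tolerate a minority of faulty or adversarial agents: combining the agents' policy-gradient estimates with a coordinate-wise median instead of a plain average. This is an illustrative assumption only, not necessarily the aggregation rule used in the paper; the function names and toy data are hypothetical.

# Illustrative sketch (hypothetical, not the paper's stated algorithm): a server
# aggregates per-agent policy-gradient estimates with a coordinate-wise median,
# which tracks the honest majority as long as fewer than half of the agents
# send corrupted (failed or adversarial) gradients.
import numpy as np

def robust_aggregate(agent_grads):
    """Coordinate-wise median of the agents' gradient estimates."""
    return np.median(np.stack(agent_grads, axis=0), axis=0)

# Toy usage: 5 honest agents report similar gradients while 2 faulty or
# adversarial agents report arbitrary values; the median stays close to the
# honest consensus, whereas a plain mean would be hijacked by the outliers.
rng = np.random.default_rng(0)
honest = [np.ones(4) + 0.1 * rng.standard_normal(4) for _ in range(5)]
faulty = [1e6 * rng.standard_normal(4) for _ in range(2)]
print(robust_aggregate(honest + faulty))            # close to [1, 1, 1, 1]
print(np.mean(np.stack(honest + faulty), axis=0))   # dominated by faulty values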
Author Information
Xiaofeng Fan (National University of Singapore)
Yining Ma (National University of Singapore)
Zhongxiang Dai (National University of Singapore)
Wei Jing (Alibaba Group)
Cheston Tan (Institute for Infocomm Research, Singapore)
Bryan Kian Hsiang Low (National University of Singapore)
More from the Same Authors
- 2022 Poster: Learning Generalizable Models for Vehicle Routing Problems via Knowledge Distillation
  Jieyi Bi · Yining Ma · Jiahai Wang · Zhiguang Cao · Jinbiao Chen · Yuan Sun · Yeow Meng Chee
- 2022 Spotlight: Lightning Talks 5B-3
  Yanze Wu · Jie Xiao · Nianzu Yang · Jieyi Bi · Jian Yao · Yiting Chen · Qizhou Wang · Yangru Huang · Yongqiang Chen · Peixi Peng · Yuxin Hong · Xintao Wang · Feng Liu · Yining Ma · Qibing Ren · Xueyang Fu · Yonggang Zhang · Kaipeng Zeng · Jiahai Wang · GEN LI · Yonggang Zhang · Qitian Wu · Yifan Zhao · Chiyu Wang · Junchi Yan · Feng Wu · Yatao Bian · Xiaosong Jia · Ying Shan · Zhiguang Cao · Zheng-Jun Zha · Guangyao Chen · Tianjun Xiao · Han Yang · Jing Zhang · Jinbiao Chen · MA Kaili · Yonghong Tian · Junchi Yan · Chen Gong · Tong He · Binghui Xie · Yuan Sun · Francesco Locatello · Tongliang Liu · Yeow Meng Chee · David P Wipf · Tongliang Liu · Bo Han · Bo Han · Yanwei Fu · James Cheng · Zheng Zhang
- 2022 Spotlight: Learning Generalizable Models for Vehicle Routing Problems via Knowledge Distillation
  Jieyi Bi · Yining Ma · Jiahai Wang · Zhiguang Cao · Jinbiao Chen · Yuan Sun · Yeow Meng Chee
- 2022 Poster: Trade-off between Payoff and Model Rewards in Shapley-Fair Collaborative Machine Learning
  Quoc Phong Nguyen · Bryan Kian Hsiang Low · Patrick Jaillet
- 2022 Poster: Sample-Then-Optimize Batch Neural Thompson Sampling
  Zhongxiang Dai · YAO SHU · Bryan Kian Hsiang Low · Patrick Jaillet
- 2022 Poster: Unifying and Boosting Gradient-Based Training-Free Neural Architecture Search
  YAO SHU · Zhongxiang Dai · Zhaoxuan Wu · Bryan Kian Hsiang Low
- 2021: AVoE: A Synthetic 3D Dataset on Understanding Violation of Expectation for Artificial Cognition
  Arijit Dasgupta · Jiafei Duan · Marcelo Ang Jr · Cheston Tan
- 2021 Workshop: New Frontiers in Federated Learning: Privacy, Fairness, Robustness, Personalization and Data Ownership
  Nghia Hoang · Lam Nguyen · Pin-Yu Chen · Tsui-Wei Weng · Sara Magliacane · Bryan Kian Hsiang Low · Anoop Deoras
- 2021 Poster: Differentially Private Federated Bayesian Optimization with Distributed Exploration
  Zhongxiang Dai · Bryan Kian Hsiang Low · Patrick Jaillet
- 2021 Poster: Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning
  Xinyi Xu · Lingjuan Lyu · Xingjun Ma · Chenglin Miao · Chuan Sheng Foo · Bryan Kian Hsiang Low
- 2021 Poster: Learning to Iteratively Solve Routing Problems with Dual-Aspect Collaborative Transformer
  Yining Ma · Jingwen Li · Zhiguang Cao · Wen Song · Le Zhang · Zhenghua Chen · Jing Tang
- 2021 Poster: Optimizing Conditional Value-At-Risk of Black-Box Functions
  Quoc Phong Nguyen · Zhongxiang Dai · Bryan Kian Hsiang Low · Patrick Jaillet
- 2021 Poster: Validation Free and Replication Robust Volume-based Data Valuation
  Xinyi Xu · Zhaoxuan Wu · Chuan Sheng Foo · Bryan Kian Hsiang Low
- 2020 Poster: Variational Bayesian Unlearning
  Quoc Phong Nguyen · Bryan Kian Hsiang Low · Patrick Jaillet
- 2020 Poster: Federated Bayesian Optimization via Thompson Sampling
  Zhongxiang Dai · Bryan Kian Hsiang Low · Patrick Jaillet
- 2020 Poster: Efficient Exploration of Reward Functions in Inverse Reinforcement Learning via Bayesian Optimization
  Sreejith Balakrishnan · Quoc Phong Nguyen · Bryan Kian Hsiang Low · Harold Soh
- 2019 Poster: Implicit Posterior Variational Inference for Deep Gaussian Processes
  Haibin YU · Yizhou Chen · Bryan Kian Hsiang Low · Patrick Jaillet · Zhongxiang Dai
- 2019 Spotlight: Implicit Posterior Variational Inference for Deep Gaussian Processes
  Haibin YU · Yizhou Chen · Bryan Kian Hsiang Low · Patrick Jaillet · Zhongxiang Dai
- 2017: Poster Session 2
  Farhan Shafiq · Antonio Tomas Nevado Vilchez · Takato Yamada · Sakyasingha Dasgupta · Robin Geyer · Moin Nabi · Crefeda Rodrigues · Edoardo Manino · Alexantrou Serb · Miguel A. Carreira-Perpinan · Kar Wai Lim · Bryan Kian Hsiang Low · Rohit Pandey · Marie C White · Pavel Pidlypenskyi · Xue Wang · Christine Kaeser-Chen · Michael Zhu · Suyog Gupta · Sam Leroux
- 2017: Aligned AI Poster Session
  Amanda Askell · Rafal Muszynski · William Wang · Yaodong Yang · Quoc Nguyen · Bryan Kian Hsiang Low · Patrick Jaillet · Candice Schumann · Anqi Liu · Peter Eckersley · Angelina Wang · William Saunders
- 2015 Poster: Inverse Reinforcement Learning with Locally Consistent Reward Functions
  Quoc Phong Nguyen · Bryan Kian Hsiang Low · Patrick Jaillet
- 2013 Poster: Neural representation of action sequences: how far can a simple snippet-matching model take us?
  Cheston Tan · Jedediah M Singer · Thomas Serre · David Sheinberg · Tomaso Poggio