Collaborations among multiple organizations, such as financial institutions, medical centers, and retail markets, are crucial for improving service and performance in decentralized settings. However, the underlying organizations may have little interest in sharing their local data, models, and objective functions. These constraints create new challenges for multi-organization collaboration. In this work, we propose Gradient Assisted Learning (GAL), a new method for multiple organizations to assist each other in supervised learning tasks without sharing local data, models, or objective functions. In this framework, all participants collaboratively optimize the aggregate of local loss functions, and each participant autonomously builds its own model by iteratively fitting the gradients of the overarching objective function. We also provide an asymptotic convergence analysis and practical case studies of GAL. Experimental studies demonstrate that GAL can achieve performance close to that of centralized learning, in which all data, models, and objective functions are fully disclosed.
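In spirit, each GAL round resembles one step of functional gradient descent (as in gradient boosting), distributed across organizations: the label holder broadcasts the gradients of the global loss with respect to the current prediction, each organization fits a private model to those pseudo-residuals using only its own features, and the fitted outputs are aggregated into the running prediction. The sketch below illustrates this under simplifying assumptions (a vertically partitioned regression task, squared-error loss, and uniform aggregation in place of learned assistance weights); all names such as `Organization` and `gal_fit` are illustrative, not the paper's implementation.

```python
# Minimal sketch of Gradient Assisted Learning rounds, assuming a vertically
# partitioned regression task with squared-error loss. Simplified: uniform
# aggregation stands in for learned assistance weights.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

class Organization:
    """One participant: holds private features and fits private models."""
    def __init__(self, X_local):
        self.X = X_local          # local features, never shared
        self.models = []          # one local model per assistance round

    def fit_round(self, pseudo_residuals):
        # Fit a private model to the broadcast pseudo-residuals (negative
        # gradients of the global loss w.r.t. the current prediction).
        m = DecisionTreeRegressor(max_depth=3).fit(self.X, pseudo_residuals)
        self.models.append(m)
        return m.predict(self.X)  # only predictions are exchanged

def gal_fit(orgs, y, rounds=10, lr=0.5):
    """Label holder coordinates: broadcasts gradients, aggregates the
    organizations' fitted outputs, and updates the running prediction."""
    y_hat = np.zeros_like(y, dtype=float)
    for _ in range(rounds):
        grad = y - y_hat                        # -dL/dy_hat for squared error
        outputs = [org.fit_round(grad) for org in orgs]
        y_hat += lr * np.mean(outputs, axis=0)  # simple uniform aggregation
    return y_hat

# Toy usage: two organizations, each holding half of the feature columns.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = X @ rng.normal(size=6) + 0.1 * rng.normal(size=200)
orgs = [Organization(X[:, :3]), Organization(X[:, 3:])]
y_hat = gal_fit(orgs, y)
print("train MSE:", np.mean((y - y_hat) ** 2))
```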
Author Information
Enmao Diao (Duke University)

I am a fourth-year Ph.D. candidate advised by Prof. Vahid Tarokh in Electrical Engineering at Duke University, Durham, North Carolina, USA. I was born in Chengdu, Sichuan, China, in 1994. I received the B.S. degree in Computer Science and Electrical Engineering from the Georgia Institute of Technology, Atlanta, Georgia, USA, in 2016, and the M.S. degree in Electrical Engineering from Harvard University, Cambridge, Massachusetts, USA, in 2018.
Jie Ding (University of Minnesota)
Vahid Tarokh (Duke University)
More from the Same Authors
- 2021: Benchmarking Data-driven Surrogate Simulators for Artificial Electromagnetic Materials
  Yang Deng · Juncheng Dong · Simiao Ren · Omar Khatib · Mohammadreza Soltani · Vahid Tarokh · Willie Padilla · Jordan Malof
- 2022: Building Large Machine Learning Models from Small Distributed Models: A Layer Matching Approach
  xinwei zhang · Bingqing Song · Mehrdad Honarkhah · Jie Ding · Mingyi Hong
- 2022: PerFedSI: A Framework for Personalized Federated Learning with Side Information
  Liam Collins · Enmao Diao · Tanya Roosta · Jie Ding · Tao Zhang
- 2022 Spotlight: Self-Aware Personalized Federated Learning
  Huili Chen · Jie Ding · Eric W. Tramel · Shuang Wu · Anit Kumar Sahu · Salman Avestimehr · Tao Zhang
- 2022 Poster: Self-Aware Personalized Federated Learning
  Huili Chen · Jie Ding · Eric W. Tramel · Shuang Wu · Anit Kumar Sahu · Salman Avestimehr · Tao Zhang
- 2022 Poster: Inference and Sampling for Archimax Copulas
  Yuting Ng · Ali Hasan · Vahid Tarokh
- 2022 Poster: SemiFL: Semi-Supervised Federated Learning for Unlabeled Clients with Alternate Training
  Enmao Diao · Jie Ding · Vahid Tarokh
- 2020 Poster: Assisted Learning: A Framework for Multi-Organization Learning
  Xun Xian · Xinran Wang · Jie Ding · Reza Ghanadan
- 2020 Spotlight: Assisted Learning: A Framework for Multi-Organization Learning
  Xun Xian · Xinran Wang · Jie Ding · Reza Ghanadan
- 2019 Poster: Gradient Information for Representation and Modeling
  Jie Ding · Robert Calderbank · Vahid Tarokh
- 2019 Poster: SpiderBoost and Momentum: Faster Variance Reduction Algorithms
  Zhe Wang · Kaiyi Ji · Yi Zhou · Yingbin Liang · Vahid Tarokh
- 2018 Poster: Learning Bounds for Greedy Approximation with Explicit Feature Maps from Multiple Kernels
  Shahin Shahrampour · Vahid Tarokh