We propose a new principle for transfer learning, based on a straightforward intuition: if two domains are similar to each other, the model trained on one domain should also perform well on the other domain, and vice versa. To formalize this intuition, we define the performance gap as a measure of the discrepancy between the source and target domains. We derive generalization bounds for the instance weighting approach to transfer learning, showing that the performance gap can be viewed as an algorithm-dependent regularizer, which controls the model complexity. Our theoretical analysis provides new insight into transfer learning and motivates a set of general, principled rules for designing new instance weighting schemes for transfer learning. These rules lead to gapBoost, a novel and principled boosting approach for transfer learning. Our experimental evaluation on benchmark data sets shows that gapBoost significantly outperforms previous boosting-based transfer learning algorithms.
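The abstract's specific weighting rules derived from the performance gap are given in the paper itself and are not reproduced here. As a rough illustration of the instance-weighting boosting family that gapBoost belongs to, the following is a minimal TrAdaBoost-style sketch: source and target instances are pooled, misclassified source instances are down-weighted (treated as less transferable), and target instances get a standard AdaBoost update. All function names and the source down-weighting constant `beta` are illustrative assumptions, not the actual gapBoost algorithm.

```python
import math

def stump_train(X, y, w):
    """Pick the (feature, threshold, polarity) decision stump
    minimizing weighted 0/1 error on labels in {-1, +1}."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            for pol in (1, -1):
                pred = [pol if x[f] >= t else -pol for x in X]
                err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
                if best is None or err < best[0]:
                    best = (err, f, t, pol)
    return best[1:]

def stump_predict(h, x):
    f, t, pol = h
    return pol if x[f] >= t else -pol

def transfer_boost(Xs, ys, Xt, yt, rounds=10):
    """Instance-weighted boosting over pooled source + target data
    (TrAdaBoost-style sketch; NOT the gapBoost update rules)."""
    X, y = Xs + Xt, ys + yt
    n_s = len(Xs)                       # first n_s instances are source
    w = [1.0 / len(X)] * len(X)
    ensemble = []
    # illustrative source down-weighting constant
    beta = 1.0 / (1.0 + math.sqrt(2 * math.log(max(n_s, 1)) / rounds))
    for _ in range(rounds):
        h = stump_train(X, y, w)
        pred = [stump_predict(h, x) for x in X]
        # weighted error measured on target instances only
        err_t = sum(wi for i, (wi, p, yi) in enumerate(zip(w, pred, y))
                    if i >= n_s and p != yi)
        wt_t = sum(w[n_s:])
        err = err_t / wt_t if wt_t else 0.0
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        for i in range(len(w)):
            miss = pred[i] != y[i]
            if i < n_s:
                # misclassified source instance: shrink its weight
                w[i] *= beta if miss else 1.0
            else:
                # target instance: standard AdaBoost reweighting
                w[i] *= math.exp(alpha if miss else -alpha)
        z = sum(w)
        w = [wi / z for wi in w]
        ensemble.append((alpha, h))
    return ensemble

def predict(ensemble, x):
    s = sum(a * stump_predict(h, x) for a, h in ensemble)
    return 1 if s >= 0 else -1
```

The sketch uses exhaustive decision stumps as weak learners for self-containedness; any weighted weak learner would do in their place.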
Author Information
Boyu Wang (University of Western Ontario)
Jorge Mendez (University of Pennsylvania)
Mingbo Cai (University of Tokyo)
Eric Eaton (University of Pennsylvania)
More from the Same Authors
- 2022: Land Use Prediction using Electro-Optical to SAR Few-Shot Transfer Learning
  Marcel Hussing · Karen Li · Eric Eaton
- 2022 Spotlight: Lightning Talks 2B-4
  Feiyi Xiao · Amrutha Saseendran · Kwangho Kim · Keyu Yan · Changjian Shui · Guangxi Li · Shikun Li · Edward Kennedy · Man Zhou · Gezheng Xu · Ruilin Ye · Xiaobo Xia · Junjie Tang · Kathrin Skubch · Stefan Falkner · Hansong Zhang · Jose Zubizarreta · Huaying Fang · Xuanqiang Zhao · Jie Huang · Qi Chen · Yibing Zhan · Jiaqi Li · Xin Wang · Ruibin Xi · Feng Zhao · Margret Keuper · Charles Ling · Shiming Ge · Chengjun Xie · Tongliang Liu · Tal Arbel · Chongyi Li · Danfeng Hong · Boyu Wang · Christian Gagné
- 2022 Spotlight: On Learning Fairness and Accuracy on Multiple Subgroups
  Changjian Shui · Gezheng Xu · Qi Chen · Jiaqi Li · Charles Ling · Tal Arbel · Boyu Wang · Christian Gagné
- 2022 Poster: On Learning Fairness and Accuracy on Multiple Subgroups
  Changjian Shui · Gezheng Xu · Qi Chen · Jiaqi Li · Charles Ling · Tal Arbel · Boyu Wang · Christian Gagné
- 2022 Affinity Workshop: LatinX in AI
  Maria Luisa Santiago · Juan Banda · CJ Barberan · Miguel Gonzalez-Mendoza · Caio Davi · Sara Garcia · Jorge Diaz · Fanny Nina Paravecino · Carlos Miranda · Gissella Bejarano Nicho · Fabian Latorre · Andres Munoz Medina · Abraham Ramos · Laura Montoya · Isabel Metzger · Andres Marquez · Miguel Felipe Arevalo-Castiblanco · Jorge Mendez · Karla Caballero · Atnafu Lambebo Tonja · Germán Olivo · Karla Caballero Barajas · Francisco Zabala
- 2021: Learning to perceive objects by prediction
  Tushar Arora · Li Erran Li · Mingbo Cai
- 2020 Poster: Lifelong Policy Gradient Learning of Factored Policies for Faster Training Without Forgetting
  Jorge Mendez · Boyu Wang · Eric Eaton
- 2018 Poster: Lifelong Inverse Reinforcement Learning
  Jorge Mendez · Shashank Shivkumar · Eric Eaton
- 2016 Poster: A Bayesian method for reducing bias in neural representational similarity analysis
  Mingbo Cai · Nicolas W Schuck · Jonathan Pillow · Yael Niv