In this paper, we explore a novel knowledge-transfer task, termed Deep Model Reassembly (DeRy), for general-purpose model reuse. Given a collection of heterogeneous models pre-trained on distinct sources and with diverse architectures, the goal of DeRy, as its name implies, is to first dissect each model into distinctive building blocks, and then selectively reassemble the derived blocks to produce customized networks under both hardware-resource and performance constraints. The ambitious nature of DeRy inevitably imposes significant challenges, first among them the feasibility of its solution. We strive to showcase that, through the dedicated paradigm proposed in this paper, DeRy can be made not only possible but practically efficient. Specifically, we partition all pre-trained networks jointly via a cover-set optimization and derive a number of equivalence sets, within each of which the network blocks are treated as functionally equivalent and hence interchangeable. The equivalence sets learned in this way, in turn, enable picking and assembling blocks to customize networks subject to certain constraints, which is achieved by solving an integer program backed by a training-free proxy that estimates task performance. The reassembled models achieve gratifying performance while satisfying the user-specified constraints. We demonstrate that, on ImageNet, the best reassembled model achieves 78.6% top-1 accuracy without fine-tuning, which can be further elevated to 83.2% with end-to-end fine-tuning. Our code is available at https://github.com/Adamdad/DeRy.
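To make the second stage concrete, below is a minimal sketch of block selection under a hardware budget, assuming the equivalence sets from the first stage are given. All block names, proxy scores, and FLOPs costs are hypothetical placeholders invented for illustration; the paper formulates this step as an integer program solved with a training-free performance proxy, whereas this toy instance is small enough for exhaustive search.

```python
# Minimal sketch of the DeRy reassembly step: pick one block from each
# equivalence set to maximize a training-free proxy score subject to a
# hardware budget. All values below are made-up placeholders.
from itertools import product

# Each equivalence set holds interchangeable candidate blocks:
# (block_name, proxy_score, cost_in_gflops).
equivalence_sets = [
    [("resnet50.stage1", 0.31, 1.2), ("vit_s.blocks0-3", 0.35, 1.8)],
    [("resnet50.stage2", 0.28, 1.5), ("swin_t.stage2", 0.33, 1.6)],
    [("vit_s.blocks8-11", 0.40, 2.0), ("resnet50.stage4", 0.36, 1.1)],
]
flops_budget = 4.5  # user-specified hardware constraint (GFLOPs)

# Exhaustively enumerate one block per set; keep the feasible choice
# with the highest total proxy score. A real instance with many candidate
# blocks per set would call an ILP solver instead of this loop.
best_score, best_choice = float("-inf"), None
for choice in product(*equivalence_sets):
    cost = sum(block[2] for block in choice)
    score = sum(block[1] for block in choice)
    if cost <= flops_budget and score > best_score:
        best_score, best_choice = score, choice

print("Selected blocks:", [block[0] for block in best_choice])
print("Proxy score: %.2f, GFLOPs: %.1f"
      % (best_score, sum(block[2] for block in best_choice)))
```

The training-free proxy plays the role of the score column here: it ranks candidate assemblies without any gradient updates, so the search itself stays cheap.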
Author Information
Xingyi Yang (National University of Singapore)
Xingyi Yang is a second-year Ph.D. student in the Learning and Vision Lab at the National University of Singapore (NUS), working under the supervision of Prof. Xinchao Wang.
Daquan Zhou (National University of Singapore)
Songhua Liu (National University of Singapore)
Jingwen Ye (National University of Singapore)
Xinchao Wang
More from the Same Authors
- 2022 Poster: Inception Transformer
  Chenyang Si · Weihao Yu · Pan Zhou · Yichen Zhou · Xinchao Wang · Shuicheng Yan
- 2023 Poster: Towards Personalized Federated Learning via Heterogeneous Model Reassembly
  Jiaqi Wang · Xingyi Yang · Suhan Cui · Liwei Che · Lingjuan Lyu · Dongkuan (DK) Xu · Fenglong Ma
- 2023 Poster: Generator Born from Classifier
  Runpeng Yu · Xinchao Wang
- 2023 Poster: Frequency-Enhanced Data Augmentation for Vision-and-Language Navigation
  Keji He · Chenyang Si · Zhihe Lu · Yan Huang · Liang Wang · Xinchao Wang
- 2023 Poster: Structural Pruning for Diffusion Models
  Gongfan Fang · Xinyin Ma · Xinchao Wang
- 2023 Poster: Mixed Samples as Probes for Unsupervised Model Selection in Domain Adaptation
  Dapeng Hu · Jian Liang · Jun Hao Liew · Chuhui Xue · Song Bai · Xinchao Wang
- 2023 Poster: GraphAdapter: Tuning Vision-Language Models With Dual Knowledge Graph
  Xin Li · Dongze Lian · Zhihe Lu · Jiawang Bai · Zhibo Chen · Xinchao Wang
- 2023 Poster: LLM-Pruner: On the Structural Pruning of Large Language Models
  Xinyin Ma · Gongfan Fang · Xinchao Wang
- 2023 Poster: Backprop-Free Dataset Distillation
  Songhua Liu · Xinchao Wang
- 2023 Poster: Expanding Small-Scale Datasets with Guided Imagination
  Yifan Zhang · Daquan Zhou · Bryan Hooi · Kai Wang · Jiashi Feng
- 2022 Spotlight: Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning
  Dongze Lian · Daquan Zhou · Jiashi Feng · Xinchao Wang
- 2022 Spotlight: Lightning Talks 6A-1
  Ziyi Wang · Nian Liu · Yaming Yang · Qilong Wang · Yuanxin Liu · Zongxin Yang · Yizhao Gao · Yanchen Deng · Dongze Lian · Nanyi Fei · Ziyu Guan · Xiao Wang · Shufeng Kong · Xumin Yu · Daquan Zhou · Yi Yang · Fandong Meng · Mingze Gao · Caihua Liu · Yongming Rao · Zheng Lin · Haoyu Lu · Zhe Wang · Jiashi Feng · Zhaolin Zhang · Deyu Bo · Xinchao Wang · Chuan Shi · Jiangnan Li · Jiangtao Xie · Jie Zhou · Zhiwu Lu · Wei Zhao · Bo An · Jiwen Lu · Peihua Li · Jian Pei · Hao Jiang · Cai Xu · Peng Fu · Qinghua Hu · Yijie Li · Weigang Lu · Yanan Cao · Jianbin Huang · Weiping Wang · Zhao Cao · Jie Zhou
- 2022 Spotlight: Dataset Distillation via Factorization
  Songhua Liu · Kai Wang · Xingyi Yang · Jingwen Ye · Xinchao Wang
- 2022 Spotlight: Inception Transformer
  Chenyang Si · Weihao Yu · Pan Zhou · Yichen Zhou · Xinchao Wang · Shuicheng Yan
- 2022 Spotlight: Lightning Talks 2B-1
  Yehui Tang · Jian Wang · Zheng Chen · man zhou · Peng Gao · Chenyang Si · SHANGKUN SUN · Yixing Xu · Weihao Yu · Xinghao Chen · Kai Han · Hu Yu · Yulun Zhang · Chenhui Gou · Teli Ma · Yuanqi Chen · Yunhe Wang · Hongsheng Li · Jinjin Gu · Jianyuan Guo · Qiman Wu · Pan Zhou · Yu Zhu · Jie Huang · Chang Xu · Yichen Zhou · Haocheng Feng · Guodong Guo · yongbing zhang · Ziyi Lin · Feng Zhao · Ge Li · Junyu Han · Jinwei Gu · Jifeng Dai · Chao Xu · Xinchao Wang · Linghe Kong · Shuicheng Yan · Yu Qiao · Chen Change Loy · Xin Yuan · Errui Ding · Yunhe Wang · Deyu Meng · Jingdong Wang · Chongyi Li
- 2022 Poster: Training Spiking Neural Networks with Local Tandem Learning
  Qu Yang · Jibin Wu · Malu Zhang · Yansong Chua · Xinchao Wang · Haizhou Li
- 2022 Poster: Dataset Distillation via Factorization
  Songhua Liu · Kai Wang · Xingyi Yang · Jingwen Ye · Xinchao Wang
- 2022 Poster: Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning
  Dongze Lian · Daquan Zhou · Jiashi Feng · Xinchao Wang
- 2022 Poster: Sharpness-Aware Training for Free
  JIAWEI DU · Daquan Zhou · Jiashi Feng · Vincent Tan · Joey Tianyi Zhou
- 2021 Poster: All Tokens Matter: Token Labeling for Training Better Vision Transformers
  Zi-Hang Jiang · Qibin Hou · Li Yuan · Daquan Zhou · Yujun Shi · Xiaojie Jin · Anran Wang · Jiashi Feng
- 2020 Poster: ConvBERT: Improving BERT with Span-based Dynamic Convolution
  Zi-Hang Jiang · Weihao Yu · Daquan Zhou · Yunpeng Chen · Jiashi Feng · Shuicheng Yan
- 2020 Spotlight: ConvBERT: Improving BERT with Span-based Dynamic Convolution
  Zi-Hang Jiang · Weihao Yu · Daquan Zhou · Yunpeng Chen · Jiashi Feng · Shuicheng Yan