Transfusion: Understanding Transfer Learning for Medical Imaging
Maithra Raghu · Chiyuan Zhang · Jon Kleinberg · Samy Bengio

Thu Dec 12 05:00 PM -- 07:00 PM (PST) @ East Exhibition Hall B + C #130

Transfer learning from natural image datasets, particularly ImageNet, using standard large models and corresponding pretrained weights has become a de facto method for deep learning applications to medical imaging. However, there are fundamental differences in data sizes, features, and task specifications between natural image classification and the target medical tasks, and there is little understanding of the effects of transfer. In this paper, we explore properties of transfer learning for medical imaging. A performance evaluation on two large-scale medical imaging tasks shows that, surprisingly, transfer offers little benefit to performance, and simple, lightweight models can perform comparably to ImageNet architectures. Investigating the learned representations and features, we find that some of the differences from transfer learning are due to the over-parametrization of standard models rather than sophisticated feature reuse. We isolate where useful feature reuse occurs, and outline the implications for more efficient model exploration. We also explore feature-independent benefits of transfer arising from weight scalings.
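The last point, feature-independent benefits from weight scalings, can be sketched as initializing a fresh layer from a distribution whose statistics match those of the pretrained weights, rather than copying the weights themselves. The function name and the choice of a Gaussian below are illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np

def mean_var_init(pretrained_weights, rng=None):
    """Sample fresh weights i.i.d. from a Gaussian whose mean and
    standard deviation match those of a pretrained layer's weights.

    Only the per-layer weight statistics are transferred, not the
    learned features, so any benefit over a standard random init is
    feature-independent.
    """
    rng = np.random.default_rng() if rng is None else rng
    mu = pretrained_weights.mean()
    sigma = pretrained_weights.std()
    return rng.normal(loc=mu, scale=sigma, size=pretrained_weights.shape)

# Stand-in for a pretrained 7x7 conv filter bank (64 filters, 3 channels);
# real usage would load actual ImageNet-pretrained weights instead.
pretrained = np.random.default_rng(0).normal(0.02, 0.1, size=(64, 3, 7, 7))
scaled_init = mean_var_init(pretrained, rng=np.random.default_rng(1))
```

The new tensor has the same shape and approximately the same mean and variance as the pretrained layer, but none of its learned structure.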

Author Information

Maithra Raghu (Cornell University and Google Brain)
Chiyuan Zhang (Google Brain)
Jon Kleinberg (Cornell University)
Samy Bengio (Google Research, Brain Team)