To what extent is the success of deep visualization due to training? Could deep visualization be performed with untrained networks of random weights? To address these questions, we explore three popular deep visualization tasks using untrained convolutional neural networks with random weights. First, we invert representations in feature spaces and reconstruct images from white-noise inputs; the reconstruction quality is statistically higher than that of the same method applied to a well-trained network with the same architecture. Second, we synthesize textures using scaled correlations of representations in multiple layers; the results are almost indistinguishable from the original natural textures and from textures synthesized with the trained network. Third, by recasting the content of an image in the style of various artworks, we create artistic images of high perceptual quality, competitive with the prior work of Gatys et al. on pretrained networks. To our knowledge, this is the first demonstration of image representations using untrained deep neural networks. Our work provides a new tool for studying the representations induced by deep network architectures, sheds new light on deep visualization, and may lead to a way to compare network architectures without training.
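The inversion procedure described in the abstract can be sketched in a few lines. The following is a minimal, illustrative PyTorch sketch, not the authors' released code: the VGG-19 architecture, the layer cutoff, and all hyperparameters are assumptions for illustration. It builds a randomly initialized feature extractor and optimizes a white-noise image to match a target representation; swapping the feature loss for Gram-matrix ("scaled correlation") losses over several layers would give the texture-synthesis variant.

```python
# Illustrative sketch: representation inversion with an UNTRAINED CNN.
# Assumptions (not from the paper): VGG-19 architecture, cutoff at
# features[:16], Adam with lr=0.05, 500 steps, 224x224 inputs.
import torch
import torch.nn.functional as F
import torchvision.models as models

torch.manual_seed(0)

# Random-weight feature extractor: weights=None leaves VGG-19 untrained.
net = models.vgg19(weights=None).features[:16].eval()
for p in net.parameters():
    p.requires_grad_(False)          # only the input image is optimized

def gram_matrix(feat):
    """Scaled correlations of feature maps (used for the texture task)."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

target = torch.rand(1, 3, 224, 224)  # stand-in for a real image
with torch.no_grad():
    target_feat = net(target)        # representation to invert

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # white-noise init
opt = torch.optim.Adam([x], lr=0.05)

for step in range(500):
    opt.zero_grad()
    loss = F.mse_loss(net(x), target_feat)
    # Texture-synthesis variant: match Gram matrices instead, e.g.
    #   loss = F.mse_loss(gram_matrix(net(x)), gram_matrix(target_feat))
    loss.backward()
    opt.step()
    x.data.clamp_(0, 1)              # keep pixels in [0, 1]
```

For the artistic-style task, the abstract's third experiment, a content loss of this form would be combined with multi-layer Gram losses in the manner of Gatys et al., but computed on the random-weight network.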
Author Information
Kun He (Huazhong University of Science and Technology)
Professor of Computer Science. Research areas include algorithms, data mining, machine learning, and deep learning.
Yan Wang (Huazhong University of Science and Technology)
John Hopcroft (Cornell University)
More from the Same Authors
- 2023 Poster: FABind: Fast and Accurate Protein-Ligand Binding
  Qizhi Pei · Kaiyuan Gao · Lijun Wu · Jinhua Zhu · Yingce Xia · Shufang Xie · Tao Qin · Kun He · Tie-Yan Liu · Rui Yan
- 2023 Poster: Rethinking the Backward Propagation for Adversarial Transferability
  Wang Xiaosen · Kangheng Tong · Kun He
- 2022 Spotlight: Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power
  Binghui Li · Jikai Jin · Han Zhong · John Hopcroft · Liwei Wang
- 2022 Spotlight: Lightning Talks 2A-1
  Caio Kalil Lauand · Ryan Strauss · Yasong Feng · Lingyu Gu · Alireza Fathollah Pour · Oren Mangoubi · Jianhao Ma · Binghui Li · Hassan Ashtiani · Yongqi Du · Salar Fattahi · Sean Meyn · Jikai Jin · Nisheeth Vishnoi · Zengfeng Huang · Junier B Oliva · Yuan Zhang · Han Zhong · Tianyu Wang · John Hopcroft · Di Xie · Shiliang Pu · Liwei Wang · Robert Qiu · Zhenyu Liao
- 2022 Poster: Why Robust Generalization in Deep Learning is Difficult: Perspective of Expressive Power
  Binghui Li · Jikai Jin · Han Zhong · John Hopcroft · Liwei Wang
- 2018 Poster: Towards Understanding Learning Representations: To What Extent Do Different Neural Networks Learn the Same Representation
  Liwei Wang · Lunjia Hu · Jiayuan Gu · Zhiqiang Hu · Yue Wu · Kun He · John Hopcroft
- 2018 Spotlight: Towards Understanding Learning Representations: To What Extent Do Different Neural Networks Learn the Same Representation
  Liwei Wang · Lunjia Hu · Jiayuan Gu · Zhiqiang Hu · Yue Wu · Kun He · John Hopcroft
- 2013 Poster: Sign Cauchy Projections and Chi-Square Kernel
  Ping Li · Gennady Samorodnitsky · John Hopcroft