Recently, complex-valued neural networks have received increasing attention due to successful applications in various tasks, as well as their potential advantages of better theoretical properties and richer representational capacity. However, how the training dynamics of complex networks compare with those of real networks remains an open problem. In this paper, we investigate the dynamics of deep complex networks under real-valued backpropagation in the infinite-width limit via the neural tangent kernel (NTK). We first extend the Tensor Programs framework to the complex domain, showing that the dynamics of any basic complex network architecture are governed by its NTK under real-valued backpropagation. We then propose a way to compare the training dynamics of complex and real networks by studying their NTKs. Surprisingly, we prove that for most complex activation functions, the commonly used real-valued backpropagation reduces the training dynamics of complex networks to those of ordinary real networks as the widths tend to infinity, thus eliminating the characteristics of complex-valued neural networks. Finally, experiments validate our theoretical findings numerically.
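As a quick numerical illustration of the abstract's main claim (a minimal sketch, not the authors' code), the snippet below compares the empirical NTK of a one-hidden-layer complex network with a split (CReLU) activation against that of a comparable real network. Storing the complex weights as pairs of real arrays makes `jax.grad` compute exactly the real-valued backpropagation gradients, i.e., derivatives with respect to the real and imaginary parts as independent real parameters. The architecture, initialization scale, and width here are illustrative assumptions; at large width the two kernels should be close, consistent with the paper's reduction result.

```python
import jax
import jax.numpy as jnp

def complex_net(params, x):
    # Complex weights stored as pairs of real arrays, so gradients below are
    # "real-valued backpropagation": d/d(Re W), d/d(Im W), d/d(Re v), d/d(Im v).
    Wr, Wi, vr, vi = params
    z = (Wr + 1j * Wi) @ x / jnp.sqrt(x.shape[0])         # NTK-style 1/sqrt(fan_in)
    h = jax.nn.relu(z.real) + 1j * jax.nn.relu(z.imag)    # split (CReLU) activation
    return ((vr + 1j * vi) @ h).real / jnp.sqrt(Wr.shape[0])  # real output head

def real_net(params, x):
    W, v = params
    return v @ jax.nn.relu(W @ x / jnp.sqrt(x.shape[0])) / jnp.sqrt(W.shape[0])

def empirical_ntk(f, params, x1, x2):
    # Theta(x1, x2) = sum over all real parameters p of df(x1)/dp * df(x2)/dp.
    g1, g2 = jax.grad(f)(params, x1), jax.grad(f)(params, x2)
    return sum(jnp.vdot(a, b) for a, b in
               zip(jax.tree_util.tree_leaves(g1), jax.tree_util.tree_leaves(g2)))

d, n = 4, 8192  # illustrative input dimension and width
k = jax.random.split(jax.random.PRNGKey(0), 4)
# Complex init: Re/Im parts i.i.d. N(0, 1/2), so each complex weight has unit variance.
cparams = tuple(jax.random.normal(ki, s) / jnp.sqrt(2.0)
                for ki, s in zip(k, [(n, d), (n, d), (n,), (n,)]))
rparams = (jax.random.normal(k[0], (n, d)), jax.random.normal(k[2], (n,)))
x1, x2 = jnp.ones(d), jnp.arange(1.0, d + 1.0)
print(empirical_ntk(complex_net, cparams, x1, x2))  # these two values should
print(empirical_ntk(real_net, rparams, x1, x2))     # approach each other as n grows
```

For this split-ReLU construction the real and imaginary paths decouple into two independent real ReLU networks, which is one concrete way to see how real-valued backpropagation can collapse the complex dynamics onto real ones; the paper's Tensor Programs argument covers general architectures and activations.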
Author Information
Zhi-Hao Tan (Nanjing University)
Yi Xie (Nanjing University)
Yuan Jiang (National Key Lab for Novel Software Technology)
Zhi-Hua Zhou (Nanjing University)
More from the Same Authors
- 2022 Spotlight: Real-Valued Backpropagation is Unsuitable for Complex-Valued Neural Networks
  Zhi-Hao Tan · Yi Xie · Yuan Jiang · Zhi-Hua Zhou
- 2022 Spotlight: Lightning Talks 3A-2
  shuwen yang · Xu Zhang · Delvin Ce Zhang · Lan-Zhe Guo · Renzhe Xu · Zhuoer Xu · Yao-Xiang Ding · Weihan Li · Xingxuan Zhang · Xi-Zhu Wu · Zhenyuan Yuan · Hady Lauw · Yu Qi · Yi-Ge Zhang · Zhihao Yang · Guanghui Zhu · Dong Li · Changhua Meng · Kun Zhou · Gang Pan · Zhi-Fan Wu · Bo Li · Minghui Zhu · Zhi-Hua Zhou · Yafeng Zhang · Yingxue Zhang · shiwen cui · Jie-Jing Shao · Zhanguang Zhang · Zhenzhe Ying · Xiaolong Chen · Yu-Feng Li · Guojie Song · Peng Cui · Weiqiang Wang · Ming GU · Jianye Hao · Yihua Huang
- 2022 Spotlight: Pre-Trained Model Reusability Evaluation for Small-Data Transfer Learning
  Yao-Xiang Ding · Xi-Zhu Wu · Kun Zhou · Zhi-Hua Zhou
- 2022 Poster: Adapting to Online Label Shift with Provable Guarantees
  Yong Bai · Yu-Jie Zhang · Masashi Sugiyama · Zhi-Hua Zhou
- 2022 Poster: Theoretically Provable Spiking Neural Networks
  Shao-Qun Zhang · Zhi-Hua Zhou
- 2022 Poster: Pre-Trained Model Reusability Evaluation for Small-Data Transfer Learning
  Yao-Xiang Ding · Xi-Zhu Wu · Kun Zhou · Zhi-Hua Zhou
- 2022 Poster: Sound and Complete Causal Identification with Latent Variables Given Local Background Knowledge
  Tian-Zuo Wang · Tian Qin · Zhi-Hua Zhou
- 2022 Poster: Efficient Methods for Non-stationary Online Learning
  Yan-Feng Xie · Lijun Zhang · Zhi-Hua Zhou
- 2022 Poster: Depth is More Powerful than Width with Prediction Concatenation in Deep Forest
  Shen-Huan Lyu · Yi-Xiao He · Zhi-Hua Zhou
- 2021 Poster: Fast Abductive Learning by Similarity-based Consistency Optimization
  Yu-Xuan Huang · Wang-Zhou Dai · Le-Wen Cai · Stephen H Muggleton · Yuan Jiang
- 2020 Poster: Provably Robust Metric Learning
  Lu Wang · Xuanqing Liu · Jinfeng Yi · Yuan Jiang · Cho-Jui Hsieh
- 2017 Poster: Improved Dynamic Regret for Non-degenerate Functions
  Lijun Zhang · Tianbao Yang · Jinfeng Yi · Rong Jin · Zhi-Hua Zhou
- 2017 Poster: Learning with Feature Evolvable Streams
  Bo-Jian Hou · Lijun Zhang · Zhi-Hua Zhou
- 2017 Poster: Subset Selection under Noise
  Chao Qian · Jing-Cheng Shi · Yang Yu · Ke Tang · Zhi-Hua Zhou
- 2015 Poster: Subset Selection by Pareto Optimization
  Chao Qian · Yang Yu · Zhi-Hua Zhou