Deep Hyperspherical Learning
Weiyang Liu · Yan-Ming Zhang · Xingguo Li · Zhiding Yu · Bo Dai · Tuo Zhao · Le Song

Tue Dec 05 05:20 PM -- 05:25 PM (PST) @ Hall A

Convolution as inner product has been the founding basis of convolutional neural networks (CNNs) and the key to end-to-end visual representation learning. Benefiting from deeper architectures, recent CNNs have demonstrated increasingly strong representation abilities. Despite such improvement, the increased depth and larger parameter space have also led to challenges in properly training a network. In light of such challenges, we propose hyperspherical convolution (SphereConv), a novel learning framework that gives angular representations on hyperspheres. We introduce SphereNet, a family of deep hyperspherical convolution networks that are distinct from conventional inner-product-based convolutional networks. In particular, SphereNet adopts SphereConv as its basic convolution operator and is supervised by a generalized angular softmax loss, a natural loss formulation under SphereConv. We show that SphereNet can effectively encode discriminative representations and alleviate training difficulty, leading to easier optimization, faster convergence, and better classification performance than its convolutional counterparts. We also provide some theoretical justifications for the advantages of hyperspherical optimization. Experiments and ablation studies have verified our conclusions.

Author Information

Weiyang Liu (Georgia Tech)
Yan-Ming Zhang (Institute of Automation, Chinese Academy of Sciences)
Xingguo Li (Princeton University)
Zhiding Yu (Carnegie Mellon University)
Bo Dai (Georgia Tech)
Tuo Zhao (Georgia Tech)
Le Song (Ant Financial & Georgia Institute of Technology)
