The class imbalance problem, as an important issue in learning node representations, has drawn increasing attention from the community. Although the imbalance considered by existing studies stems from the unequal quantity of labeled examples across classes (quantity imbalance), we argue that graph data expose a unique source of imbalance arising from the asymmetric topological properties of the labeled nodes, i.e., labeled nodes are not equal in terms of their structural role in the graph (topology imbalance). In this work, we first probe the previously unknown topology-imbalance issue, including its characteristics, causes, and threats to semi-supervised node classification. We then provide a unified view to jointly analyze the quantity- and topology-imbalance issues by considering the node influence shift phenomenon under the Label Propagation algorithm. In light of this analysis, we devise an influence conflict detection-based metric, Totoro, to measure the degree of graph topology imbalance, and propose a model-agnostic method, ReNode, to address the topology-imbalance issue by adaptively re-weighting the influence of labeled nodes based on their relative positions to class boundaries. Systematic experiments demonstrate the effectiveness and generalizability of our method in relieving the topology-imbalance issue and promoting semi-supervised node classification. Further analysis unveils the varied sensitivity of different graph neural networks (GNNs) to topology imbalance, which may serve as a new perspective for evaluating GNN architectures.
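Purely as an illustration (not the authors' released code), a minimal NumPy sketch of the conflict-based re-weighting idea described in the abstract might look as follows. The propagation matrix, the conflict score, and the weight schedule are simplified stand-ins for the paper's Totoro and ReNode definitions, and the names and hyperparameters (`alpha`, `w_min`, `w_max`) are assumed values chosen only for the example.

```python
import numpy as np

def renode_weights(adj, labels, train_idx, num_classes,
                   alpha=0.15, w_min=0.5, w_max=1.5):
    """Rough sketch of conflict-based re-weighting of labeled nodes.

    adj:       dense (n, n) symmetric adjacency matrix
    labels:    (n,) integer class labels (used only at train_idx)
    train_idx: indices of labeled training nodes
    """
    n = adj.shape[0]
    # Symmetrically normalized adjacency used for propagation.
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros(n)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    a_norm = adj * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    # Personalized-PageRank-style influence matrix as a Label Propagation surrogate.
    ppr = alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * a_norm)

    # Influence each node receives from the labeled nodes of every class.
    influence = np.zeros((n, num_classes))
    for i in train_idx:
        influence[:, labels[i]] += ppr[:, i]

    # Conflict score per labeled node: influence arriving from other classes
    # (a simplified stand-in for the paper's Totoro metric).
    conflict = np.array([influence[i].sum() - influence[i, labels[i]]
                         for i in train_idx])

    # Cosine-annealed training weights: labeled nodes with less conflict
    # (farther from class boundaries) receive larger weights.
    rank = np.argsort(np.argsort(conflict))  # 0 = least conflict
    weights = w_min + 0.5 * (w_max - w_min) * (1 + np.cos(rank / len(train_idx) * np.pi))
    return dict(zip(train_idx, weights))
```

In use, such per-node weights would typically scale each labeled node's term in the supervised loss of any GNN, which is what makes the approach model-agnostic.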
Author Information
Deli Chen (Tencent Inc. WeChat AI)
Yankai Lin (Tencent, WeChat AI)
Guangxiang Zhao (Peking University)
Xuancheng Ren (Peking University)
Peng Li (Tencent)
Jie Zhou (WeChat AI)
Xu Sun (Peking University)
More from the Same Authors
- 2022 Poster: A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models »
  Yuanxin Liu · Fandong Meng · Zheng Lin · Jiangnan Li · Peng Fu · Yanan Cao · Weiping Wang · Jie Zhou
- 2022: Gradient Knowledge Distillation for Pre-trained Language Models »
  Lean Wang · Lei Li · Xu Sun
- 2022 Spotlight: A Win-win Deal: Towards Sparse and Robust Pre-trained Language Models »
  Yuanxin Liu · Fandong Meng · Zheng Lin · Jiangnan Li · Peng Fu · Yanan Cao · Weiping Wang · Jie Zhou
- 2022 Spotlight: Lightning Talks 6A-1 »
  Ziyi Wang · Nian Liu · Yaming Yang · Qilong Wang · Yuanxin Liu · Zongxin Yang · Yizhao Gao · Yanchen Deng · Dongze Lian · Nanyi Fei · Ziyu Guan · Xiao Wang · Shufeng Kong · Xumin Yu · Daquan Zhou · Yi Yang · Fandong Meng · Mingze Gao · Caihua Liu · Yongming Rao · Zheng Lin · Haoyu Lu · Zhe Wang · Jiashi Feng · Zhaolin Zhang · Deyu Bo · Xinchao Wang · Chuan Shi · Jiangnan Li · Jiangtao Xie · Jie Zhou · Zhiwu Lu · Wei Zhao · Bo An · Jiwen Lu · Peihua Li · Jian Pei · Hao Jiang · Cai Xu · Peng Fu · Qinghua Hu · Yijie Li · Weigang Lu · Yanan Cao · Jianbin Huang · Weiping Wang · Zhao Cao · Jie Zhou
- 2022 Poster: Retrieve, Reason, and Refine: Generating Accurate and Faithful Patient Instructions »
  Fenglin Liu · Bang Yang · Chenyu You · Xian Wu · Shen Ge · Zhangdaihong Liu · Xu Sun · Yang Yang · David Clifton
- 2021: Continual Learning in Large-Scale Pre-Training »
  Xu Sun
- 2021 Poster: Auto-Encoding Knowledge Graph for Unsupervised Medical Report Generation »
  Fenglin Liu · Chenyu You · Xian Wu · Shen Ge · Sheng wang · Xu Sun
- 2020 Poster: Prophet Attention: Predicting Attention with Future Attention »
  Fenglin Liu · Xuancheng Ren · Xian Wu · Shen Ge · Wei Fan · Yuexian Zou · Xu Sun
- 2019 Poster: Understanding and Improving Layer Normalization »
  Jingjing Xu · Xu Sun · Zhiyuan Zhang · Guangxiang Zhao · Junyang Lin
- 2019 Poster: Aligning Visual Regions and Textual Concepts for Semantic-Grounded Image Representations »
  Fenglin Liu · Yuanxin Liu · Xuancheng Ren · Xiaodong He · Xu Sun
- 2014 Poster: Structure Regularization for Structured Prediction »
  Xu Sun