

Poster

Graph Convolutions Enrich the Self-Attention in Transformers!

Jeongwhan Choi · Hyowon Wi · Jayoung Kim · Yehjin Shin · Kookjin Lee · Nathaniel Trask · Noseong Park

East Exhibit Hall A-C #2100
Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Transformers, renowned for their self-attention mechanism, have achieved state-of-the-art performance across various tasks in natural language processing, computer vision, time-series modeling, and more. However, one of the challenges with deep Transformer models is the oversmoothing problem, where representations across layers converge to indistinguishable values, leading to significant performance degradation. We interpret the original self-attention as a simple graph filter and redesign it from a graph signal processing (GSP) perspective. We propose graph-filter-based self-attention (GFSA), which learns a more general yet effective graph filter; its complexity is only slightly larger than that of the original self-attention mechanism. We demonstrate that GFSA improves the performance of Transformers in various fields, including computer vision, natural language processing, graph-level tasks, speech recognition, and code classification.
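To make the graph-filter view of self-attention concrete, below is a minimal PyTorch-style sketch in which the softmax attention matrix is treated as a graph operator and the layer output is a short learnable polynomial in it. This is an illustrative assumption-laden sketch, not the paper's exact GFSA formulation: the class name PolyGraphFilterAttention, the per-head coefficients w0, w1, wK, the filter order K, and the cheap approximation of the K-th power are all choices made here for exposition.

# Minimal sketch, assuming a low-order polynomial graph filter over the
# attention matrix A (not the authors' exact GFSA definition):
#   output ≈ (w0 * I + w1 * A + wK * A^K) V,  with A^K approximated cheaply.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PolyGraphFilterAttention(nn.Module):
    """Hypothetical self-attention whose mixing matrix is a polynomial graph filter."""

    def __init__(self, dim: int, num_heads: int = 8, order: int = 3):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Learnable filter coefficients w_0, w_1, w_K (one per head); initialized
        # so the layer starts as ordinary softmax attention.
        self.w0 = nn.Parameter(torch.zeros(num_heads))
        self.w1 = nn.Parameter(torch.ones(num_heads))
        self.wK = nn.Parameter(torch.zeros(num_heads))
        self.order = order

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, N, D = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)  # each: (B, heads, N, head_dim)
        attn = F.softmax(q @ k.transpose(-2, -1) / self.head_dim ** 0.5, dim=-1)
        # Cheap surrogate for the K-th power: A^K ≈ A + (K-1)(A^2 - A),
        # so only one extra matrix product is added per layer.
        attn2 = attn @ attn
        attn_K = attn + (self.order - 1) * (attn2 - attn)
        w0 = self.w0.view(1, -1, 1, 1)
        w1 = self.w1.view(1, -1, 1, 1)
        wK = self.wK.view(1, -1, 1, 1)
        out = w0 * v + w1 * (attn @ v) + wK * (attn_K @ v)
        return self.proj(out.transpose(1, 2).reshape(B, N, D))

Under these assumptions, the identity term (w0) preserves each token's own signal and the higher-order term (wK) mixes information over longer graph paths, which is one way a richer graph filter can counteract the oversmoothing that plain softmax attention exhibits in deep stacks.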
