Search All 2024 Events

156 Results

Poster · Wed 16:30 · Graph Convolutions Enrich the Self-Attention in Transformers!
Jeongwhan Choi · Hyowon Wi · Jayoung Kim · Yehjin Shin · Kookjin Lee · Nathaniel Trask · Noseong Park
Poster · Wed 16:30 · Unveiling the Hidden Structure of Self-Attention via Kernel Principal Component Analysis
Rachel S.Y. Teo · Tan Nguyen
Poster · Thu 16:30 · Train-Attention: Meta-Learning Where to Focus in Continual Knowledge Learning
Seo Yeongbin · Dongha Lee · Jinyoung Yeo
Poster · Fri 11:00 · Perceiving Longer Sequences With Bi-Directional Cross-Attention Transformers
Markus Hiller · Krista A. Ehinger · Tom Drummond
Poster · Wed 16:30 · Masked Hard-Attention Transformers Recognize Exactly the Star-Free Languages
Andy Yang · David Chiang · Dana Angluin
Poster · Fri 11:00 · StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation
Yupeng Zhou · Daquan Zhou · Ming-Ming Cheng · Jiashi Feng · Qibin Hou
Poster · Fri 16:30 · Aligner-Encoders: Self-Attention Transformers Can Be Self-Transducers
Adam Stooke · Rohit Prabhavalkar · Khe Sim · Pedro Moreno Mengibar
Poster · Thu 16:30 · Are Self-Attentions Effective for Time Series Forecasting?
Dongbin Kim · Jinseong Park · Jaewook Lee · Hoki Kim
Affinity Event · Automated Localization & Segmentation of Brain Tumors Using Attention-Enhanced U-Net
Somayeh Davar · Thomas Fevens
Poster · Fri 11:00 · Activating Self-Attention for Multi-Scene Absolute Pose Regression
Miso Lee · Jihwan Kim · Jae-Pil Heo
Poster · Fri 16:30 · NoMAD-Attention: Efficient LLM Inference on CPUs Through Multiply-add-free Attention
Tianyi Zhang · Jonah Yi · Bowen Yao · Zhaozhuo Xu · Anshumali Shrivastava
Poster · Wed 16:30 · AsCAN: Asymmetric Convolution-Attention Networks for Efficient Recognition and Generation
Anil Kag · n n · Jierun Chen · Junli Cao · Willi Menapace · Aliaksandr Siarohin · Sergey Tulyakov · Jian Ren