Poster | Tue 8:45
Primal-Attention: Self-attention through Asymmetric Kernel SVD in Primal Representation
Yingyi Chen · Qinghua Tao · Francesco Tonin · Johan Suykens

Affinity Workshop | Mon 13:30
Enhanced Text Extraction using Multi-modal Attention Mechanism for Text Visual Question Answering System
Esther Oduntan

Poster | Wed 8:45
Hierarchically Gated Recurrent Neural Network for Sequence Modeling
Zhen Qin · Songlin Yang · Yiran Zhong

Workshop
Recursive Joint Cross-Attention for Audio-Visual Speaker Verification
Gnana Praveen Rajasekhar · Jahangir Alam

Poster | Thu 15:00
Quantizable Transformers: Removing Outliers by Helping Attention Heads Do Nothing
Yelysei Bondarenko · Markus Nagel · Tijmen Blankevoort

Poster | Wed 8:45
Causal Interpretation of Self-Attention in Pre-Trained Transformers
Raanan Rohekar · Yaniv Gurwicz · Shami Nisimov

Poster | Wed 15:00
On Separate Normalization in Self-supervised Transformers
Xiaohui Chen · Yinkai Wang · Yuanqi Du · Soha Hassoun · Liping Liu

Poster | Wed 15:00
Max-Margin Token Selection in Attention Mechanism
Davoud Ataee Tarzanagh · Yingcong Li · Xuechen Zhang · Samet Oymak

Poster | Tue 15:15
Geometric Transformer with Interatomic Positional Encoding
Yusong Wang · Shaoning Li · Tong Wang · Bin Shao · Nanning Zheng · Tie-Yan Liu

Poster | Wed 8:45
Attentive Transfer Entropy to Exploit Transient Emergence of Coupling Effect
Xiaolei Ru · Xinya Zhang · Zijia Liu · Jack Murdoch Moore · Gang Yan

Poster | Wed 8:45
Blockwise Parallel Transformers for Large Context Models
Hao Liu · Pieter Abbeel

Poster | Tue 8:45
Training-free Diffusion Model Adaptation for Variable-Sized Text-to-Image Synthesis
Zhiyu Jin · Xuli Shen · Bin Li · Xiangyang Xue