Poster | Wed 14:00 | FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness | Tri Dao · Dan Fu · Stefano Ermon · Atri Rudra · Christopher Ré
Poster | Tue 14:00 | Transformers meet Stochastic Block Models: Attention with Data-Adaptive Sparsity and Cost | Sungjun Cho · Seonwoo Min · Jinwoo Kim · Moontae Lee · Honglak Lee · Seunghoon Hong
Poster | Thu 9:00 | Theoretically Provable Spiking Neural Networks | Shao-Qun Zhang · Zhi-Hua Zhou
Poster | Tue 9:00 | VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training | Zhan Tong · Yibing Song · Jue Wang · Limin Wang
Poster | Tue 9:00 | Adapting Self-Supervised Vision Transformers by Probing Attention-Conditioned Masking Consistency | Viraj Prabhu · Sriram Yenamandra · Aaditya Singh · Judy Hoffman
Poster | | Efficient Multi-agent Communication via Self-supervised Information Aggregation | Cong Guan · Feng Chen · Lei Yuan · Chenghe Wang · Hao Yin · Zongzhang Zhang · Yang Yu
Poster | Thu 9:00 | Rethinking and Scaling Up Graph Contrastive Learning: An Extremely Efficient Approach with Group Discrimination | YIZHEN ZHENG · Shirui Pan · Vincent CS Lee · Yu Zheng · Philip S Yu
Workshop | | Bounded logit attention: Learning to explain image classifiers | Thomas Baumhauer · Djordje Slijepcevic · Matthias Zeppelzauer
Poster | Wed 14:00 | Losses Can Be Blessings: Routing Self-Supervised Speech Representations Towards Efficient Multilingual and Multitask Speech Processing | Yonggan Fu · Yang Zhang · Kaizhi Qian · Zhifan Ye · Zhongzhi Yu · Cheng-I Jeff Lai · Celine Lin