Type | Time | Title | Authors
Poster | | ResT V2: Simpler, Faster and Stronger | Qinglong Zhang · Yu-Bin Yang
Poster | Wed 14:00 | HUMUS-Net: Hybrid Unrolled Multi-scale Network Architecture for Accelerated MRI Reconstruction | Zalan Fabian · Berk Tinaz · Mahdi Soltanolkotabi
Poster | Wed 9:00 | Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning | Dongze Lian · Daquan Zhou · Jiashi Feng · Xinchao Wang
Workshop | | Cortical Transformers: Robustness and Model Compression with Multi-Scale Connectivity Properties of the Neocortex | Brian Robinson · Nathan Drenkow
Workshop | | ViT-DD: Multi-Task Vision Transformer for Semi-Supervised Driver Distraction Detection | Yunsheng Ma · Ziran Wang
Affinity Workshop | | Set2Set Transformer: Towards End-to-End 3D Object Detection from Point Clouds | Yeabsira Tessema · Abel Mekonnen · Michael Desta · Selameab Demilew
Affinity Workshop | | DynamicViT: Faster Vision Transformer | Amanuel Mersha · Samuel Assefa
Poster | | VTC-LFC: Vision Transformer Compression with Low-Frequency Components | Zhenyu Wang · Hao Luo · Pichao WANG · Feng Ding · Fan Wang · Hao Li
Workshop | | Training a Vision Transformer from scratch in less than 24 hours with 1 GPU | Saghar Irandoust · Thibaut Durand · Yunduz Rakhmangulova · Wenjie Zi · Hossein Hajimirsadeghi
Workshop | Fri 1:40 | DynamicViT: Making Vision Transformer faster through layer skipping | Amanuel Mersha · Samuel Assefa
Poster | Tue 9:00 | Scalable and Efficient Training of Large Convolutional Neural Networks with Differential Privacy | Zhiqi Bu · Jialin Mao · Shiyun Xu
Poster | Wed 9:00 | Understanding and Improving Robustness of Vision Transformers through Patch-based Negative Augmentation | Yao Qin · Chiyuan Zhang · Ting Chen · Balaji Lakshminarayanan · Alex Beutel · Xuezhi Wang