All times are shown in the America/Los_Angeles timezone.

SAT 29 NOV
1:00 PM - 4:00 PM

SUN 30 NOV
6:00 AM - 4:00 PM
7:00 AM - 7:30 AM  Break
10:30 AM - 11:30 AM  Break
2:30 PM - 3:30 PM  Break

MON 1 DEC
5:00 AM - 8:00 AM  Breakfast
5:30 AM - 3:00 PM
6:30 AM - 1:30 PM  Workshop
8:00 AM - 8:45 AM  Break
10:00 AM - 11:00 AM  Break
1:00 PM - 1:45 PM  Break

TUE 2 DEC
5:00 AM - 8:00 AM  Breakfast
9:00 AM - 3:00 PM
9:00 AM - 10:00 AM  Break
12:00 PM - 1:00 PM  Break
4:00 PM - 6:00 PM  Reception

WED 3 DEC
5:00 AM - 8:00 AM  Breakfast
8:00 AM - 4:00 PM
8:00 AM - 8:45 AM  Break
8:30 AM - 9:30 AM  Mexico City Invited Talk: Rich Sutton
9:30 AM - 10:30 AM  Break
10:00 AM - 11:00 AM  Orals:
[10:00] Understanding and Mitigating Numerical Sources of Nondeterminism in LLM Inference
[10:20] Artificial Hivemind: The Open-Ended Homogeneity of Language Models (and Beyond)
[10:40] SAVVY: Spatial Awareness via Audio-Visual LLMs through Seeing and Hearing
10:00 AM - 11:00 AM  Orals:
[10:00] Representation Entanglement for Generation: Training Diffusion Transformers Is Much Easier Than You Think
[10:20] On the Closed-Form of Flow Matching: Generalization Does Not Arise from Target Stochasticity
[10:40] Why Diffusion Models Don’t Memorize: The Role of Implicit Dynamical Regularization in Training
10:00 AM - 11:00 AM  Orals:
[10:00] Optimal Mistake Bounds for Transductive Online Learning
[10:20] High-Dimensional Calibration from Swap Regret
[10:40] Does Stochastic Gradient really succeed for bandits?
10:00 AM - 11:00 AM  Orals:
[10:00] Dynam3D: Dynamic Layered 3D Tokens Empower VLM for Vision-and-Language Navigation
[10:20] Perception Encoder: The best visual embeddings are not at the output of the network
[10:40] Interactive Cross-modal Learning for Text-3D Scene Retrieval
11:00 AM - 2:00 PM  Posters
2:00 PM - 2:45 PM  Break
2:00 PM - 2:30 PM  Test of Time
2:30 PM - 3:30 PM  Mexico City Invited Talk: Zeynep Tufekci
3:30 PM - 4:30 PM  Orals:
[3:30] Envisioning Beyond the Pixels: Benchmarking Reasoning-Informed Visual Editing
[3:50] CoralVQA: A Large-Scale Visual Question Answering Dataset for Coral Reef Image Understanding
[4:10] OpenHOI: Open-World Hand-Object Interaction Synthesis with Multimodal Large Language Model
3:30 PM - 4:30 PM  Orals:
[3:30] A multiscale analysis of mean-field transformers in the moderate interaction regime
[3:50] The emergence of sparse attention: impact of data distribution and benefits of repetition
[4:10] From Condensation to Rank Collapse: A Two-Stage Analysis of Transformer Training Dynamics
3:30 PM - 4:30 PM  Orals:
[3:30] PRIMT: Preference-based Reinforcement Learning with Multimodal Feedback and Trajectory Synthesis from Foundation Models
[3:50] Adaptive Surrogate Gradients for Sequential Reinforcement Learning in Spiking Neural Networks
[4:10] SAGE: A Unified Framework for Generalizable Object State Recognition with State-Action Graph Embedding
3:30 PM - 4:30 PM  Orals:
[3:30] Agnostic Active Learning Is Always Better Than Passive Learning
[3:50] Dynamical Decoupling of Generalization and Overfitting in Large Two-Layer Networks
[4:10] Tighter CMI-Based Generalization Bounds via Stochastic Projection and Quantization
4:30 PM - 7:30 PM  Posters

THU 4 DEC
5:00 AM - 8:00 AM  Breakfast
8:00 AM - 4:00 PM
8:00 AM - 8:45 AM  Break
8:30 AM - 9:30 AM  Mexico City Invited Talk: Yejin Choi
9:30 AM - 10:30 AM  Break
10:00 AM - 11:00 AM  Orals:
[10:00] State Entropy Regularization for Robust Reinforcement Learning
[10:20] A Clean Slate for Offline Reinforcement Learning
[10:40] Breaking the Performance Ceiling in Reinforcement Learning requires Inference Strategies
10:00 AM - 11:00 AM  Orals:
[10:00] Auto-Compressing Networks
[10:20] Dynamical Low-Rank Compression of Neural Networks with Robustness under Adversarial Attacks
[10:40] ImageNet-trained CNNs are not biased towards texture: Revisiting feature reliance through controlled suppression
10:00 AM - 11:00 AM  Orals:
[10:00] Position: If Innovation in AI systematically Violates Fundamental Rights, Is It Innovation at All?
[10:20] More effort is needed to protect pedestrian privacy in the era of AI
[10:40] Real-Time Hyper-Personalized Generative AI Should Be Regulated to Prevent the Rise of "Digital Heroin"
10:00 AM - 11:00 AM  Orals:
[10:00] ControlFusion: A Controllable Image Fusion Network with Language-Vision Degradation Prompts
[10:20] Pan-LUT: Efficient Pan-sharpening via Learnable Look-Up Tables
[10:40] FuXi-Ocean: A Global Ocean Forecasting System with Sub-Daily Resolution
11:00 AM - 2:00 PM  Posters
2:00 PM - 2:45 PM  Break
3:30 PM - 4:30 PM  Orals:
[3:30] A is for Absorption: Studying Feature Splitting and Absorption in Sparse Autoencoders
[3:50] Gated Attention for Large Language Models: Non-linearity, Sparsity, and Attention-Sink-Free
[4:10] Superposition Yields Robust Neural Scaling
3:30 PM - 4:30 PM  Orals:
[3:30] Exploring Diffusion Transformer Designs via Grafting
[3:50] Deep Compositional Phase Diffusion for Long Motion Sequence Generation
[4:10] Mean Flows for One-step Generative Modeling
3:30 PM - 4:30 PM  Orals:
[3:30] In Search of Adam’s Secret Sauce
[3:50] Analog In-memory Training on General Non-ideal Resistive Elements: The Impact of Response Functions
[4:10] Generalized Gradient Norm Clipping & Non-Euclidean $(L_0,L_1)$-Smoothness
4:30 PM - 7:30 PM  Posters

FRI 5 DEC
5:00 AM - 8:00 AM  Breakfast
8:00 AM - 12:00 PM
8:00 AM - 8:45 AM  Break
8:30 AM - 9:30 AM  Mexico City Invited Talk: Kyunghyun Cho
9:30 AM - 10:30 AM  Break
10:00 AM - 11:00 AM  Orals:
[10:00] EvoLM: In Search of Lost Language Model Training Dynamics
[10:20] Large Language Diffusion Models
[10:40] Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model?
10:00 AM - 11:00 AM  Orals:
[10:00] Boosting Knowledge Utilization in Multimodal Large Language Models via Adaptive Logits Fusion and Attention Reallocation
[10:20] HyperET: Efficient Training in Hyperbolic Space for Multi-modal Large Language Models
[10:40] Rethinking Multimodal Learning from the Perspective of Mitigating Classification Ability Disproportion
11:00 AM - 2:00 PM  Posters
2:00 PM - 2:30 PM  Award
2:00 PM - 2:45 PM  Break
2:30 PM - 3:30 PM  Mexico City Invited Talk: Andrew Saxe
3:30 PM - 4:30 PM  Orals:
[3:30] A Snapshot of Influence: A Local Data Attribution Framework for Online Reinforcement Learning
[3:50] 1000 Layer Networks for Self-Supervised RL: Scaling Depth Can Enable New Goal-Reaching Capabilities
[4:10] Learning long range dependencies through time reversal symmetry breaking
3:30 PM - 4:30 PM  Orals:
[3:30] KVzip: Query-Agnostic KV Cache Compression with Context Reconstruction
[3:50] MokA: Multimodal Low-Rank Adaptation for MLLMs
[4:10] ElasticMM: Efficient Multimodal LLMs Serving with Elastic Multimodal Parallelism
4:30 PM - 7:30 PM  Posters