Poster | Tue 9:00 | What Makes a "Good" Data Augmentation in Knowledge Distillation - A Statistical Perspective
Huan Wang · Suhas Lohit · Michael Jones · Yun Fu

Workshop | Reducing Forgetting in Federated Learning with Truncated Cross-Entropy
Gwen Legate · Lucas Page-Caccia · Eugene Belilovsky

Workshop | On the Implicit Geometry of Cross-Entropy Parameterizations for Label-Imbalanced Data
Tina Behnia · Ganesh Ramachandra Kini · Vala Vakilian · Christos Thrampoulidis

Poster | Thu 14:00 | A Simple Decentralized Cross-Entropy Method
Zichen Zhang · Jun Jin · Martin Jagersand · Jun Luo · Dale Schuurmans

Workshop | Constructing Memory: Consolidation as Teacher-Student Training of a Generative Model
Eleanor Spens · Neil Burgess

Poster | Thu 9:00 | An Analytical Theory of Curriculum Learning in Teacher-Student Networks
Luca Saglietti · Stefano Mannelli · Andrew Saxe

Poster | Tue 9:00 | Knowledge Distillation: Bad Models Can Be Good Role Models
Gal Kaplun · Eran Malach · Preetum Nakkiran · Shai Shalev-Shwartz

Poster | Thu 14:00 | Training Spiking Neural Networks with Local Tandem Learning
Qu Yang · Jibin Wu · Malu Zhang · Yansong Chua · Xinchao Wang · Haizhou Li

Workshop | RoTaR: Efficient Row-Based Table Representation Learning via Teacher-Student Training (Short Paper)
Zui Chen · Lei Cao · Samuel Madden

Workshop | Honest Students from Untrusted Teachers: Learning an Interpretable Question-Answering Pipeline from a Pretrained Language Model
Jacob Eisenstein · Daniel Andor · Bernd Bohnet · Michael Collins · David Mimno

Poster | Tue 9:00 | Efficient Risk-Averse Reinforcement Learning
Ido Greenberg · Yinlam Chow · Mohammad Ghavamzadeh · Shie Mannor