Workshop · Fri 12:15
Reusing pretrained models for learning and unlearning tasks

Workshop
Backtracking Mathematical Reasoning of Language Models to the Pretraining Data
Yasaman Razeghi · Hamish Ivison · Sameer Singh · Yanai Elazar

Workshop
TAIL: Task-specific Adapters for Imitation Learning with Large Pretrained Models
Zuxin Liu · Jesse Zhang · Kavosh Asadi · Yao Liu · Ding Zhao · Shoham Sabach · Rasool Fakoor

Workshop
Detecting Pretraining Data from Large Language Models
Weijia Shi · Anirudh Ajith · Mengzhou Xia · Yangsibo Huang · Daogao Liu · Terra Blevins · Danqi Chen · Luke Zettlemoyer

Workshop
Irreducible Curriculum for Language Model Pretraining
Simin Fan · Martin Jaggi

Workshop
An Emulator for Fine-tuning Large Language Models using Small Language Models
Eric Mitchell · Rafael Rafailov · Archit Sharma · Chelsea Finn · Christopher D Manning

Workshop
ExPT: Scaling Foundation Models for Experimental Design via Synthetic Pretraining
Tung Nguyen · Sudhanshu Agrawal · Aditya Grover

Workshop
Can Transformer Models Generalize Via In-Context Learning Beyond Pretraining Data?
Steve Yadlowsky · Lyric Doshi · Nilesh Tripuraneni

Workshop
DOGE: Domain Reweighting with Generalization Estimation
Simin Fan · Matteo Pagliardini · Martin Jaggi