21 Results
Type | Time | Title | Authors
Workshop | | QDyLoRA: Quantized Dynamic Low-Rank Adaptation for Efficient Large Language Model Tuning | Hossein Rajabzadeh · Mojtaba Valipour · Marzieh Tahaei · Hyock Ju Kwon · Ali Ghodsi · Boxing Chen · Mehdi Rezagholizadeh
Workshop | | Bayesian low-rank adaptation for large language models | Adam Yang · Maxime Robeyns · Xi Wang · Laurence Aitchison
Workshop | Sat 7:54 | [Paper-Oral 5] Ensemble of low-rank adapters for large language model fine-tuning | Xi Wang · Laurence Aitchison · Maja Rudolph
Affinity Workshop | Mon 8:30 | An (unhelpful) guide to selecting the right ASR architecture for your under-resourced language | Robbie Jimerson
Workshop | Sat 9:30 | [Paper-Oral 6] LoDA: Low-Dimensional Adaptation of Large Language Models | Jing Liu · Toshiaki Koike-Akino · Perry Wang · Matthew Brand · Ye Wang · Kieran Parsons
Workshop | | Use Your INSTINCT: INSTruction optimization usIng Neural bandits Coupled with Transformers | Xiaoqiang Lin · Zhaoxuan Wu · Zhongxiang Dai · Wenyang Hu · Yao Shu · See-Kiong Ng · Patrick Jaillet · Bryan Kian Hsiang Low
Poster | Thu 15:00 | Stable and low-precision training for large-scale vision-language models | Mitchell Wortsman · Tim Dettmers · Luke Zettlemoyer · Ari Morcos · Ali Farhadi · Ludwig Schmidt
Workshop | | GeMQuAD: Generating Multilingual Question Answering Datasets from Large Language Models using Few Shot Learning | Amani Namboori · Shivam Mangale · Andy Rosenbaum · Saleh Soltan
Poster | Tue 8:45 | How Far Can Camels Go? Exploring the State of Instruction Tuning on Open Resources | Yizhong Wang · Hamish Ivison · Pradeep Dasigi · Jack Hessel · Tushar Khot · Khyathi Chandu · David Wadden · Kelsey MacMillan · Noah Smith · Iz Beltagy · Hannaneh Hajishirzi