Workshop
Addax: Resource-Efficient Fine-Tuning of Language Models with a Combination of Forward-Backward and Forward-Only Passes
Zeman Li · Xinwei Zhang · Peilin Zhong · Yuan Deng · Vahab Mirrokni · Meisam Razaviyayn

Workshop
TOU: Truncated-factorized reduction for an efficient-parameter model fine-tuning
Phuong Thi-Mai Nguyen · Minh-Son Dao · Koji Zettsu

Workshop
Accelerating Memory-Efficient LLM Training and Fine-Tuning via Tracking the Gradient Subspace
Sahar Rajabi · Sirisha Rambhatla

Workshop
Parameter-Efficient Fine-Tuning of State Space Models
Kevin Galim · Wonjun Kang · Yuchen Zeng · HYUNG IL KOO · Kangwook Lee

Workshop
Sat 10:42
Parameter-Efficient Fine-Tuning of State Space Models
Kevin Galim · Jungtaek Kim · Wonjun Kang · Yuchen Zeng · HYUNG IL KOO · Kangwook Lee

Workshop
RGP: Achieving Memory-Efficient Model Fine-tuning Via Randomized Gradient Projection
Ali Saheb Pasand · Pouya Bashivan

Workshop
Memory-Efficient Large Language Model (LLM) Training and Fine-Tuning via Gradient Subspace Tracking
Sahar Rajabi · Sirisha Rambhatla

Poster
Fri 11:00
PACE: Marrying generalization in PArameter-efficient fine-tuning with Consistency rEgularization
Yao Ni · Shan Zhang · Piotr Koniusz

Workshop
Navigating Parameter Space with Geodesic Interpolation: A New Approach to Efficient Fine-Tuning
Sophia Abraham

Oral
Thu 10:40
HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning
Chunlin Tian · Zhan Shi · Zhijiang Guo · Li Li · Cheng-Zhong Xu

Poster
Thu 11:00
HydraLoRA: An Asymmetric LoRA Architecture for Efficient Fine-Tuning
Chunlin Tian · Zhan Shi · Zhijiang Guo · Li Li · Cheng-Zhong Xu