

Enabling Resource-Efficient On-Device Fine-Tuning of LLMs Using Only Inference Engines

Lei Gao ⋅ Amir Ziashahabi ⋅ Yue Niu ⋅ Salman Avestimehr ⋅ Murali Annavaram
Keywords: Efficient Training

Abstract
