Invited Talk
in
Competition: Edge-LLMs: Edge-Device Large Language Model Competition
Invited Speaker: Tianqi Chen
In this talk, we will discuss the lessons learned in building an efficient large language model deployment system for both server and edge settings. We will cover general techniques in machine learning compilation and system support for efficient structured generation. We will also discuss future opportunities in system co-design for cloud-edge model deployments.
Bio: Tianqi Chen is currently an Assistant Professor in the Machine Learning Department and the Computer Science Department at Carnegie Mellon University. He is also a distinguished engineer at NVIDIA. He received his PhD from the Paul G. Allen School of Computer Science & Engineering at the University of Washington. He has created several widely adopted machine learning systems, including XGBoost, Apache TVM, and MLC-LLM.