Petals: Collaborative Inference and Fine-tuning of Large Models
Alexander Borzunov · Dmitry Baranchuk · Tim Dettmers · Max Ryabinin · Younes Belkada · Artem Chumachenko · Pavel Samygin · Colin Raffel
Event URL: https://openreview.net/forum?id=Ls_NTjgWXZV
Many NLP tasks benefit from using large language models (LLMs) that often have more than 100 billion parameters. With the release of BLOOM-176B and OPT-175B, everyone can download pretrained models of this scale. Still, using these models requires high-end hardware unavailable to many researchers. In some cases, LLMs can be used more affordably via RAM offloading or hosted APIs. However, these techniques have innate limitations: offloading is too slow for interactive inference, while APIs are not flexible enough for research. In this work, we propose Petals, a system for collaborative inference and fine-tuning of large models that pools the resources of multiple parties. We demonstrate that this strategy significantly outperforms offloading for very large models, running inference of BLOOM-176B on consumer GPUs at $\approx$ 1 step per second. Unlike most inference APIs, Petals also natively exposes the hidden states of served models, allowing its users to train and share custom model extensions based on efficient fine-tuning methods.
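To make the offloading comparison concrete: streaming the $\approx$ 352 GB of BLOOM-176B weights in 16-bit precision over a $\approx$ 16 GB/s PCIe link would take on the order of 20 seconds per forward pass, which is why Petals instead keeps the weights resident on remote GPUs and only exchanges small activation tensors. The snippet below is a minimal sketch of generating text through a public Petals swarm, assuming the client API shown in the project's repository (https://github.com/bigscience-workshop/petals); the exact class and checkpoint names are assumptions and may differ between releases.

from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

# Assumed checkpoint name; any model currently served by the swarm works.
model_name = "bigscience/bloom"

# Only the embeddings and a thin client run locally; the transformer blocks are
# executed by volunteer servers, each holding a slice of the model.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

# Each generated token corresponds to one pass of activations through the chain
# of servers (roughly one step per second for BLOOM-176B, per the abstract).
inputs = tokenizer("A cat in French is", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=5)
print(tokenizer.decode(outputs[0]))

Fine-tuning works through the same interface: because Petals exposes the hidden states of the served blocks, a client can attach trainable parameters (for example, prompts or adapters) locally and backpropagate through the remote model using efficient fine-tuning methods.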
Author Information
Alexander Borzunov (HSE University, Yandex)
Dmitry Baranchuk (MSU / Yandex)
Tim Dettmers (University of Washington)
Max Ryabinin (Yandex, HSE University)
Younes Belkada (École Normale Supérieure)
Artem Chumachenko (MynaLabs)
Pavel Samygin (Moscow Institute of Physics and Technology)
Colin Raffel (UNC Chapel Hill and Hugging Face)
Related Events (a corresponding poster, oral, or spotlight)
- 2022: Petals: Collaborative Inference and Fine-tuning of Large Models
  Sat, Dec 3rd, 05:15 -- 06:15 PM
More from the Same Authors
- 2021: The impact of domain shift on the calibration of fine-tuned models
  Jay Mohta · Colin Raffel
- 2022: Models with Conditional Computation Learn Suboptimal Solutions
  Mohammed Muqeeth · Haokun Liu · Colin Raffel
- 2023 Poster: Resolving Interference When Merging Models
  Prateek Yadav · Derek Tam · Leshem Choshen · Colin Raffel · Mohit Bansal
- 2023 Poster: Stable and low-precision training for large-scale vision-language models
  Mitchell Wortsman · Tim Dettmers · Luke Zettlemoyer · Ari Morcos · Ali Farhadi · Ludwig Schmidt
- 2023 Poster: Scaling Data-Constrained Language Models
  Niklas Muennighoff · Alexander Rush · Boaz Barak · Teven Le Scao · Nouamane Tazi · Aleksandra Piktus · Thomas Wolf · Colin Raffel · Sampo Pyysalo
- 2023 Poster: Is This Loss Informative? Faster Text-to-Image Customization by Tracking Objective Dynamics
  Anton Voronov · Mikhail Khoroshikh · Artem Babenko · Max Ryabinin
- 2023 Poster: Distributed Inference and Fine-tuning of Large Language Models Over The Internet
  Alexander Borzunov · Dmitry Baranchuk · Tim Dettmers · Max Ryabinin · Younes Belkada · Artem Chumachenko · Pavel Samygin · Colin Raffel
- 2023 Poster: QLoRA: Efficient Finetuning of Quantized LLMs
  Tim Dettmers · Artidoro Pagnoni · Ari Holtzman · Luke Zettlemoyer
- 2023 Poster: Improving Few-Shot Generalization by Exploring and Exploiting Auxiliary Data
  Alon Albalak · Colin Raffel · William Yang Wang
- 2023 Oral: Scaling Data-Constrained Language Models
  Niklas Muennighoff · Alexander Rush · Boaz Barak · Teven Le Scao · Nouamane Tazi · Aleksandra Piktus · Thomas Wolf · Colin Raffel · Sampo Pyysalo
- 2023 Oral: QLoRA: Efficient Finetuning of Quantized LLMs
  Tim Dettmers · Artidoro Pagnoni · Ari Holtzman · Luke Zettlemoyer
- 2023: Interactive Panel Discussion
  Tanya Roosta · Tim Dettmers · Minjia Zhang · Nazneen Rajani
- 2022 Spotlight: Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees
  Aleksandr Beznosikov · Peter Richtarik · Michael Diskin · Max Ryabinin · Alexander Gasnikov
- 2022 Workshop: Transfer Learning for Natural Language Processing
  Alon Albalak · Colin Raffel · Chunting Zhou · Deepak Ramachandran · Xuezhe Ma · Sebastian Ruder
- 2022: 8-bit Methods for Efficient Deep Learning
  Tim Dettmers
- 2022 Poster: GPT3.int8(): 8-bit Matrix Multiplication for Transformers at Scale
  Tim Dettmers · Mike Lewis · Younes Belkada · Luke Zettlemoyer
- 2022 Poster: Compositional Generalization in Unsupervised Compositional Representation Learning: A Study on Disentanglement and Emergent Language
  Zhenlin Xu · Marc Niethammer · Colin Raffel
- 2022 Poster: A Combinatorial Perspective on the Optimization of Shallow ReLU Networks
  Michael S Matena · Colin Raffel
- 2022 Poster: Merging Models with Fisher-Weighted Averaging
  Michael S Matena · Colin Raffel
- 2022 Poster: Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning
  Haokun Liu · Derek Tam · Mohammed Muqeeth · Jay Mohta · Tenghao Huang · Mohit Bansal · Colin Raffel
- 2022 Poster: Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees
  Aleksandr Beznosikov · Peter Richtarik · Michael Diskin · Max Ryabinin · Alexander Gasnikov
- 2021 Poster: Distributed Deep Learning In Open Collaborations
  Michael Diskin · Alexey Bukhtiyarov · Max Ryabinin · Lucile Saulnier · quentin lhoest · Anton Sinitsin · Dmitry Popov · Dmitry V. Pyrkin · Maxim Kashirin · Alexander Borzunov · Albert Villanova del Moral · Denis Mazur · Ilia Kobelev · Yacine Jernite · Thomas Wolf · Gennady Pekhimenko
- 2021 Poster: Moshpit SGD: Communication-Efficient Decentralized Training on Heterogeneous Unreliable Devices
  Max Ryabinin · Eduard Gorbunov · Vsevolod Plokhotnyuk · Gennady Pekhimenko
- 2021 Poster: Scaling Ensemble Distribution Distillation to Many Classes with Proxy Targets
  Max Ryabinin · Andrey Malinin · Mark Gales
- 2021: Billion-Scale Approximate Nearest Neighbor Search Challenge + Q&A
  Harsha Vardhan Simhadri · George Williams · Martin Aumüller · Artem Babenko · Dmitry Baranchuk · Qi Chen · Matthijs Douze · Ravishankar Krishnawamy · Gopal Srinivasa · Suhas Jayaram Subramanya · Jingdong Wang
- 2021: Training Transformers Together
  Alexander Borzunov · Max Ryabinin · Tim Dettmers · quentin lhoest · Lucile Saulnier · Michael Diskin · Yacine Jernite · Thomas Wolf
- 2021 Poster: Training Neural Networks with Fixed Sparse Masks
  Yi-Lin Sung · Varun Nair · Colin Raffel
- 2020: Responsible publication: NLP case study
  Miles Brundage · Bryan McCann · Colin Raffel · Natalie Schulter · Zeerak Waseem · Rosie Campbell
- 2020 Poster: Towards Crowdsourced Training of Large Neural Networks using Decentralized Mixture-of-Experts
  Max Ryabinin · Anton Gusev