OLMoE: Open Mixture-of-Experts Language Models
Niklas Muennighoff ⋅ Luca Soldaini ⋅ Dirk Groeneveld ⋅ Kyle Lo ⋅ Jacob Morrison ⋅ Sewon Min ⋅ Weijia Shi ⋅ Evan Walsh ⋅ Oyvind Tafjord ⋅ Nathan Lambert ⋅ Yuling Gu ⋅ Shane Arora ⋅ Akshita Bhagia ⋅ Dustin Schwenk ⋅ David Wadden ⋅ Alexander Wettig ⋅ Binyuan Hui ⋅ Tim Dettmers ⋅ Douwe Kiela ⋅ Noah Smith ⋅ Pang Wei Koh ⋅ Amanpreet Singh ⋅ Hannaneh Hajishirzi
Keywords:
Efficient Training
2024 Oral
in
Workshop: The Fourth Workshop on Efficient Natural Language and Speech Processing (ENLSP-IV): Highlighting New Architectures for Future Foundation Models
Abstract
We introduce OLMoE, a fully open, state-of-the-art language model leveraging sparse Mixture-of-Experts (MoE). OLMoE-1B-7B has 7 billion (B) parameters but uses only 1B per input token. We pretrain it on 5 trillion tokens and further adapt it to create OLMoE-1B-7B-Instruct. Our models outperform all available models with similar active parameters, even surpassing larger ones like Llama2-13B-Chat and DeepSeekMoE-16B. We present novel findings on MoE training, define and analyze new routing properties showing high specialization in our model, and open-source all our work: model weights, training data, code, and logs.
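The "7B total, 1B active" design described above is the defining property of sparse MoE layers: a learned router selects a small top-k subset of experts for each token, so only a fraction of the parameters participate in any one forward pass. Below is a minimal NumPy sketch of top-k expert routing; all names, shapes, and the k=2 choice are illustrative assumptions for exposition, not OLMoE's actual architecture or routing algorithm.

```python
import numpy as np

def topk_moe_layer(x, w_router, experts, k=2):
    """Route each token to its top-k experts and mix their outputs.

    x        : (n_tokens, d) token representations
    w_router : (d, n_experts) router weight matrix (hypothetical name)
    experts  : list of callables, each mapping a (d,) vector to a (d,) vector
    """
    logits = x @ w_router                        # (n_tokens, n_experts) router scores
    topk = np.argsort(logits, axis=-1)[:, -k:]   # indices of the k highest-scoring experts per token
    out = np.zeros_like(x)
    for i, token in enumerate(x):
        sel = topk[i]
        # softmax over only the selected experts' logits (stabilized by subtracting the max)
        w = np.exp(logits[i, sel] - logits[i, sel].max())
        w /= w.sum()
        # only k experts run for this token -- the rest of the parameters stay idle
        out[i] = sum(wj * experts[e](token) for wj, e in zip(w, sel))
    return out

# toy demo: 4 experts as simple linear maps, only k=2 of them run per token
rng = np.random.default_rng(0)
d, n_experts = 8, 4
mats = [rng.normal(size=(d, d)) for _ in range(n_experts)]
experts = [lambda t, m=m: m @ t for m in mats]
x = rng.normal(size=(3, d))
w_router = rng.normal(size=(d, n_experts))
y = topk_moe_layer(x, w_router, experts, k=2)
print(y.shape)
```

With 4 experts and k=2, each token touches half the expert parameters; scaling this ratio up (many experts, small k) is what lets a model hold 7B parameters while activating only about 1B per token.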