
MoE | OLMoE

  • Related Project: Private
  • Category: Paper Review
  • Date: 2024-09-03

OLMoE: Open Mixture-of-Experts Language Models

  • url: https://arxiv.org/abs/2409.02060
  • pdf: https://arxiv.org/pdf/2409.02060
  • html: https://arxiv.org/html/2409.02060v1
  • abstract: We introduce OLMoE, a fully open, state-of-the-art language model leveraging sparse Mixture-of-Experts (MoE). OLMoE-1B-7B has 7 billion (B) parameters but uses only 1B per input token. We pretrain it on 5 trillion tokens and further adapt it to create OLMoE-1B-7B-Instruct. Our models outperform all available models with similar active parameters, even surpassing larger ones like Llama2-13B-Chat and DeepSeekMoE-16B. We present various experiments on MoE training, analyze routing in our model showing high specialization, and open-source all aspects of our work: model weights, training data, code, and logs.
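The "7B total / 1B active" figure in the abstract comes from the sparse MoE design: a learned router scores the experts for each token and only the top-k experts run, so only their parameters are active in that forward pass. Below is a minimal, illustrative top-k routing sketch in PyTorch; all sizes (`d_model`, `d_ff`, `n_experts`, `top_k`) are placeholder assumptions for clarity, not the OLMoE configuration, and this is not the authors' implementation.

```python
# Minimal sketch of a sparse Mixture-of-Experts layer with top-k routing.
# Sizes are illustrative only; OLMoE's actual expert count and routing
# setup are described in the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SparseMoELayer(nn.Module):
    def __init__(self, d_model: int, d_ff: int, n_experts: int, top_k: int):
        super().__init__()
        self.top_k = top_k
        # Router: scores every expert for each token.
        self.router = nn.Linear(d_model, n_experts, bias=False)
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (n_tokens, d_model)
        logits = self.router(x)                              # (n_tokens, n_experts)
        weights, indices = logits.topk(self.top_k, dim=-1)   # keep only the top-k experts per token
        weights = F.softmax(weights, dim=-1)                 # normalize over the selected experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, slot] == e                 # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out


# Usage: only top_k of n_experts run per token, so the active parameter count
# per token stays far below the total parameter count (the 1B-active vs
# 7B-total idea in the abstract).
layer = SparseMoELayer(d_model=512, d_ff=1024, n_experts=8, top_k=2)
tokens = torch.randn(4, 512)
print(layer(tokens).shape)  # torch.Size([4, 512])
```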

