---
license: apache-2.0
language:
  - en
tags:
  - moe
  - olmo
  - olmoe
co2_eq_emissions: 1
---

OLMoE logo

Model Summary

OLMoE-1B-7B-Instruct is a Mixture-of-Experts LLM with 1B active and 7B total parameters, released in August 2024 (0824) and adapted from OLMoE-1B-7B via SFT and DPO. It yields state-of-the-art performance among models with a similar active-parameter cost (1B) and is competitive with much larger models such as Llama2-13B-Chat. OLMoE is 100% open-source.

Use

Install the transformers & torch libraries and run:

from transformers import OlmoeForCausalLM, AutoTokenizer
import torch

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# Load different checkpoints by passing e.g. `revision="step10000-tokens41B"`
model = OlmoeForCausalLM.from_pretrained("OLMoE/OLMoE-1B-7B-Instruct").to(DEVICE)
tokenizer = AutoTokenizer.from_pretrained("OLMoE/OLMoE-1B-7B-Instruct")
messages = [{"role": "user", "content": "Explain to me like I'm five what is Bitcoin."}]
# Build the chat-formatted prompt and move it to the same device as the model
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt").to(DEVICE)
out = model.generate(input_ids, max_length=64)
print(tokenizer.decode(out[0]))
# > # Bitcoin is a digital currency that is created and held electronically. No one controls it. Bitcoins aren’t printed, like dollars or euros – they’re produced by people and businesses running computers all around the world, using software that solves mathematical
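
As the comment above notes, a specific checkpoint can be selected with the `revision` argument of `from_pretrained`. A minimal sketch, assuming the named branch exists in the repository you load from (the important branches of the base model are listed further below):

from transformers import OlmoeForCausalLM, AutoTokenizer

REPO = "OLMoE/OLMoE-1B-7B-Instruct"
REVISION = "step10000-tokens41B"  # example branch name from the comment above; availability depends on the repository

# Pin both the weights and the tokenizer to the same revision of the repo
model = OlmoeForCausalLM.from_pretrained(REPO, revision=REVISION)
tokenizer = AutoTokenizer.from_pretrained(REPO, revision=REVISION)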

You can list all revisions/branches by installing huggingface-hub & running:

from huggingface_hub import list_repo_refs
out = list_repo_refs("OLMoE/OLMoE-1B-7B-0824")
branches = [b.name for b in out.branches]
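
For instance, to print just the pretraining checkpoint branches in the order they were saved, you can filter and sort the names returned above. A small sketch, assuming the branch names follow the stepXXXXXX-tokensYYYYB pattern shown below:

import re
from huggingface_hub import list_repo_refs

out = list_repo_refs("OLMoE/OLMoE-1B-7B-0824")
# Keep only checkpoint branches such as `step1200000-tokens5033B` and sort them by step count
step_branches = [b.name for b in out.branches if re.match(r"step\d+", b.name)]
step_branches.sort(key=lambda name: int(re.match(r"step(\d+)", name).group(1)))
for name in step_branches:
    print(name)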

Important branches:

  • step1200000-tokens5033B: Pretraining checkpoint used for annealing. There are a few more checkpoints after this one but we did not use them.
  • main: Checkpoint annealed from step1200000-tokens5033B for an additional 100B tokens (23,842 steps). We use this checkpoint for our adaptation (https://huggingface.co/OLMoE/OLMoE-1B-7B-0824-SFT & https://huggingface.co/OLMoE/OLMoE-1B-7B-0824-Instruct).
  • fp32: FP32 version of main. The model weights were stored in FP32 during training, but we did not observe any performance drop from casting them to BF16 after training, so we upload all weights in BF16. If you want the original FP32 checkpoint of main, you can use this branch (a loading sketch follows this list); it yields slightly different results but should perform about the same on benchmarks.
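
To load the FP32 weights of main explicitly, select the fp32 branch at load time. A minimal sketch, assuming you want to keep the checkpoint in full precision (torch_dtype only makes the intended precision explicit):

import torch
from transformers import OlmoeForCausalLM

# Load the FP32 version of `main`; omit `revision` to get the default BF16 weights instead
model = OlmoeForCausalLM.from_pretrained("OLMoE/OLMoE-1B-7B-0824", revision="fp32", torch_dtype=torch.float32)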

Citation

TODO