---
datasets:
  - emozilla/yarn-train-tokenized-16k-mistral
metrics:
  - perplexity
library_name: transformers
---

# Model Card: Nous-Yarn-Mistral-7b-128k

[Preprint (arXiv)](https://arxiv.org/abs/2309.00071)
[GitHub](https://github.com/jquesnelle/yarn)

## Model Description

Nous-Yarn-Mistral-7b-128k is a state-of-the-art language model for long context, further pretrained on long-context data for 1500 steps using the YaRN extension method. It is an extension of [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) and supports a 128k token context window.

To use, pass `trust_remote_code=True` when loading the model, for example:

```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("NousResearch/Yarn-Mistral-7b-128k",
  use_flash_attention_2=True,
  torch_dtype=torch.bfloat16,
  device_map="auto",
  trust_remote_code=True)
```
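
A minimal generation sketch, reusing the `model` loaded above, follows. It relies only on the standard `transformers` tokenizer and `generate` APIs; the prompt text and generation settings are illustrative assumptions, not part of this repository.

```python
from transformers import AutoTokenizer

# Load the matching tokenizer; `model` is the object created in the snippet above.
tokenizer = AutoTokenizer.from_pretrained("NousResearch/Yarn-Mistral-7b-128k")

# Illustrative prompt; inputs of up to 128k tokens are supported by this model.
prompt = "Summarize the following document:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate up to 256 new tokens and decode only the newly generated portion.
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```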

## Benchmarks

| Model | Context Window | ARC-c | Hellaswag | MMLU | Truthful QA |
|---|---|---|---|---|---|
| Mistral-7B-v0.1 | 8K | 59.98 | 83.31 | 64.16 | 42.15 |
| Yarn-Mistral-7b-64k | 64K | 59.38 | 81.21 | 61.32 | 42.50 |
| Yarn-Mistral-7b-128k | 128K | 58.87 | 80.58 | 60.64 | 42.46 |
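
Perplexity is listed in the metadata as the training metric. The snippet below is a hypothetical sketch of how a long-document perplexity could be computed with this model using the causal LM loss returned by `transformers`; the placeholder text and the single-pass approach are assumptions, not the evaluation setup actually used.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "NousResearch/Yarn-Mistral-7b-128k"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name,
  torch_dtype=torch.bfloat16,
  device_map="auto",
  trust_remote_code=True)
model.eval()

# Placeholder document; replace with the long text to evaluate (up to 128k tokens).
long_text = "..."
input_ids = tokenizer(long_text, return_tensors="pt").input_ids.to(model.device)

# With labels equal to the inputs, the model returns the average per-token
# negative log-likelihood; exponentiating it gives the perplexity.
with torch.no_grad():
    loss = model(input_ids, labels=input_ids).loss

print(f"Perplexity: {math.exp(loss.item()):.2f}")
```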

## Collaborators

The authors would like to thank LAION AI for their support of compute for this model. It was trained on the JUWELS supercomputer.