
Rho-1: Not All Tokens Are What You Need

The Rho-1 series are pretrained language models trained with the Selective Language Modeling (SLM) objective: rather than applying the loss to every token, SLM scores tokens against a reference model and trains only on the useful ones. In math reasoning pretraining, SLM improves average few-shot accuracy on GSM8k and MATH by over 16%, reaching baseline performance 5-10x faster.
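
As a rough illustration of the SLM idea (a minimal sketch, not the authors' implementation), the snippet below compares each token's loss against a frozen reference model and backpropagates only through the tokens with the highest excess loss. The function name `slm_loss` and the `keep_ratio` parameter are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def slm_loss(logits, ref_logits, labels, keep_ratio=0.6):
    """Cross-entropy over only the top-`keep_ratio` tokens by excess loss."""
    vocab = logits.size(-1)
    # Per-token loss for the model being trained.
    tok_loss = F.cross_entropy(logits.view(-1, vocab), labels.view(-1), reduction="none")
    with torch.no_grad():
        # Per-token loss under the frozen reference model.
        ref_loss = F.cross_entropy(ref_logits.view(-1, vocab), labels.view(-1), reduction="none")
        # Excess loss: how much worse the training model is than the reference.
        excess = tok_loss.detach() - ref_loss
        k = max(1, int(keep_ratio * excess.numel()))
        mask = torch.zeros_like(excess)
        mask[excess.topk(k).indices] = 1.0  # keep only the top-k tokens
    # Average the loss over the selected tokens only.
    return (tok_loss * mask).sum() / mask.sum()
```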

For more details, please check our GitHub and paper.
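
For quick experimentation, the checkpoint can be loaded with the Hugging Face transformers library. The repository ID below is an assumption for illustration; substitute this model card's actual ID:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/rho-math-1b-v0.1"  # assumed repo ID; replace with this card's ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
# F32 weights, matching the tensor type listed on this card.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float32)

prompt = "Question: What is 15% of 240?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```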

Model size: 1.1B params · Tensor type: F32 · Format: Safetensors