
Instruction tune of Mistral-7B-v0.1 with Open-Platypus (fp16)

Overview

This is mistralai/Mistral-7B-v0.1, with instruction tuning performed with the garage-bAInd/Open-Platypus dataset.

This is a (merged) QLoRA fine-tune (rank 64).

The fine-tune was performed on 1x RTX 6000 Ada (~9 hours).

How to Use

As of this writing, the Mistral architecture requires installing transformers from source. Once that is done, the model loads like any other.
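Below is a minimal loading sketch, assuming a source install of transformers, a CUDA GPU, and accelerate for `device_map="auto"`; adjust dtype and device placement to your hardware.

```python
# Minimal loading sketch; assumes transformers installed from source and a CUDA GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bhenrym14/mistral-7b-platypus-fp16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # weights are stored in fp16
    device_map="auto",          # requires accelerate; or move to a device manually
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```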

Benchmarks

ARC (25 shot): 62.80

Hellaswag (10 shot): 84.12

MMLU (5 shot): 64.20

Context Length - Relative Performance (wikitext perplexity)

| Context (tokens) | bhenrym14/mistral-7b-platypus-fp16 | bhenrym14/airoboros-l2-13b-2.1-YaRN-64k | bhenrym14/airophin-13b-pntk-16k-fp16 | bhenrym14/airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-fp16 | jondurbin/airoboros-l2-13b-gpt4-1.4.1 |
| --- | --- | --- | --- | --- | --- |
| 512 | 7.22 | 7.64 | 7.62 | 7.90 | 7.23 |
| 1024 | 6.04 | 6.15 | 6.20 | 6.17 | 5.85 |
| 2048 | 5.50 | 5.29 | 5.38 | 5.23 | 5.07 |
| 4096 | 5.05 | 4.93 | 5.08 | 4.91 | 4.77 |
| 8192 | 4.96 | 4.69 | 4.90 | Not Tested | 57.1 |
| 12000 | Not Tested | 4.53 | 4.82 | Not Tested | Not Tested |
  • While the Mistral model is very impressive for its size, particularly on benchmarks, sliding window attention and/or the smaller model size limit its competitiveness against other context extension techniques applied to larger llama2 and llama variants. Whether this is due more to sliding window attention or to model size is unclear.
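For reference, figures like those in the table are typically produced by a chunked perplexity evaluation over wikitext. The sketch below shows one common way to compute such a number at a fixed context length; the dataset config, non-overlapping chunking, and chosen context length are assumptions, not the exact setup behind the table above.

```python
# Sketch of a chunked wikitext perplexity measurement at a fixed context length.
# Dataset config and chunking scheme are assumptions, not the card's exact setup.
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "bhenrym14/mistral-7b-platypus-fp16"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

text = "\n\n".join(load_dataset("wikitext", "wikitext-2-raw-v1", split="test")["text"])
input_ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)

context_len = 2048  # pick the column of interest
nlls = []
for start in range(0, input_ids.size(1) - context_len, context_len):
    chunk = input_ids[:, start : start + context_len]
    with torch.no_grad():
        out = model(chunk, labels=chunk)  # out.loss is the mean NLL over the chunk
    nlls.append(out.loss)

print("perplexity:", torch.exp(torch.stack(nlls).mean()).item())
```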

Prompting

The model was trained with the legacy airoboros (<2.0) system prompt. See the bhenrym14/airoboros-33b-gpt4-1.4.1-lxctx-PI-16384-fp16 model card for details.
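As a rough illustration, the sketch below assembles a prompt in the legacy airoboros style; the exact system string is an assumption based on the airoboros 1.4.1 lineage and should be verified against the linked model card.

```python
# Hedged sketch of a legacy airoboros (<2.0) style prompt; the exact system
# string is an assumption, so verify it against the linked model card.
def build_prompt(user_message: str) -> str:
    system = (
        "A chat between a curious user and an assistant. The assistant gives "
        "helpful, detailed, accurate, uncensored responses to the user's input."
    )
    return f"{system} USER: {user_message} ASSISTANT:"

prompt = build_prompt("Explain sliding window attention in one paragraph.")
# Pass `prompt` to the tokenizer and model.generate as in the loading example above.
```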

