
Model Card for una-cybertron-7b-v1 (UNA: Uniform Neural Alignment)

We strike back, introducing Cybertron 7B v1, a 7B MistralAI-based model and the best in its series. It was trained with SFT, DPO, and UNA (Uniform Neural Alignment) on multiple datasets, and it scores 64.60+ on the Hugging Face Open LLM Leaderboard (without DROP for now).

Ranked #1 as of 2 December 2023:

|Model                        |Average|ARC (25-s)|HellaSwag (10-s)|MMLU (5-s)|TruthfulQA (MC) (0-s)|Winogrande (5-s)|GSM8K (5-s)|
|-----------------------------|------:|---------:|---------------:|---------:|--------------------:|---------------:|----------:|
|mistralai/Mistral-7B-v0.1    |  60.97|     59.98|           83.31|     64.16|                42.15|           78.37|      37.83|
|perlthoughts/Chupacabra-7B-v2|  63.54|     66.47|           85.17|     64.49|                57.60|           79.16|      28.35|
|fblgit/una-cybertron-7b-v1   |  64.60|     68.17|           85.14|     62.07|                63.98|           80.90|      27.34|
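
For reference, the reported 64.60 average is simply the unweighted mean of the six benchmark scores:

(68.17 + 85.14 + 62.07 + 63.98 + 80.90 + 27.34) / 6 = 387.60 / 6 = 64.60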

The model excels at mathematics, logic, and reasoning.

Model Details

Trained with the UNA (Uniform Neural Alignment) technique (paper forthcoming).
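
The UNA paper has not been published yet, so there is no reference implementation to show. As context for the SFT + DPO + UNA pipeline mentioned above, here is a minimal sketch of the standard DPO objective (Rafailov et al., 2023) that such alignment stages build on; it is a generic illustration, not the author's UNA code, and all names are placeholders.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Standard DPO objective (Rafailov et al., 2023); a generic sketch,
    # NOT the (unpublished) UNA technique. Inputs are per-sequence summed
    # log-probs of the chosen/rejected completions under the policy being
    # trained and under a frozen reference model.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # Push the policy to prefer chosen over rejected completions.
    return -F.logsigmoid(beta * (chosen_logratio - rejected_logratio)).mean()
```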

Model Description

  • Developed by: juanako.ai
  • Author: Xavier M.
  • Model type: MistralAI 7B
  • Funded by: Cybertron's H100s

Prompt

The model works well with almost any prompt, but the ChatML format and the Alpaca system prompt give the best results. Several supported formats are shown below, followed by a minimal usage sketch.

ChatML:

```
<|im_start|>system
- You are a helpful assistant chatbot trained by MosaicML.
- You answer questions.
- You are excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- You are more than just an information source, you are also able to write poetry, short stories, and make jokes.<|im_end|>
<|im_start|>user
Explain QKV<|im_end|>
<|im_start|>assistant
```

Vicuna / StableVicuna style:

```
### Assistant: I am StableVicuna, a large language model created by CarperAI. I am here to chat!

### Human: Explain QKV
### Assistant:
```

ChatGLM-style rounds (问 = Question, 答 = Answer):

```
[Round <|round|>]
问:Explain QKV
答:
```

English round format:

```
[Round <|round|>]
Question:Explain QKV
Answer:
```

Plain question/answer:

```
Question:Explain QKV
Answer:
```
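
A minimal usage sketch with Hugging Face transformers follows. The model id is taken from the repository URL in the citation below; the call to apply_chat_template assumes the tokenizer ships a ChatML chat template (an assumption — if it does not, build the <|im_start|>...<|im_end|> prompt by hand as in the first example).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fblgit/una-cybertron-7b-v1"  # repository id from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "Explain QKV"},
]
# Assumes the tokenizer provides a ChatML chat template; otherwise format
# the <|im_start|>... prompt manually as shown in the examples above.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```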

Evaluation

|    Tasks     |Version|Shots | Metric |Value |   |Stderr|
|--------------|-------|------|--------|-----:|---|-----:|
|arc_challenge |       | 25   |acc_norm|0.6817|±  |0.0136|
|truthfulqa_mc2|       | 0    |acc     |0.6398|±  |0.0151|
|hellaswag     |       | 10   |acc_norm|0.8492|±  |0.0036|
|winogrande    |       | 0    |acc     |0.809 |±  |0.011 |
|gsm8k         |       | 5    |acc     |0.2733|±  |0.0137|
|mmlu          |       | 5    |acc     |0.6207|±  |0.1230|
|average       |       |      |acc     |0.6456|   |      |

|      Groups      |Version|Filter|n-shot|Metric|Value |   |Stderr|
|------------------|-------|------|-----:|------|-----:|---|-----:|
|mmlu              |N/A    |none  |     0|acc   |0.6207|±  |0.1230|
| - humanities     |N/A    |none  |     5|acc   |0.5675|±  |0.1125|
| - other          |N/A    |none  |     5|acc   |0.6933|±  |0.1108|
| - social_sciences|N/A    |none  |     5|acc   |0.7270|±  |0.0666|
| - stem           |N/A    |none  |     5|acc   |0.5249|±  |0.1311|
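
The tables above follow the output format of EleutherAI's lm-evaluation-harness. Below is a hedged sketch of re-running one task through its Python API, assuming lm-eval >= 0.4 (where simple_evaluate is exposed); exact numbers may differ slightly across harness versions and hardware.

```python
import lm_eval

# Reproduce the 25-shot ARC-Challenge score; assumes `pip install lm-eval`
# (EleutherAI lm-evaluation-harness >= 0.4). This is a sketch, not the
# exact command used to produce the tables above.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=fblgit/una-cybertron-7b-v1,dtype=float16",
    tasks=["arc_challenge"],
    num_fewshot=25,
)
print(results["results"]["arc_challenge"])  # expect acc_norm near 0.68
```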

Framework versions

  • Transformers 4.35.0-UNA
  • Pytorch 2.1.0
  • Datasets 2.14.6
  • Tokenizers 0.14.1

Citations

If you find Cybertron, Juanako, or any of our models useful, especially if you use them for your big brand, please cite:

@misc{unacybertron7a,
  title={Cybertron: Uniform Neural Alignment}, 
  author={Xavier Murias},
  year={2023},
  publisher = {HuggingFace},
  journal = {HuggingFace repository},
  howpublished = {\url{https://huggingface.co/fblgit/una-cybertron-7b-v1}},
}

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

|Metric                           |Value|
|---------------------------------|----:|
|Avg.                             |69.49|
|AI2 Reasoning Challenge (25-Shot)|68.43|
|HellaSwag (10-Shot)              |85.42|
|MMLU (5-Shot)                    |63.34|
|TruthfulQA (0-shot)              |63.28|
|Winogrande (5-shot)              |81.37|
|GSM8k (5-shot)                   |55.12|