
MixTAO-7Bx2-MoE

MixTAO-7Bx2-MoE is a Mixture of Experts (MoE) model. It is mainly used for experiments in large language model technology, with successive iterations intended to improve it toward a high-quality large language model.

Prompt Template (Alpaca)

```
### Instruction:
<prompt> (without the <>)
### Response:
```
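
A usage sketch may help here. The following is a minimal example, assuming the `mixtao/MixTAO-7Bx2-MoE-v8.1` checkpoint named later in this card and standard Hugging Face Transformers APIs; the instruction text and sampling settings are illustrative, not an official recipe.

```python
# Minimal sketch: apply the Alpaca-style template above and generate a response.
# Repo id, instruction text, and sampling settings are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mixtao/MixTAO-7Bx2-MoE-v8.1"  # assumed from this card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "### Instruction:\nExplain what a Mixture of Experts (MoE) model is.\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)

# Print only the newly generated tokens after the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```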

🦒 Colab

| Link | Info - Model Name |
|------|-------------------|
| Open In Colab | MixTAO-7Bx2-MoE-v8.1 |
| mixtao-7bx2-moe-v8.1.Q4_K_M.gguf | GGUF of MixTAO-7Bx2-MoE-v8.1 (only Q4_K_M, in https://huggingface.co/zhengr/MixTAO-7Bx2-MoE-v8.1-GGUF) |
| Demo Space | https://huggingface.co/spaces/zhengr/MixTAO-7Bx2-MoE-v8.1/ |
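
For the GGUF build, here is a sketch of downloading the Q4_K_M file and running it locally. The choice of llama-cpp-python and the context/sampling settings are assumptions; only the repository id and filename come from the table above.

```python
# Sketch: fetch the Q4_K_M GGUF listed above and run it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the quantized weights from the GGUF repository.
gguf_path = hf_hub_download(
    repo_id="zhengr/MixTAO-7Bx2-MoE-v8.1-GGUF",
    filename="mixtao-7bx2-moe-v8.1.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=2048)  # context size is an illustrative choice
out = llm(
    "### Instruction:\nSummarize what MixTAO-7Bx2-MoE is.\n### Response:\n",
    max_tokens=200,
    stop=["### Instruction:"],
)
print(out["choices"][0]["text"])
```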

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|--------|-------|
| Avg. | 77.50 |
| AI2 Reasoning Challenge (25-Shot) | 73.81 |
| HellaSwag (10-Shot) | 89.22 |
| MMLU (5-Shot) | 64.92 |
| TruthfulQA (0-shot) | 78.57 |
| Winogrande (5-shot) | 87.37 |
| GSM8k (5-shot) | 71.11 |
Model size: 12.9B parameters (Safetensors, BF16)
