Quantization made by Richard Erkhov.
Prima-LelantaclesV5-7b - GGUF
- Model creator: https://huggingface.co/ChaoticNeutrals/
- Original model: https://huggingface.co/ChaoticNeutrals/Prima-LelantaclesV5-7b/
Original model description:
```yaml
license: other
library_name: transformers
tags:
  - mergekit
  - merge
base_model:
  - Test157t/Pasta-Lake-7b
  - Test157t/Prima-LelantaclesV4-7b-16k
model-index:
  - name: Prima-LelantaclesV5-7b
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
          - type: acc_norm
            value: 70.65
            name: normalized accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV5-7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
          - type: acc_norm
            value: 87.87
            name: normalized accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV5-7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 64.52
            name: accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV5-7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
          - type: mc2
            value: 68.26
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV5-7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 82.4
            name: accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV5-7b
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
          - type: acc
            value: 64.82
            name: accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ChaoticNeutrals/Prima-LelantaclesV5-7b
          name: Open LLM Leaderboard
```
Update: Getting surprisingly good results at 16384 context, which is unexpected given that comparable Mistral-based models typically only work well up to around 8192.
Thanks to @Lewdiculus for the Quants: https://huggingface.co/Lewdiculous/Prima-LelantaclesV5-7b-GGUF
This model was merged using the DARE TIES merge method.
The following models were included in the merge:
- Test157t/Pasta-Lake-7b
- Test157t/Prima-LelantaclesV4-7b-16k
Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
base_model: Test157t/Prima-LelantaclesV4-7b-16k
parameters:
  normalize: true
models:
  - model: Test157t/Pasta-Lake-7b
    parameters:
      weight: 1
  - model: Test157t/Prima-LelantaclesV4-7b-16k
    parameters:
      weight: 1
dtype: float16
```
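For intuition, DARE TIES builds "task vectors" (each fine-tuned model's deltas from the base), randomly drops most delta entries and rescales the survivors (DARE), then elects a majority sign per parameter and keeps only agreeing contributions before summing (TIES). The following is a minimal toy sketch of that procedure on small numpy arrays; the drop probability, random vectors, and function names are illustrative assumptions, not the actual values or internals mergekit used for this merge:

```python
import numpy as np

rng = np.random.default_rng(0)

def dare(delta, drop_p=0.9):
    # DARE: randomly drop a fraction drop_p of delta entries and
    # rescale the survivors by 1/(1 - drop_p) to preserve expectation.
    mask = rng.random(delta.shape) >= drop_p
    return delta * mask / (1.0 - drop_p)

def dare_ties_merge(base, finetuned, weights, drop_p=0.9):
    # Task vectors: each model's deltas relative to the shared base.
    deltas = [dare(ft - base, drop_p) for ft in finetuned]
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()           # mirrors `normalize: true`
    stacked = np.stack([w * d for w, d in zip(weights, deltas)])
    # TIES sign election: keep only contributions whose sign agrees
    # with the dominant sign of the summed deltas, then sum those.
    elected = np.sign(stacked.sum(axis=0))
    agree = np.where(np.sign(stacked) == elected, stacked, 0.0)
    return base + agree.sum(axis=0)

# Toy stand-ins for the real 7B parameter tensors (hypothetical values).
base = rng.normal(size=8)
ft_a = base + rng.normal(scale=0.1, size=8)     # stands in for Pasta-Lake-7b
ft_b = base + rng.normal(scale=0.1, size=8)     # stands in for PrimaV4-16k
merged = dare_ties_merge(base, [ft_a, ft_b], weights=[1, 1])
print(merged.shape)                              # same shape as the base
```

In the real merge each parameter tensor of the two models is treated this way, with `weight: 1` for both and weights normalized as configured above.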
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 73.09 |
| AI2 Reasoning Challenge (25-Shot) | 70.65 |
| HellaSwag (10-Shot) | 87.87 |
| MMLU (5-Shot) | 64.52 |
| TruthfulQA (0-shot) | 68.26 |
| Winogrande (5-shot) | 82.40 |
| GSM8k (5-shot) | 64.82 |
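As a quick sanity check on the table above, the reported leaderboard average is simply the unweighted mean of the six task scores:

```python
# Reproduce the reported Avg. from the six per-task scores above.
scores = {
    "ARC (25-shot)": 70.65,
    "HellaSwag (10-shot)": 87.87,
    "MMLU (5-shot)": 64.52,
    "TruthfulQA (0-shot)": 68.26,
    "Winogrande (5-shot)": 82.40,
    "GSM8k (5-shot)": 64.82,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 73.09
```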