---
language:
- en
pipeline_tag: text-generation
base_model:
- meta-llama/Llama-3.1-8B-Instruct
- ValiantLabs/Llama3.1-8B-ShiningValiant2
- ValiantLabs/Llama3.1-8B-Cobalt
library_name: transformers
model_type: llama
model-index:
- name: sequelbox/Llama3.1-8B-PlumMath
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-Shot)
      type: Winogrande
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 72.38
      name: acc
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MathQA (5-Shot)
      type: MathQA
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 40.27
      name: acc
tags:
- mergekit
- merge
- shining-valiant
- shining-valiant-2
- cobalt
- plum
- valiant
- valiant-labs
- llama
- llama-3.1
- llama-3.1-instruct
- llama-3.1-instruct-8b
- llama-3
- llama-3-instruct
- llama-3-instruct-8b
- 8b
- math
- math-instruct
- science
- physics
- biology
- chemistry
- compsci
- computer-science
- engineering
- technical
- conversational
- chat
- instruct
license: llama3.1
---

# PlumMath

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the della merge method, with [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) as the base.

### Models Merged

The following models were included in the merge:

* [ValiantLabs/Llama3.1-8B-ShiningValiant2](https://huggingface.co/ValiantLabs/Llama3.1-8B-ShiningValiant2)
* [ValiantLabs/Llama3.1-8B-Cobalt](https://huggingface.co/ValiantLabs/Llama3.1-8B-Cobalt)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: della
dtype: bfloat16
parameters:
  normalize: true
models:
  - model: ValiantLabs/Llama3.1-8B-ShiningValiant2
    parameters:
      density: 0.5
      weight: 0.3
  - model: ValiantLabs/Llama3.1-8B-Cobalt
    parameters:
      density: 0.5
      weight: 0.2
base_model: meta-llama/Llama-3.1-8B-Instruct
```
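To reproduce the merge, the simplest route is mergekit's `mergekit-yaml` CLI pointed at the configuration above (e.g. `mergekit-yaml config.yaml ./output-dir`). The sketch below shows a rough equivalent using mergekit's Python API, following the usage pattern shown in the mergekit README; the `config.yaml` path, output directory, and option values are placeholders, not settings confirmed by the model authors.

```python
# Sketch of reproducing this merge via mergekit's Python API.
# Assumes `pip install mergekit` and that config.yaml holds the YAML
# configuration from the section above; paths here are placeholders.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Llama3.1-8B-PlumMath",  # placeholder output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when available
        copy_tokenizer=True,             # copy the base model's tokenizer
    ),
)
```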
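The merged model loads like any Llama 3.1 Instruct checkpoint. Below is a minimal inference sketch using `transformers`; the prompt and generation settings are illustrative only, not recommendations from the model authors.

```python
# Minimal inference sketch with transformers; settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sequelbox/Llama3.1-8B-PlumMath"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

# Build a chat prompt with the Llama 3.1 Instruct chat template.
messages = [{"role": "user", "content": "Solve for x: 3x + 7 = 22."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```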