---
inference: false
language: en
license: llama2
model_type: llama
datasets:
- mlabonne/CodeLlama-2-20k
pipeline_tag: text-generation
base_model:
- davzoku/cria-llama2-7b-v1.3
library_name: transformers
tags:
- mergekit
- merge
- llama-2
---
# FrankenCRIA v1.3-m.1
## What is FrankenCRIA?
This is a frankenmerge of [davzoku/cria-llama2-7b-v1.3](https://huggingface.co/davzoku/cria-llama2-7b-v1.3), a depth-wise self-merge of the base model.
The configuration matches [Undi95/Mistral-11B-v0.1](https://huggingface.co/Undi95/Mistral-11B-v0.1) and [mlabonne/FrankenBeagle14-11B](https://huggingface.co/mlabonne/FrankenBeagle14-11B), and follows the depth up-scaling (DUS) technique used in [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0).
Please be aware that this model is highly experimental, and no further training has been conducted following the merge.
As a result, model performance may fall short of expectations, as discussed in the [SOLAR paper](https://arxiv.org/abs/2312.15166).
## 📦 FrankenCRIA Model Release
FrankenCRIA v1.3 comes in two variants.
- [davzoku/frankencria-llama2-11b-v1.3-m.1](https://huggingface.co/davzoku/frankencria-llama2-11b-v1.3-m.1): 11B FrankenMerge inspired by [Undi95/Mistral-11B-v0.1](https://huggingface.co/Undi95/Mistral-11B-v0.1)
- [davzoku/frankencria-llama2-12.5b-v1.3-m.2](https://huggingface.co/davzoku/frankencria-llama2-12.5b-v1.3-m.2): 12.5B interleaved FrankenMerge inspired by [vilm/vinallama-12.5b-chat-DUS](https://huggingface.co/vilm/vinallama-12.5b-chat-DUS)
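
Both variants load like any standard Llama-2 checkpoint in 🤗 Transformers. Below is a minimal, illustrative loading sketch (the model ID comes from the list above; the prompt and generation settings are placeholders, and `device_map="auto"` assumes `accelerate` is installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davzoku/frankencria-llama2-11b-v1.3-m.1"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",  # load in the checkpoint's native dtype (bfloat16)
    device_map="auto",   # requires the `accelerate` package
)

prompt = "What is a llama?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```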
## 🧩 Merge Details
### Merge Method
This model was merged using the passthrough merge method, which stacks the specified layer slices on top of each other without interpolating any weights.
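
As a conceptual sketch of what this means for the configuration below (pure layer bookkeeping, not actual merge code), the two slices stack as follows:

```python
# Illustrative only: how the two layer slices stack into a deeper model.
base_layers = list(range(32))  # cria-llama2-7b-v1.3 is a 32-layer Llama-2 7B
slice_1 = base_layers[0:24]    # layers 0-23
slice_2 = base_layers[8:32]    # layers 8-31; layers 8-23 are duplicated
merged = slice_1 + slice_2     # 48 layers in total -> the ~11B model
assert len(merged) == 48
```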
### Models Merged
The following models were included in the merge:
* [davzoku/cria-llama2-7b-v1.3](https://huggingface.co/davzoku/cria-llama2-7b-v1.3)
### Configuration
The following YAML configuration was used to produce this model.
```yaml
# https://huggingface.co/Undi95/Mistral-11B-v0.1
slices:
  - sources:
      - model: davzoku/cria-llama2-7b-v1.3
        layer_range: [0, 24]
  - sources:
      - model: davzoku/cria-llama2-7b-v1.3
        layer_range: [8, 32]
merge_method: passthrough
dtype: bfloat16
```
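
To reproduce the merge, save the configuration above as e.g. `config.yml` and run it through [mergekit](https://github.com/arcee-ai/mergekit), for example with `mergekit-yaml config.yml ./output-model-directory --copy-tokenizer` (exact flag availability may vary across mergekit versions).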