---
base_model:
- ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
- rAIfle/Acolyte-22B
- byroneverson/Mistral-Small-Instruct-2409-abliterated
- DazzlingXeno/Cydonian-Gutenberg
library_name: transformers
tags:
- mergekit
- merge
---
## Karasik 0.1

### Overview

A somewhat experimental merge of several Mistral Small 22B models.

### Quants

[Static](https://huggingface.co/mradermacher/Karasik-22B-v0.1-GGUF)

[Imatrix](https://huggingface.co/mradermacher/Karasik-22B-v0.1-i1-GGUF)

## Merge Details

### Merge Method

This model was merged using the della_linear merge method, with [ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1) as the base model.

### Models Merged

The following models were included in the merge:

* [rAIfle/Acolyte-22B](https://huggingface.co/rAIfle/Acolyte-22B)
* [byroneverson/Mistral-Small-Instruct-2409-abliterated](https://huggingface.co/byroneverson/Mistral-Small-Instruct-2409-abliterated)
* [DazzlingXeno/Cydonian-Gutenberg](https://huggingface.co/DazzlingXeno/Cydonian-Gutenberg)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
parameters:
  epsilon: 0.04
  lambda: 1.05
  int8_mask: true
  rescale: true
  normalize: false
dtype: bfloat16
tokenizer_source: base
merge_method: della_linear
models:
  - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
    parameters:
      weight: [0.2, 0.3, 0.2, 0.3, 0.2]
      density: [0.45, 0.55, 0.45, 0.55, 0.45]
  - model: rAIfle/Acolyte-22B
    parameters:
      weight: [0.01768, -0.01675, 0.01285, -0.01696, 0.01421]
      density: [0.6, 0.4, 0.5, 0.4, 0.6]
  - model: byroneverson/Mistral-Small-Instruct-2409-abliterated
    parameters:
      weight: [0.208, 0.139, 0.139, 0.139, 0.208]
      density: [0.7]
  - model: DazzlingXeno/Cydonian-Gutenberg
    parameters:
      weight: [0.33]
      density: [0.45, 0.55, 0.45, 0.55, 0.45]
```
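To reproduce the merge, mergekit can be driven from the CLI (`mergekit-yaml config.yaml ./output-dir`) or from Python. Below is a minimal sketch of the Python route, assuming the YAML above is saved as `config.yaml` and that the installed mergekit version exposes the `run_merge`/`MergeOptions` entry points from its documented API; the output path is an arbitrary choice, not an official repo name.

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the merge recipe shown above into a mergekit configuration object.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the della_linear merge and write the merged weights to disk.
run_merge(
    merge_config,
    out_path="./Karasik-22B-v0.1",  # assumed output directory name
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU for the merge if one is present
        copy_tokenizer=True,             # copy the base tokenizer (tokenizer_source: base)
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```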
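Since the card declares `library_name: transformers`, the merged checkpoint loads like any other Mistral Small model. A minimal usage sketch, assuming the local output directory from the step above (substitute a Hub repo id if the merged weights are published):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./Karasik-22B-v0.1"  # assumed local path; not a published repo id

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the merge dtype in the config above
    device_map="auto",
)

# The constituent models are instruct-tuned, so prompt via the chat template.
messages = [{"role": "user", "content": "Write a two-sentence scene description."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```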