---
base_model:
- 152334H/miqu-1-70b-sf
- lizpreciatior/lzlv_70b_fp16_hf
language:
- en
- de
- fr
- es
- it
library_name: transformers
tags:
- mergekit
- merge
---
# miquliz-120b
- EXL2: 2.4bpw | 2.65bpw | 2.9bpw | 4.0bpw
- GGUF: IQ3_XXS | Q4_K_S+Q4_K_M
- HF: wolfram/miquliz-120b
This is a 120b frankenmerge created by interleaving layers of miqu-1-70b-sf with lzlv_70b_fp16_hf using mergekit.
Inspired by goliath-120b.
Thanks for the support, CopilotKit - the open-source platform for building in-app AI Copilots into any product, with any LLM model. Check out their GitHub.
Thanks for the EXL2 and GGUF quants, Lone Striker and NanoByte!
## Prompt Template

Mistral:

```
<s>[INST] {prompt} [/INST]
```
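A minimal sketch of filling the Mistral template above in plain Python (the helper name is an assumption; most backends apply this formatting for you):

```python
def format_prompt(prompt: str) -> str:
    # Wrap the user message in Mistral's instruction tags,
    # including the BOS token as shown in the template.
    return f"<s>[INST] {prompt} [/INST]"

print(format_prompt("Hello, who are you?"))
# <s>[INST] Hello, who are you? [/INST]
```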
## Model Details
- Max Context: 32768 tokens
- Layers: 137
## Merge Details

### Merge Method
This model was merged using the passthrough merge method.
### Models Merged

The following models were included in the merge:

- 152334H/miqu-1-70b-sf
- lizpreciatior/lzlv_70b_fp16_hf
### Configuration

The following YAML configuration was used to produce this model:

```yaml
dtype: float16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 16]
    model: 152334H/miqu-1-70b-sf
- sources:
  - layer_range: [8, 24]
    model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
  - layer_range: [17, 32]
    model: 152334H/miqu-1-70b-sf
- sources:
  - layer_range: [25, 40]
    model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
  - layer_range: [33, 48]
    model: 152334H/miqu-1-70b-sf
- sources:
  - layer_range: [41, 56]
    model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
  - layer_range: [49, 64]
    model: 152334H/miqu-1-70b-sf
- sources:
  - layer_range: [57, 72]
    model: lizpreciatior/lzlv_70b_fp16_hf
- sources:
  - layer_range: [65, 80]
    model: 152334H/miqu-1-70b-sf
```
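The passthrough method simply stacks the listed slices, so overlapping layer ranges are counted twice rather than deduplicated; that is how two 80-layer models yield a 137-layer merge. A small sketch tallying the slice ranges above to confirm the layer count listed under Model Details:

```python
# Each (start, end) pair mirrors a layer_range entry from the YAML config.
slices = [
    (0, 16),   # 152334H/miqu-1-70b-sf
    (8, 24),   # lizpreciatior/lzlv_70b_fp16_hf
    (17, 32),  # 152334H/miqu-1-70b-sf
    (25, 40),  # lizpreciatior/lzlv_70b_fp16_hf
    (33, 48),  # 152334H/miqu-1-70b-sf
    (41, 56),  # lizpreciatior/lzlv_70b_fp16_hf
    (49, 64),  # 152334H/miqu-1-70b-sf
    (57, 72),  # lizpreciatior/lzlv_70b_fp16_hf
    (65, 80),  # 152334H/miqu-1-70b-sf
]

# Passthrough stacks every slice, so the total is the sum of slice widths.
total_layers = sum(end - start for start, end in slices)
print(total_layers)  # 137
```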
## Credits & Special Thanks
- 1st model:
- original (unreleased) model: mistralai (Mistral AI_)
- leaked model: miqudev/miqu-1-70b
- f16 model: 152334H/miqu-1-70b-sf
- 2nd model: lizpreciatior/lzlv_70b_fp16_hf
- mergekit: arcee-ai/mergekit: Tools for merging pretrained large language models.
- mergekit_config.yml: alpindale/goliath-120b
## Support
- My Ko-fi page, if you'd like to tip me to say thanks or to request specific models to be tested or merged with priority. Also consider supporting your favorite model creators, quantizers, or frontend/backend devs if you can afford to do so. They deserve it!