
Finch

Finch 7B Merge

A SLERP merge of my two current favorite 7B models:

macadeliccc/WestLake-7B-v2-laser-truthy-dpo & SanjiWatsuki/Kunoichi-DPO-v2-7B

A set of GGUF quants of Finch
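
If you want to pull a quant programmatically rather than through the website, something like the sketch below should work with huggingface_hub. The filename is purely illustrative; check this repo's file list for the actual GGUF filenames.

from huggingface_hub import hf_hub_download

# Download one quant from this repo.
# NOTE: the filename below is hypothetical -- substitute a real file from the repo.
model_path = hf_hub_download(
    repo_id="antiven0m/finch-gguf",
    filename="finch.Q4_K_M.gguf",
)
print(model_path)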

Settings

I recommend using the ChatML prompt format. As for samplers, I recommend the following settings (a quick usage example follows the list):

Temperature: 1.2
Min P: 0.2
Smoothing Factor: 0.2
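
For example, here is a minimal sketch with llama-cpp-python (my choice of backend here is an assumption; any GGUF-capable runtime will do) showing the ChatML format together with the temperature and Min P values above. Smoothing Factor is a frontend-level sampler (e.g. SillyTavern or text-generation-webui) and is not passed in this sketch; the model path is illustrative.

from llama_cpp import Llama

# Load whichever Finch GGUF quant you downloaded (path is illustrative).
llm = Llama(model_path="finch.Q4_K_M.gguf", n_ctx=4096)

# ChatML prompt format, as recommended above.
prompt = (
    "<|im_start|>system\n"
    "You are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\n"
    "Write a short poem about a finch.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

out = llm(
    prompt,
    max_tokens=256,
    temperature=1.2,   # recommended Temperature
    min_p=0.2,         # recommended Min P
    stop=["<|im_end|>"],
)
print(out["choices"][0]["text"])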

Mergekit Config

base_model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
dtype: float16
merge_method: slerp
parameters:
  t:
  - filter: self_attn
    value: [0.0, 0.5, 0.3, 0.7, 1.0]
  - filter: mlp
    value: [1.0, 0.5, 0.7, 0.3, 0.0]
  - value: 0.5
slices:
- sources:
  - layer_range: [0, 32]
    model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
  - layer_range: [0, 32]
    model: SanjiWatsuki/Kunoichi-DPO-v2-7B
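
To reproduce the merge, the config above can be fed to mergekit. Below is a minimal sketch using mergekit's Python entry point; it assumes the config is saved as finch.yml, and the option names follow mergekit's documented example, so they may differ between versions.

import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the SLERP config shown above (assumed saved as finch.yml).
with open("finch.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge and write the merged model to ./finch.
run_merge(
    merge_config,
    out_path="./finch",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is available
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)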

GGUF Quants

Model size: 7.24B params
Architecture: llama
Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit