
My first merge of 7B RP models using mergekit. They are just trending RP picks from Reddit, and half of the final blend is BuRP_7B; I haven't used any of them individually. A dumb merge, but hopefully a lucky one! ^^'

Update 03/2024 (Nekochu):

The name symbolizes a Confluence of many unique RP models, while Renegade reflects that most of them come from no-guardrail models.

Download branch instructions

```shell
git clone --single-branch --branch Confluence-Shortcake-20B-2.4bpw-h6-exl2 https://huggingface.co/Nekochu/Confluence-Renegade-7B
```

Configuration Confluence-Renegade-7B

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: ./modela/Erosumika-7B
    parameters:
      density: [1, 0.8, 0.6]
      weight: 0.2
  - model: ./modela/Infinitely-Laydiculous-7B
    parameters:
      density: [0.9, 0.7, 0.5]
      weight: 0.2
  - model: ./modela/Kunocchini-7b-128k-test
    parameters:
      density: [0.8, 0.6, 0.4]
      weight: 0.2
  - model: ./modela/EndlessRP-v3-7B
    parameters:
      density: [0.7, 0.5, 0.3]
      weight: 0.2
  - model: ./modela/daybreak-kunoichi-2dpo-7b
    parameters:
      density: [0.5, 0.3, 0.1]
      weight: 0.2
merge_method: dare_linear
base_model: ./modela/Mistral-7B-v0.1
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
name: intermediate-model
---
slices:
  - sources:
      - model: intermediate-model
        layer_range: [0, 32]
      - model: ./modela/BuRP_7B
        layer_range: [0, 32]
merge_method: slerp
base_model: intermediate-model
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: bfloat16
name: gradient-slerp
```
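In the first stage, `dare_linear` takes each model's delta from the base, randomly drops entries down to the given `density` (rescaling survivors by `1/density`), and linearly combines the deltas by `weight`, normalized by the weight sum. A minimal NumPy sketch of that idea (the function name and shapes are illustrative, not mergekit internals; mergekit also varies density across layer groups, which is elided here):

```python
import numpy as np

rng = np.random.default_rng(0)

def dare_linear(base, tuned_models, densities, weights):
    """Toy DARE + linear merge for a single flat tensor (illustrative only)."""
    total = np.zeros_like(base)
    weight_sum = sum(weights)  # normalize: true -> divide by sum of weights
    for tuned, density, weight in zip(tuned_models, densities, weights):
        delta = tuned - base                          # task vector vs. base
        mask = rng.random(delta.shape) < density      # keep ~density fraction
        delta = np.where(mask, delta / density, 0.0)  # rescale survivors
        total += weight * delta
    return base + total / weight_sum

# With density 1.0 nothing is dropped, so this reduces to a plain
# normalized weighted average of the deltas:
base = np.zeros(4)
merged = dare_linear(base, [np.ones(4), np.full(4, 3.0)],
                     densities=[1.0, 1.0], weights=[0.2, 0.2])
# merged → array of 2.0 (= (0.2*1 + 0.2*3) / 0.4)
```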

```shell
mergekit-mega config.yml ./output-model-directory --cuda --allow-crimes --lazy-unpickle
```
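The second stage slerps the intermediate model with BuRP_7B. The `t` lists such as `[0, 0.5, 0.3, 0.7, 1]` form a gradient across the layer stack (interpolated per layer), with separate curves for `self_attn` and `mlp` tensors and a fallback of 0.5 elsewhere. A minimal sketch of spherical linear interpolation for one flattened tensor pair (illustrative, not mergekit's implementation):

```python
import numpy as np

def slerp(a, b, t, eps=1e-8):
    """Spherical linear interpolation between two flattened tensors."""
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)        # angle between the two tensors
    if theta < eps:               # nearly parallel: fall back to lerp
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
slerp(a, b, 0.5)  # ≈ [0.7071, 0.7071] — midpoint on the unit arc
```

Unlike a linear average, slerp follows the arc between the two weight vectors, preserving their magnitude when they are the same length.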

Models Merged Confluence-Renegade-7B

The following models were included in the merge:

- ./modela/Erosumika-7B
- ./modela/Infinitely-Laydiculous-7B
- ./modela/Kunocchini-7b-128k-test
- ./modela/EndlessRP-v3-7B
- ./modela/daybreak-kunoichi-2dpo-7b
- ./modela/BuRP_7B

with ./modela/Mistral-7B-v0.1 as the base model.

Downloads last month: 260
Safetensors · Model size: 7.24B params · Tensor type: BF16
