HauntingEcho-v0.1-rc2-18B

Model Details

This is a release candidate, which means it has not been tested as extensively as my full releases, so you should expect various issues. The primary purpose of uploading this model is to gather feedback.

HauntingEcho is a passthrough merge which attempts to merge together RP-capable Llama 3 8B models with the smartest Llama 3.1 8B models.

The original aim of this merge was to use Llama 3.1 8B as a base to create a 32k context RP model with the popular feel of L3 8B RP models like EtherealRainbow. It seems like 32k context is not achievable, but 14-16k is, and that's a significant improvement over L3 8B.

The new aim of this merge strategy is to hit 16k context and give people a slightly smarter alternative to Nemo with a Llama 3 feel, plus the freedom to play with temperature that Llama 3 offers. 18B was chosen as a very deliberate size, to maximize parameter count and quantization quality for popular GPU sizes: 24GB GPUs can run the 8bpw model, and 16GB GPUs can run the 5bpw model easily.
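The GPU-fit claims above follow from simple arithmetic on weight size. A rough sketch (ignoring KV cache and activation overhead, which add a few more GB on top):

```python
# Back-of-envelope weight sizes for an ~18.5B-parameter model at
# common bits-per-weight (bpw) quantization levels. Real VRAM use
# will be somewhat higher once KV cache and activations are loaded.
PARAMS = 18.5e9

def weight_gb(bpw: float) -> float:
    """Approximate size of the quantized weights in GiB."""
    return PARAMS * bpw / 8 / 1024**3

for bpw in (5.0, 8.0):
    print(f"{bpw}bpw ~ {weight_gb(bpw):.1f} GiB")
```

At 8bpw the weights come to roughly 17 GiB (comfortable on a 24GB card), and at 5bpw roughly 11 GiB (comfortable on a 16GB card), matching the sizes quoted above.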

Feedback

Please, please, please give feedback on this merge. It is the result of a theory and has been tested only minimally, to ensure it doesn't completely collapse. I welcome feedback in the community section here or on Discord (my username is the same as here). I can also frequently be found in the local LLM channels of the Chub & SillyTavern Discord servers.

Quantization Formats

As this is a release candidate, I have only provided GGUFs.

Disclaimer

This model is built on an abliterated base and as such is largely uncensored. It can generate explicit, disturbing, or offensive responses. Use responsibly. I am not responsible for your use of this model.

Settings

Samplers

I'm using these sampler settings:

  • Context: 14336
  • Temperature: 0.8
  • Top P: 0.9
  • Min P: 0.08
  • Repetition Penalty: 1.11 (or DRY)
  • Rep Pen Range: 1536

These are by no means the perfect settings; feel free to experiment.
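If you drive the model programmatically rather than through a front-end, the settings above map roughly onto llama.cpp-style sampler parameters as below. Key names vary between backends and front-ends, so treat this dict as a sketch and check your backend's documentation before copying it verbatim:

```python
# The recommended sampler settings expressed as llama.cpp-style keys.
# Names are assumptions about your backend's API, not a fixed standard.
SAMPLERS = {
    "temperature": 0.8,
    "top_p": 0.9,
    "min_p": 0.08,
    "repeat_penalty": 1.11,  # or disable this and use DRY instead
    "repeat_last_n": 1536,   # "Rep Pen Range" in SillyTavern terms
    "n_ctx": 14336,          # context length (a loader option, not a sampler)
}
```

Most Python bindings accept these as keyword arguments at generation time, e.g. `llm.create_completion(prompt, **sampler_kwargs)`.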

Prompting Format

I'd recommend Llama-3 Instruct prompting format:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```

Some of the models included in the merge were trained on ChatML & Alpaca, so you can try those formats too, but I have not tested them.
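For reference, the template above can be assembled programmatically. This is a minimal single-turn sketch; the helper name is my own, and most backends will apply this template for you via their chat APIs:

```python
def build_llama3_prompt(system_prompt: str, user_msg: str) -> str:
    """Assemble a single-turn Llama-3 Instruct prompt, leaving the
    assistant header open so the model generates the reply."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_msg}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )
```

For multi-turn chat, append each completed assistant reply followed by `<|eot_id|>` and repeat the user/assistant header pattern.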

Example Storywriting

N/A.

Merge Strategy

Models Used

The following models were used to create HauntingEcho-v0.1-rc2-18B:

  • Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
  • invisietch/EtherealRainbow-v0.3-8B

Mergekit Configs

L3.1-Uncensored-extended

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 24]
    model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
- sources:
  - layer_range: [8, 24]
    model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [8, 24]
    model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [8, 24]
    model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [24, 32]
    model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
```
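To see how this passthrough config reaches roughly 18.5B parameters: it keeps layers 0-23 once, repeats layers 8-23 three times (with `o_proj` and `down_proj` zero-scaled on the repeats), then keeps layers 24-31. A quick sanity check of the resulting depth:

```python
# Layer ranges from the passthrough slices above (half-open intervals).
slices = [(0, 24), (8, 24), (8, 24), (8, 24), (24, 32)]
total_layers = sum(end - start for start, end in slices)
print(total_layers)  # 24 + 16*3 + 8 = 80 layers, up from the original 32
```

Since most of an 8B Llama's parameters live in its 32 transformer layers, a 80-layer stack lands in the ~18.5B range (the exact count also depends on the embedding and output layers, which are not duplicated).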

ER03-extended

```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 24]
    model: invisietch/EtherealRainbow-v0.3-8B
- sources:
  - layer_range: [8, 24]
    model: invisietch/EtherealRainbow-v0.3-8B
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [8, 24]
    model: invisietch/EtherealRainbow-v0.3-8B
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [8, 24]
    model: invisietch/EtherealRainbow-v0.3-8B
    parameters:
      scale:
      - filter: o_proj
        value: 0.0
      - filter: down_proj
        value: 0.0
      - value: 1.0
- sources:
  - layer_range: [24, 32]
    model: invisietch/EtherealRainbow-v0.3-8B
```

Final Merge

```yaml
dtype: bfloat16
models:
  - model: /mnt/models/L3.1-Uncensored-extended
  - model: /mnt/models/ER03-extended
merge_method: slerp
base_model: /mnt/models/L3.1-Uncensored-extended
parameters:
  t:
    - value: [0, 0, 0.25, 0.35, 0.5, 0.75, 0.5, 0.35, 0.25, 0, 0]
  embed_slerp: true
```
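For reference, spherical linear interpolation blends each pair of tensors along the arc between them rather than along a straight line; the `t` list in the config sets the blend weight per layer group, with 0 keeping the base model and 1 taking the other model. Mergekit's actual implementation handles more edge cases and tensor shapes; this is a minimal sketch of the idea on plain vectors:

```python
import math

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two vectors (as lists).
    t=0 returns a, t=1 returns b, values between follow the arc."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    omega = math.acos(max(-1.0, min(1.0, dot / (na * nb + eps))))
    if omega < eps:  # nearly parallel vectors: fall back to plain lerp
        return [(1 - t) * x + t * y for x, y in zip(a, b)]
    so = math.sin(omega)
    return [
        math.sin((1 - t) * omega) / so * x + math.sin(t * omega) / so * y
        for x, y in zip(a, b)
    ]
```

With the `t` curve above, the early and late layer groups stay pure `L3.1-Uncensored-extended` (t=0), while the middle of the stack blends up to a 0.75 weight toward `ER03-extended`.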