
QuantFactory/L3-Deluxe-Scrambled-Eggs-On-Toast-8B-GGUF

This is a quantized version of Casual-Autopsy/L3-Deluxe-Scrambled-Eggs-On-Toast-8B, created using llama.cpp.

Original Model Card

L3-Deluxe-Scrambled-Eggs-On-Toast-8B

L3-Deluxe-Scrambled-Eggs-On-Toast-8B is a role-play model merge of 36 models, built in 23 merging steps.

The goal is to create a model that is both creative and smart by using weight gradients. Each model gets its own section of the gradient in which it carries a larger weight to promote intelligence, while the rest of the models in that section carry small weights to promote creativity.
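For intuition, here is a minimal sketch (not the author's code) of how such a gradient list can be read. Assuming mergekit spreads a weight list like `[0.33, 0.0825, 0.0825, 0.0825, 0.0825]` evenly across the layer stack and interpolates linearly between the anchors, the corresponding model dominates one stretch of layers and drops to a small background weight everywhere else:

```python
# Hedged sketch: how a mergekit-style weight gradient might expand per layer.
# The even anchor spacing and linear interpolation are assumptions about
# mergekit's behavior, not a quote of its implementation.
import numpy as np

def layerwise_weights(anchors, num_layers=32):
    """Spread the anchor values of a gradient list evenly across the layers
    and linearly interpolate between them."""
    anchor_positions = np.linspace(0, num_layers - 1, num=len(anchors))
    return np.interp(np.arange(num_layers), anchor_positions, anchors)

# First model of Eggs-and-Bread-RP-pt.1: strong in the earliest layers,
# tapering to ~0.0825 for the rest of the stack.
print(layerwise_weights([0.33, 0.0825, 0.0825, 0.0825, 0.0825]).round(3))
```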

The following models were used as inspiration:

Instruct Format

Llama 3

Settings/Presets

Instruct/Context

Virt-io's SillyTavern Presets are recommended.

Sampler Settings

Here are the currently recommended settings for more creativity.

Top K: 60
Min P: 0.035
Rep Pen: 1.05
Rep Pen Range: 2048
Pres Pen: 0.15
Smoothing Factor: 0.25
Dyna Temp:
  Min Temp: 0.75
  Max Temp: 1.5
  Expo: 0.85

There are no known presets for stricter instruction adherence yet. Please recommend some if you can!
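Outside SillyTavern, these samplers can be approximated programmatically. Below is a minimal, hedged sketch using llama-cpp-python; the quant filename and the mapping of settings onto arguments are assumptions, and smoothing factor / dynamic temperature have no direct equivalent here, so a fixed temperature inside the suggested DynaTemp range is used instead:

```python
# Hedged sketch, not an official usage recipe for this model.
from llama_cpp import Llama

# Download a quant straight from the Hub (the filename pattern is an example).
llm = Llama.from_pretrained(
    repo_id="QuantFactory/L3-Deluxe-Scrambled-Eggs-On-Toast-8B-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=8192,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in character."}],
    temperature=1.0,        # stand-in for DynaTemp 0.75-1.5
    top_k=60,
    min_p=0.035,
    repeat_penalty=1.05,    # Rep Pen; Rep Pen Range isn't exposed here
    presence_penalty=0.15,  # Pres Pen
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```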

Quants

Weighted quants by:

Static quants by:

Secret Sauce

Models Used

L3-Deluxe-Scrambled-Eggs-On-Toast-8B is a merge of the following models using LazyMergekit:

YAML Configs Used

The following YAML configs were used to make this model.
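If you want to reproduce a step, each config below can be fed to mergekit (the library LazyMergekit wraps). A minimal sketch, assuming mergekit's current Python API; the config filename is a placeholder:

```python
# Hedged sketch: running one of the YAML configs below with mergekit.
# The equivalent CLI route is `mergekit-yaml <config> <output-dir>`.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("eggs-and-bread-rp-pt1.yaml", encoding="utf-8") as f:
    config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    config,
    out_path="./Eggs-and-Bread-RP-pt.1",
    options=MergeOptions(
        cuda=False,          # set True if a GPU is available
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```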

Eggs-and-Bread-RP-pt.1

models:
  - model: Sao10K/L3-8B-Stheno-v3.2
  - model: ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: Nitral-AI/Hathor_Stable-v0.2-L3-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: openlynn/Llama-3-Soliloquy-8B-v2
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-RP-pt.2

models:
  - model: Sao10K/L3-8B-Stheno-v3.2
  - model: ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: Nitral-AI/Hathor_Stable-v0.2-L3-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: openlynn/Llama-3-Soliloquy-8B-v2
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-RP

models:
  - model: Casual-Autopsy/Eggs-and-Bread-RP-pt.1
  - model: Casual-Autopsy/Eggs-and-Bread-RP-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-RP-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16

Eggs-and-Bread-IQ-pt.1

models:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
  - model: turboderp/llama3-turbcat-instruct-8b
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: TIGER-Lab/MAmmoTH2-8B-Plus
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: jondurbin/bagel-8b-v1.0
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: abacusai/Llama-3-Smaug-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-IQ-pt.2

models:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
  - model: turboderp/llama3-turbcat-instruct-8b
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: TIGER-Lab/MAmmoTH2-8B-Plus
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: jondurbin/bagel-8b-v1.0
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: abacusai/Llama-3-Smaug-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-IQ

models:
  - model: Casual-Autopsy/Eggs-and-Bread-IQ-pt.1
  - model: Casual-Autopsy/Eggs-and-Bread-IQ-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-IQ-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16

Eggs-and-Bread-Uncen-pt.1

models:
  - model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
  - model: AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: lodrick-the-lafted/Limon-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: vicgalle/Configurable-Llama-3-8B-v0.3
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Undi95/Llama3-Unholy-8B-OAS
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: Undi95/Unholy-8B-DPO-OAS
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-Uncen-pt.2

models:
  - model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
  - model: AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: lodrick-the-lafted/Limon-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: vicgalle/Configurable-Llama-3-8B-v0.3
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Undi95/Llama3-Unholy-8B-OAS
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: Undi95/Unholy-8B-DPO-OAS
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-Uncen

models:
  - model: Casual-Autopsy/Eggs-and-Bread-Uncen-pt.1
  - model: Casual-Autopsy/Eggs-and-Bread-Uncen-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-Uncen-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16

Scrambled-Eggs-On-Toast-1

models:
  - model: Casual-Autopsy/Eggs-and-Bread-RP
  - model: Casual-Autopsy/Eggs-and-Bread-Uncen
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-RP
parameters:
  t:
    - value: [0.1, 0.15, 0.2, 0.4, 0.6, 0.4, 0.2, 0.15, 0.1]
dtype: bfloat16

L3-Scrambled-Eggs-On-Toast-8B

models:
  - model: Casual-Autopsy/Scrambled-Eggs-On-Toast-1
  - model: Casual-Autopsy/Eggs-and-Bread-IQ
merge_method: slerp
base_model: Casual-Autopsy/Scrambled-Eggs-On-Toast-1
parameters:
  t:
    - value: [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7]
dtype: bfloat16

Eggs-and-Bread-Misc1-pt.1

models:
  - model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
  - model: migtissera/Tess-2.0-Llama-3-8B
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: defog/llama-3-sqlcoder-8b
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: HPAI-BSC/Llama3-Aloe-8B-Alpha
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: maldv/llama-3-fantasy-writer-8b
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: lodrick-the-lafted/Olethros-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-Misc1-pt.2

models:
  - model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
  - model: migtissera/Tess-2.0-Llama-3-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: defog/llama-3-sqlcoder-8b
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: HPAI-BSC/Llama3-Aloe-8B-Alpha
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: maldv/llama-3-fantasy-writer-8b
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: lodrick-the-lafted/Olethros-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-Misc1

models:
  - model: Casual-Autopsy/Eggs-and-Bread-Misc1-pt.1
  - model: Casual-Autopsy/Eggs-and-Bread-Misc1-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-Misc1-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16

Eggs-and-Bread-FFT-pt.1

models:
  - model: Magpie-Align/Llama-3-8B-ShareGPT-112K
  - model: Magpie-Align/Llama-3-8B-WildChat
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: Magpie-Align/Llama-3-8B-Tulu-330K
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: Magpie-Align/Llama-3-8B-OpenHermes-243K
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Magpie-Align/Llama-3-8B-WizardLM-196K
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: Magpie-Align/Llama-3-8B-Ultrachat-200K
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: Magpie-Align/Llama-3-8B-ShareGPT-112K
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-FFT-pt.2

models:
  - model: Magpie-Align/Llama-3-8B-ShareGPT-112K
  - model: Magpie-Align/Llama-3-8B-WildChat
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: Magpie-Align/Llama-3-8B-Tulu-330K
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: Magpie-Align/Llama-3-8B-OpenHermes-243K
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Magpie-Align/Llama-3-8B-WizardLM-196K
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: Magpie-Align/Llama-3-8B-Ultrachat-200K
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: Magpie-Align/Llama-3-8B-ShareGPT-112K
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-FFT

models:
  - model: Casual-Autopsy/Eggs-and-Bread-FFT-pt.1
  - model: Casual-Autopsy/Eggs-and-Bread-FFT-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-FFT-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16

Eggs-and-Bread-Misc2-pt.1

models:
  - model: refuelai/Llama-3-Refueled
  - model: Danielbrdz/Barcenas-Llama3-8b-ORPO
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: migtissera/Llama-3-8B-Synthia-v3.5
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: chujiezheng/LLaMA3-iterative-DPO-final-ExPO
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: chargoddard/prometheus-2-llama-3-8b
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: refuelai/Llama-3-Refueled
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-Misc2-pt.2

models:
  - model: refuelai/Llama-3-Refueled
  - model: Danielbrdz/Barcenas-Llama3-8b-ORPO
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: migtissera/Llama-3-8B-Synthia-v3.5
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: chujiezheng/LLaMA3-iterative-DPO-final-ExPO
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: chargoddard/prometheus-2-llama-3-8b
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: refuelai/Llama-3-Refueled
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16

Eggs-and-Bread-Misc2

models:
  - model: Casual-Autopsy/Eggs-and-Bread-Misc2-pt.1
  - model: Casual-Autopsy/Eggs-and-Bread-Misc2-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-Misc2-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16

Scrambled-Eggs-On-Toast-2

models:
  - model: Casual-Autopsy/Eggs-and-Bread-Misc1
  - model: Casual-Autopsy/Eggs-and-Bread-Misc2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-Misc1
parameters:
  t:
    - value: [0.1, 0.15, 0.2, 0.4, 0.6, 0.4, 0.2, 0.15, 0.1]
dtype: bfloat16

Scrambled-Eggs-On-Toast-3

models:
  - model: Casual-Autopsy/Scrambled-Eggs-On-Toast-2
  - model: Casual-Autopsy/Eggs-and-Bread-FFT
merge_method: slerp
base_model: Casual-Autopsy/Scrambled-Eggs-On-Toast-2
parameters:
  t:
    - value: [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7]
dtype: bfloat16

L3-Deluxe-Scrambled-Eggs-On-Toast-8B

models:
  - model: Casual-Autopsy/L3-Scrambled-Eggs-On-Toast-8B
  - model: Casual-Autopsy/Scrambled-Eggs-On-Toast-3
merge_method: slerp
base_model: Casual-Autopsy/L3-Scrambled-Eggs-On-Toast-8B
parameters:
  t:
    - value: [0.2, 0.25, 0.3, 0.4, 0.3, 0.25, 0.2, 0.25, 0.3, 0.4, 0.3, 0.25, 0.2]
dtype: bfloat16