
(Maybe I'll change the waifu picture later.)

GGUF/Exl2 quants

Check for v1.15A and v1.15B

Experimental RP-oriented MoE. The idea was to get a model equal to or better than Mixtral 8x7B and its finetunes at RP/ERP tasks.

Llama 3 SnowStorm v1.0 4x8B

base_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1-OAS
gate_mode: random
dtype: bfloat16
experts_per_token: 2
experts:
  - source_model: ChaoticNeutrals_Poppy_Porpoise-v0.7-L3-8B
  - source_model: NeverSleep_Llama-3-Lumimaid-8B-v0.1-OAS
  - source_model: openlynn_Llama-3-Soliloquy-8B-v2
  - source_model: Sao10K_L3-8B-Stheno-v3.1
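With gate_mode: random and experts_per_token: 2, each token is routed through the two highest-scoring of the four experts, with the selected gate weights renormalized (Mixtral-style). A minimal sketch of that top-2 routing step, not the actual mergekit/transformers implementation:

```python
import math

def top2_route(gate_logits):
    """Pick the two highest-scoring experts and renormalize their
    gate weights with a softmax over just those two."""
    top2 = sorted(range(len(gate_logits)),
                  key=lambda i: gate_logits[i], reverse=True)[:2]
    exps = [math.exp(gate_logits[i]) for i in top2]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(top2, exps)]

# Four experts (one per source model); only two process each token.
routing = top2_route([0.2, 1.5, -0.3, 0.9])
print(routing)  # experts 1 and 3 are selected, weights sum to 1
```

Since the gate is randomly initialized here rather than trained, routing is essentially arbitrary but stable, which is a common shortcut for merged (as opposed to trained-from-scratch) MoE models.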

Models used

Difference (from ChaoticSoliloquy v1.5)

Vision

llama3_mmproj


Prompt format: Llama 3
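The Llama 3 instruct template wraps each turn in header tokens. A minimal single-turn formatter for that template (these are the standard Llama 3 special tokens, nothing specific to this merge; in practice the tokenizer's chat template handles this for you):

```python
def llama3_prompt(system, user):
    """Build a single-turn Llama 3 instruct prompt, ending at the
    point where the assistant's reply should be generated."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

print(llama3_prompt("You are a roleplay assistant.", "Hello!"))
```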

Model size: 24.9B params (Safetensors) · Tensor type: BF16

Model tree for xxx777xxxASD/L3_SnowStorm_4x8B
