8bpw exl2 quantization of MarsupialAI/Monstral-123B
This model is a slerp merge of Behemoth and Magnum V4. The intention was to moisten up Behemoth a bit and give it some of that Claude flavor, without being nearly as thirsty as Magnum. I feel it succeeds in both areas.
Mergefuel:
- TheDrummer/Behemoth-123B-v1
- anthracite-org/magnum-v4-123b
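For reference, the slerp (spherical linear interpolation) behind merges like this can be sketched as below. This is a generic illustration, not the actual merge recipe: real merges (e.g. via mergekit) operate per-tensor with their own interpolation schedule, and the function name and list-of-floats representation here are purely illustrative.

```python
import math

def slerp(v0, v1, t):
    """Spherically interpolate between two weight vectors at fraction t.

    Illustrative sketch only; an actual model merge applies this
    per-tensor across two checkpoints' weights.
    """
    n0 = math.sqrt(sum(x * x for x in v0))
    n1 = math.sqrt(sum(x * x for x in v1))
    # Angle between the two (normalized) vectors
    dot = sum(a * b for a, b in zip(v0, v1)) / (n0 * n1)
    dot = max(-1.0, min(1.0, dot))
    theta = math.acos(dot)
    if theta < 1e-6:
        # Nearly parallel vectors: plain linear interpolation is stable
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return [s0 * a + s1 * b for a, b in zip(v0, v1)]
```

Unlike a plain weighted average, slerp interpolates along the arc between the two weight vectors, which preserves their magnitude better when they point in different directions.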
This model is uncensored and perfectly capable of generating objectionable material. It is far less likely to return NSFW content for SFW prompts than Magnum V4, but you should still exercise caution. As with any LLM, no factual claims made by the model should be taken at face value. You know that boilerplate safety disclaimer that most professional models have? Assume this has it too. This model is for entertainment purposes only.
Original: https://huggingface.co/MarsupialAI/Monstral-123B
GGUFs: https://huggingface.co/MarsupialAI/Monstral-123B_iMat_GGUF
EXL2: https://huggingface.co/MarsupialAI/Monstral-123B_4.0bpw_EXL2
Prompt Format
Mistral or Metharme
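A minimal sketch of what those two templates typically look like, assuming the standard Mistral instruct tags and the usual Metharme role tokens; check the base model's tokenizer config / chat template for the authoritative format:

```python
def mistral_prompt(user_message: str) -> str:
    # Mistral-style instruct template (assumed standard form)
    return f"[INST] {user_message} [/INST]"

def metharme_prompt(system: str, user_message: str) -> str:
    # Metharme-style role tokens (assumed); the model generates
    # its reply after the <|model|> tag
    return f"<|system|>{system}<|user|>{user_message}<|model|>"
```

Most inference frontends (e.g. those that load EXL2 quants) let you select or paste a template like these rather than formatting prompts by hand.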