
A new and improved version is available: prefer the new version 0.72 here!

My upload speeds have been cooked and unstable lately; realistically, I'd need to move to get a better provider.
If you want to and are able, you can support my various endeavors here (Ko-fi).
I apologize for disrupting your experience.

"It keeps getting better!"

"One of the top recent performers in the Chaiverse Leaderboard!"

GGUF-IQ-Imatrix quants for ChaoticNeutrals/Poppy_Porpoise-v0.7-L3-8B.
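If you prefer the command line over the web UI, a single quant can be fetched with `huggingface-cli`. This is a minimal sketch; the exact `.gguf` filename below is an assumption, so check the repo's file list for the quant you actually want:

```
# Sketch: download one quant from this repo with huggingface-cli.
# The filename is an assumption; confirm it against the repo's file list.
huggingface-cli download Lewdiculous/Poppy_Porpoise-v0.7-L3-8B-GGUF-IQ-Imatrix \
  Poppy_Porpoise-v0.7-L3-8B-Q4_K_M-imat.gguf --local-dir .
```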

Updated! These quants have been redone with the fixes from llama.cpp/pull/6920 in mind.
Use KoboldCpp version 1.64 or higher.

Compatible SillyTavern presets here (recommended/simple) or here (Virt's).
Use the latest version of KoboldCpp and the provided presets.
This is all still highly experimental; let the authors know how it performs for you, as feedback is more important than ever now.

For GPUs with 8 GB of VRAM, I recommend the Q4_K_M-imat quant for context sizes up to 12288.
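As a minimal sketch of that setup, assuming the quant file sits in the current directory (the filename and the `--gpulayers` value are illustrative assumptions, not tested settings), launching KoboldCpp looks like this:

```
# Sketch: run KoboldCpp with the recommended quant and context size.
# Filename and --gpulayers count are assumptions; tune for your GPU.
python koboldcpp.py --model Poppy_Porpoise-v0.7-L3-8B-Q4_K_M-imat.gguf \
  --contextsize 12288 --usecublas --gpulayers 33
```

Lowering `--gpulayers` trades speed for VRAM headroom if you run out of memory at the full 12288 context.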

Original model information:


Update: Vision/multimodal capabilities are available again!

If you want to use vision functionality:

  • You must use the latest version of KoboldCpp.

To use the multimodal/vision capabilities of this model, you need to load the specified mmproj file, which can be found inside this model repo: https://huggingface.co/ChaoticNeutrals/Llava_1.5_Llama3_mmproj

  • You can load the mmproj by using the corresponding section in the interface (a command-line alternative is sketched below):

(Screenshot: the mmproj loading section in the KoboldCpp interface.)
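If you'd rather skip the GUI, KoboldCpp also accepts the projector on the command line via `--mmproj`. This is a sketch under assumptions: both filenames below are placeholders for the files you actually downloaded from the two repos.

```
# Sketch: load the model together with its mmproj file for vision.
# Both filenames are assumptions; substitute your downloaded files.
python koboldcpp.py --model Poppy_Porpoise-v0.7-L3-8B-Q4_K_M-imat.gguf \
  --mmproj llava-1.5-llama3-mmproj.gguf --contextsize 12288
```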

Model details: GGUF format · 8.03B params · llama architecture.
Available quantizations: 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.
