GGUF Quant of Jellywibble/lora_120k_pref_data_ep2
Both static and imatrix (iMat) quants are provided.
Model size: 8.03B params
Architecture: llama

Available GGUF quants:
- 4-bit: Q4_K_M
- 5-bit: Q5_K_M
- 6-bit: Q6_K
- 8-bit: Q8_0
- 16-bit: F16
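
To run one of these quants, you can download the file with `huggingface_hub` and load it with a llama.cpp-based runtime such as `llama-cpp-python`. The sketch below is a minimal example, not part of this repo's documentation; the `.gguf` filename is an assumption, so check the Files tab for the actual file names before running it.

```python
# Minimal sketch: fetch a quant from this repo and run a chat completion locally.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

repo_id = "bluuwhale/Jellywibble-lora_120k_pref_data_ep2-GGUF"
# Assumed filename -- replace with the real .gguf name listed under Files.
filename = "lora_120k_pref_data_ep2-Q4_K_M.gguf"

model_path = hf_hub_download(repo_id=repo_id, filename=filename)

# 8.03B params at Q4_K_M is roughly a 5 GB file; adjust n_ctx to your memory budget.
llm = Llama(model_path=model_path, n_ctx=4096)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```

Lower-bit quants (Q4_K_M, Q5_K_M) trade some quality for a smaller memory footprint, while Q8_0 and F16 stay closer to the original weights at a larger size.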
Part of the collection "GGUF Quantize Model 🖥️" (GGUF Model Quantize Weight · 5 items · updated Aug 5).