Joseph717171/Gemma-2-2b-it-OQ8_0-F32.EF32.IQ4_K_M-8_0-GGUF
Tags: GGUF, imatrix
1 contributor · History: 11 commits

Latest commit: 022149e (verified, 5 days ago) by Joseph717171 — "Upload gemma-2-2b-it-OQ8_0.EF32.IQ6_k.gguf with huggingface_hub"
File                                    Size       Last commit (all 5 days ago)
.gitattributes                          1.95 kB    Upload gemma-2-2b-it-OQ8_0.EF32.IQ6_k.gguf with huggingface_hub
README.md                               195 Bytes  Create README.md
gemma-2-2B-it-OF32.EF32.IQ8_0.gguf      4.52 GB    (LFS) Upload gemma-2-2B-it-OF32.EF32.IQ8_0.gguf with huggingface_hub
gemma-2-2B-it-OQ8_0.EF32.IQ8_0.gguf     2.78 GB    (LFS) Rename gemma-2-2B-it-OF32.EF32.IQ8_0.gguf to gemma-2-2B-it-OQ8_0.EF32.IQ8_0.gguf
gemma-2-2b-it-OF32.EF32.IQ4_K_M.gguf    3.58 GB    (LFS) Upload gemma-2-2b-it-OF32.EF32.IQ4_K_M.gguf with huggingface_hub
gemma-2-2b-it-OF32.EF32.IQ6_k.gguf      4.03 GB    (LFS) Upload gemma-2-2b-it-OF32.EF32.IQ6_k.gguf with huggingface_hub
gemma-2-2b-it-OQ8_0.EF32.IQ4_K_M.gguf   1.85 GB    (LFS) Upload gemma-2-2b-it-OQ8_0.EF32.IQ4_K_M.gguf with huggingface_hub
gemma-2-2b-it-OQ8_0.EF32.IQ6_k.gguf     2.29 GB    (LFS) Upload gemma-2-2b-it-OQ8_0.EF32.IQ6_k.gguf with huggingface_hub
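Each GGUF file above can be fetched directly from the Hub. A minimal sketch, assuming the standard Hugging Face file-resolution URL pattern (`https://huggingface.co/{repo_id}/resolve/{revision}/{filename}`); the helper name `gguf_url` is illustrative, not part of any library:

```python
# Sketch: build the direct-download URL for one of the GGUF files in this repo.
# Assumes the standard Hugging Face "resolve" endpoint URL pattern.

REPO_ID = "Joseph717171/Gemma-2-2b-it-OQ8_0-F32.EF32.IQ4_K_M-8_0-GGUF"

def gguf_url(filename: str, revision: str = "main") -> str:
    """Return the direct-download URL for a file in this repository."""
    return f"https://huggingface.co/{REPO_ID}/resolve/{revision}/{filename}"

print(gguf_url("gemma-2-2b-it-OQ8_0.EF32.IQ4_K_M.gguf"))
```

In practice, `huggingface_hub.hf_hub_download(repo_id=..., filename=...)` handles caching and LFS resolution and is usually preferable to fetching the URL by hand.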