Joseph717171/Gemma-2-2b-it-OQ8_0-F32.EF32.IQ4_K_M-8_0-GGUF

Tags: GGUF · Inference Endpoints · imatrix · conversational
Files and versions (revision 46874c4)
1 contributor · History: 16 commits
Latest commit: Update README.md (46874c4, verified) by Joseph717171, about 2 months ago
File                                   Size        LFS   Updated              Last commit
.gitattributes                         1.95 kB           about 2 months ago   Upload gemma-2-2b-it-OQ8_0.EF32.IQ6_k.gguf with huggingface_hub
README.md                              263 Bytes         about 2 months ago   Update README.md
gemma-2-2B-it-OF32.EF32.IQ8_0.gguf     4.52 GB     LFS   about 2 months ago   Upload gemma-2-2B-it-OF32.EF32.IQ8_0.gguf with huggingface_hub
gemma-2-2B-it-OQ8_0.EF32.IQ8_0.gguf    2.78 GB     LFS   about 2 months ago   Rename gemma-2-2B-it-OF32.EF32.IQ8_0.gguf to gemma-2-2B-it-OQ8_0.EF32.IQ8_0.gguf
gemma-2-2b-it-OF32.EF32.IQ4_K_M.gguf   3.58 GB     LFS   about 2 months ago   Upload gemma-2-2b-it-OF32.EF32.IQ4_K_M.gguf with huggingface_hub
gemma-2-2b-it-OF32.EF32.IQ6_K.gguf     4.03 GB     LFS   about 2 months ago   Rename gemma-2-2b-it-OF32.EF32.IQ6_k.gguf to gemma-2-2b-it-OF32.EF32.IQ6_K.gguf
gemma-2-2b-it-OQ8_0.EF32.IQ4_K_M.gguf  1.85 GB     LFS   about 2 months ago   Upload gemma-2-2b-it-OQ8_0.EF32.IQ4_K_M.gguf with huggingface_hub
gemma-2-2b-it-OQ8_0.EF32.IQ6_k.gguf    2.29 GB     LFS   about 2 months ago   Upload gemma-2-2b-it-OQ8_0.EF32.IQ6_k.gguf with huggingface_hub
gemma-2-2b-it-OQ8_0.EF32.IQ8_0.gguf    2.78 GB     LFS   about 2 months ago   Upload gemma-2-2b-it-OQ8_0.EF32.IQ8_0.gguf with huggingface_hub
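
The commit messages above note that the GGUF files were uploaded with huggingface_hub, so they can be fetched the same way. The snippet below is a minimal sketch, not part of the repository: the repo_id and filename are taken from the listing above, while the commented llama-cpp-python step is an assumption about how a GGUF quantization would typically be loaded, not something this repo prescribes.

```python
# Sketch: download one of the listed GGUF quantizations from this repo
# using huggingface_hub (the library named in the commit messages).
from huggingface_hub import hf_hub_download

repo_id = "Joseph717171/Gemma-2-2b-it-OQ8_0-F32.EF32.IQ4_K_M-8_0-GGUF"
filename = "gemma-2-2b-it-OQ8_0.EF32.IQ4_K_M.gguf"  # 1.85 GB entry in the listing above

# Downloads into the local Hugging Face cache and returns the local file path.
model_path = hf_hub_download(repo_id=repo_id, filename=filename)
print(model_path)

# Optional (assumed runtime, not specified by the repo): load the GGUF with llama-cpp-python.
# from llama_cpp import Llama
# llm = Llama(model_path=model_path, n_ctx=4096)
# out = llm.create_chat_completion(messages=[{"role": "user", "content": "Hello!"}])
# print(out["choices"][0]["message"]["content"])
```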