Lyte/RWKV-6-World-1.6B-GGUF

Text Generation · GGUF · rwkv · rwkv-6 · Inference Endpoints
License: apache-2.0
Likes: 1
Files and versions (branch main, commit cdfbd47)
1 contributor · History: 21 commits
Latest commit: Update README.md by Lyte (cdfbd47, verified), about 2 months ago
| File | Size | LFS | Last commit | Updated |
|---|---|---|---|---|
| .gitattributes | 2 kB | | Upload RWKV-6-World-1.6B-GGUF-F16.gguf with huggingface_hub | about 2 months ago |
| README.md | 1.34 kB | | Update README.md | about 2 months ago |
| RWKV-6-World-1.6B-GGUF-F16.gguf | 3.25 GB | LFS | Upload RWKV-6-World-1.6B-GGUF-F16.gguf with huggingface_hub | about 2 months ago |
| RWKV-6-World-1.6B-GGUF-Q2_K.gguf | 676 MB | LFS | Upload RWKV-6-World-1.6B-GGUF-Q2_K.gguf with huggingface_hub | about 2 months ago |
| RWKV-6-World-1.6B-GGUF-Q3_K.gguf | 823 MB | LFS | Upload RWKV-6-World-1.6B-GGUF-Q3_K.gguf with huggingface_hub | about 2 months ago |
| RWKV-6-World-1.6B-GGUF-Q4_K_M.gguf | 1.01 GB | LFS | Upload RWKV-6-World-1.6B-GGUF-Q4_K_M.gguf with huggingface_hub | about 2 months ago |
| RWKV-6-World-1.6B-GGUF-Q5_K.gguf | 1.19 GB | LFS | Upload RWKV-6-World-1.6B-GGUF-Q5_K.gguf with huggingface_hub | about 2 months ago |
| RWKV-6-World-1.6B-GGUF-Q6_K.gguf | 1.39 GB | LFS | Upload RWKV-6-World-1.6B-GGUF-Q6_K.gguf with huggingface_hub | about 2 months ago |
| RWKV-6-World-1.6B-GGUF-Q8_0.gguf | 1.77 GB | LFS | Upload RWKV-6-World-1.6B-GGUF-Q8_0.gguf with huggingface_hub | about 2 months ago |
| convert-model-to-gguf.ipynb | 179 kB | | Upload convert-model-to-gguf.ipynb | about 2 months ago |
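The quantized GGUF files above can be fetched individually with the huggingface_hub client, the same library named in the upload commit messages. Below is a minimal sketch, assuming the repository ID Lyte/RWKV-6-World-1.6B-GGUF and the Q4_K_M file from the listing; the choice of quantization and the printed cache path are illustrative, not instructions from the model card.

```python
# Sketch: download one quantization of this model with huggingface_hub.
# Repo ID and filename come from the file listing above; everything else is an assumption.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="Lyte/RWKV-6-World-1.6B-GGUF",
    filename="RWKV-6-World-1.6B-GGUF-Q4_K_M.gguf",  # ~1.01 GB per the listing
)
print(model_path)  # local cache path to the downloaded .gguf file
```

The downloaded .gguf file is then loaded by a GGUF-compatible runtime such as llama.cpp or its bindings; this repository does not specify a particular runtime or command line, so treat any specific invocation as an assumption and check the accompanying convert-model-to-gguf.ipynb notebook for how the files were produced.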