Qwen/Qwen2.5-1.5B-Instruct-GPTQ-Int4
Text Generation · Transformers · Safetensors · English · qwen2 · chat · conversational · text-generation-inference · Inference Endpoints · 4-bit precision · gptq · arXiv:2407.10671 · License: apache-2.0
Discussion #2: Why does this model take up more memory than the 17B one?
opened Oct 12 by hhgz
hhgz (Oct 12): Why does this model take up more memory than the 17B one?
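One way to make the comparison concrete is to measure the loaded footprint directly rather than going by the size of the weight shards. Below is a minimal sketch, assuming `torch`, `transformers`, and a GPTQ backend (e.g. `optimum` + `auto-gptq`) are installed; any second repo id you compare against would simply replace the one shown here.

```python
# Minimal sketch: measure the in-memory footprint of this quantized checkpoint.
# Assumes torch, transformers, and a GPTQ backend (optimum + auto-gptq) are installed.
import torch
from transformers import AutoModelForCausalLM

repo_id = "Qwen/Qwen2.5-1.5B-Instruct-GPTQ-Int4"

model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype="auto",   # the GPTQ config selects the compute dtype (typically float16)
    device_map="auto",    # place weights on the available GPU(s)
)

# Parameter/buffer memory as reported by transformers (bytes -> GiB).
print(f"{repo_id}: {model.get_memory_footprint() / 1024**3:.2f} GiB")

# Peak CUDA allocation after loading, if a GPU is present.
if torch.cuda.is_available():
    print(f"peak CUDA allocated: {torch.cuda.max_memory_allocated() / 1024**3:.2f} GiB")
```

Note that peak usage at generation time also includes the KV cache and activation buffers, so observed GPU memory can exceed the size of the weight files alone.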