This repo contains a version of Phi-3 that was quantized to AWQ using AutoAWQ. Hosting it via the TGI Docker image currently fails because TGI falls back to AutoModel, which is not compatible with AWQ. Hosting on vLLM is recommended; a serving sketch follows.
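A minimal sketch of serving the model with vLLM's OpenAI-compatible server. `<your-org>/<this-repo>` is a placeholder for this repository's model id, not a real path:

```bash
# Serve the AWQ checkpoint with vLLM's OpenAI-compatible API server.
# Replace <your-org>/<this-repo> with this repository's model id.
python -m vllm.entrypoints.openai.api_server \
    --model <your-org>/<this-repo> \
    --quantization awq \
    --trust-remote-code
```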
To run the model, you must set the trust-remote-code flag (or your serving stack's equivalent). Although the remote code comes from Microsoft (see the LICENSE information in the file), you should validate it yourself before deployment.
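For offline inference, a minimal sketch using vLLM's Python API, again with `<your-org>/<this-repo>` as a placeholder for this repository's model id:

```python
from vllm import LLM, SamplingParams

llm = LLM(
    model="<your-org>/<this-repo>",  # placeholder: this repository's model id
    quantization="awq",              # load the AWQ-quantized weights
    trust_remote_code=True,          # required: the model ships custom modeling code
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain AWQ quantization in one sentence."], params)
print(outputs[0].outputs[0].text)
```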