
EXL2 quants of Mistral-7B-Instruct-v0.3

v0.3's vocabulary is compatible with Mistral-Large-123B's, so these quants also work as draft models for speculative decoding with Mistral-Large.
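The reason a small model with a matching vocabulary is useful as a draft: in speculative decoding, the draft model cheaply proposes a few tokens, the large model verifies them in one pass, and the output is identical to decoding with the large model alone. A toy sketch of that accept/reject loop, using stand-in character-level "models" (not Mistral, purely illustrative):

```python
def draft_model(prefix):
    # Hypothetical cheap draft model: predicts the next letter of the alphabet.
    return chr(ord(prefix[-1]) + 1)

def large_model(prefix):
    # Hypothetical target model: same rule, except it skips the letter 'e'.
    nxt = chr(ord(prefix[-1]) + 1)
    return 'f' if nxt == 'e' else nxt

def speculative_decode(prefix, k, steps):
    out = prefix
    for _ in range(steps):
        # Draft model proposes k tokens autoregressively (cheap).
        proposal = out
        for _ in range(k):
            proposal += draft_model(proposal)
        # Large model verifies each proposed token; keep the longest accepted
        # prefix, then append the large model's own token at the first mismatch.
        accepted = out
        for i in range(len(out), len(proposal)):
            target = large_model(proposal[:i])
            if proposal[i] == target:
                accepted += proposal[i]
            else:
                accepted += target
                break
        else:
            # All k proposals accepted: the verify pass yields one bonus token.
            accepted += large_model(accepted)
        out = accepted
    return out

print(speculative_decode("a", 2, 3))  # identical to pure large-model decoding
```

The output always matches greedy decoding with the large model alone; the draft only changes how many large-model passes are needed, which is why a fast 7B with the same vocabulary speeds up a 123B model.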

- 2.80 bits per weight
- 3.00 bits per weight
- 4.00 bits per weight
- 4.50 bits per weight
- 5.00 bits per weight
- 6.00 bits per weight

measurement.json (the ExLlamaV2 calibration measurement file used to produce these quants; it can be reused to convert to other bitrates)
