Update README.md
README.md
@@ -31,7 +31,6 @@ Resources and Technical Documentation:
 
 Phi-3 is available in several formats, catering to different computational needs:
 
-- **ggml-model-q4_0.gguf**: 4-bit quantization, offering a compact size of 2.1 GB for efficient inference.
 - **ggml-model-q8_0.gguf**: 8-bit quantization, providing robust performance with a file size of 3.8 GB.
 - **ggml-model-f16.gguf**: Standard 16-bit floating-point format, with a larger file size of 7.2 GB for enhanced precision.
 
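For readers choosing between the remaining GGUF builds, here is a minimal sketch of loading one of them with the llama-cpp-python bindings. The library choice, the local file path, and the prompt are illustrative assumptions, not part of the README change itself.

```python
# Minimal sketch: running a Phi-3 GGUF file with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and that the quantized model
# file listed in the README has been downloaded to the current directory.
from llama_cpp import Llama

# Load the 8-bit quantized build; swap in ggml-model-f16.gguf for full
# 16-bit precision at the cost of a larger memory footprint.
llm = Llama(model_path="./ggml-model-q8_0.gguf", n_ctx=2048)

# Run a short completion to confirm the model loads and generates text.
output = llm("Explain quantization in one sentence:", max_tokens=64)
print(output["choices"][0]["text"])
```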