ajibawa-2023 committed
Commit: cf9a561
Parent(s): 81316a4
Update README.md
README.md CHANGED
@@ -20,6 +20,7 @@ Entire dataset was trained on Azure 4 x A100 80GB. For 3 epoch, training took 42
 
 This is a full fine tuned model. Links for quantized models are given below.
 
+
 **GPTQ GGML & AWQ**
 
 GPTQ: [Link](https://huggingface.co/TheBloke/Python-Code-33B-GPTQ)
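
As a minimal sketch of how the linked GPTQ quantized model might be loaded, assuming a `transformers` install with GPTQ support (via `optimum` and `auto-gptq`); the model ID comes from the link in the diff above, and the prompt text is purely illustrative:

```python
# Sketch: load the GPTQ-quantized model referenced in the README diff.
# Assumes transformers + optimum + auto-gptq are installed and a CUDA GPU is available.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/Python-Code-33B-GPTQ"  # model ID from the link above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Illustrative prompt; the model card's own prompt format may differ.
prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```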