shimmyshimmer committed 9858419 (parent: 76e5f5d): Update README.md
README.md CHANGED

@@ -15,7 +15,7 @@ tags:
 
 # Finetune Phi-3, Llama 3, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
 
-Directly quantized 4bit model with `bitsandbytes`.
+Directly quantized 4bit model with `bitsandbytes`. We Mistralfied the model to ensure it could be used on many platforms.
 
 We have a Google Colab Tesla T4 notebook for **Phi-3 (mini)** here: https://colab.research.google.com/drive/1vIrqH5uYDQwsJ4-OO3DErvuv4pBgVwk4?usp=sharing
 And another notebook for **Phi-3 (medium)** here: https://colab.research.google.com/drive/1hhdhBa1j_hsymiW9m-WzxQtgqTH_NHqi?usp=sharing