Commit e00d456 by skkjodhpur (1 parent: 71184f6): Update README.md

README.md CHANGED
# Mistral-Nemo-12b-Unsloth-2x-Faster-Finetuning

# Model Overview

- **Developed by:** skkjodhpur
- **License:** Apache-2.0
- **Base Model:** unsloth/mistral-nemo-base-2407-bnb-4bit
- **Libraries Used:** Unsloth, Hugging Face's TRL (Transformer Reinforcement Learning) library

**Model Description**

The Mistral-Nemo-12b model has been fine-tuned for text generation tasks. Fine-tuning was performed with the Unsloth optimization framework, which accelerates training and delivers roughly 2x faster fine-tuning than conventional methods. The training loop builds on Hugging Face's TRL library, with the goal of producing high-quality generated text.
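The sketch below illustrates this workflow: load the 4-bit base model with Unsloth, attach LoRA adapters, and train with TRL's `SFTTrainer`. The dataset path, LoRA hyperparameters, and training arguments are illustrative assumptions rather than the exact recipe used for this model, and some `SFTTrainer` argument names vary across TRL versions.

```python
# Illustrative fine-tuning sketch with Unsloth + TRL (assumed settings, not the
# exact recipe used for this model).
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

max_seq_length = 2048

# Load the 4-bit base model listed in the overview above.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/mistral-nemo-base-2407-bnb-4bit",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Any dataset with a "text" column works; train.jsonl here is a placeholder.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=60,
        learning_rate=2e-4,
        logging_steps=10,
        output_dir="outputs",
    ),
)
trainer.train()
```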

**Features**

- **Language:** English
- **Capabilities:** Text generation, transformers-based inference (see the usage sketch after this list)
- **Fine-tuning Details:** The fine-tuning process focused on improving inference speed while maintaining or enhancing the quality of the generated text.
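For transformers-based inference, the fine-tuned weights can be loaded like any other causal language model. The repository id below is assumed from the model name above and may differ from the actual Hub path; if the upload contains LoRA adapters rather than merged weights, load them with the `peft` library on top of the base model instead.

```python
# Minimal text-generation sketch with transformers (repo id is an assumption).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "skkjodhpur/Mistral-Nemo-12b-Unsloth-2x-Faster-Finetuning"  # assumed Hub path

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",           # spread layers across available GPUs
    torch_dtype=torch.bfloat16,  # use bf16 weights where supported
)

prompt = "Write a short note on why 4-bit quantization speeds up fine-tuning."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```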

# Uploaded model

- **Developed by:** skkjodhpur