---
license: apache-2.0
datasets:
- databricks/databricks-dolly-15k
pipeline_tag: text-generation
---

# Instruct_Mixtral-8x7B-v0.1_Dolly15K

Fine-tuned from Mixtral-8x7B-v0.1 on the databricks/databricks-dolly-15k dataset, split into 85% training, 14.9% validation, and 0.1% test. Trained for 1.0 epoch with QLoRA and a 1024-token context window.
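
As a rough illustration, an 85% / 14.9% / 0.1% split like the one described above can be produced with the Hugging Face `datasets` library. This is a minimal sketch, not the exact code used to train this model; the seed and the two-step split are assumptions.

```python
from datasets import load_dataset

# Load the full Dolly 15k instruction dataset (it ships as a single "train" split).
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")

# Carve off 15% as a holdout, then split that holdout so that
# validation is ~14.9% and test is ~0.1% of the full dataset.
first = dolly.train_test_split(test_size=0.15, seed=42)
holdout = first["test"].train_test_split(test_size=0.1 / 15.0, seed=42)

train_ds = first["train"]    # ~85% of examples
val_ds = holdout["train"]    # ~14.9% of examples
test_ds = holdout["test"]    # ~0.1% of examples
print(len(train_ds), len(val_ds), len(test_ds))
```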

# Model Details
* **Trained by**: [Brillibits](https://www.youtube.com/@Brillibits)
* **Model type:** **Instruct_Mixtral-8x7B-v0.1_Dolly15K** is an auto-regressive sparse mixture-of-experts language model based on the Mixtral transformer architecture.
* **Language(s)**: English
* **License for Instruct_Mixtral-8x7B-v0.1_Dolly15K**: apache-2.0

# Prompting

## Prompt Template With Context

```
Write a 10-line poem about a given topic

Input:

The topic is about racecars

Output:
```

## Prompt Template Without Context

```
Who was the second president of the United States?

Output:
```
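
As a usage sketch, the templates above can be filled in and passed to the model with `transformers`. The repo id below is assumed to match this model card, and the generation settings are illustrative, not the settings used to evaluate the model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id for this card; adjust if the hosted name differs.
model_id = "Brillibits/Instruct_Mixtral-8x7B-v0.1_Dolly15K"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# "With context" template: instruction, then Input:, then Output:
prompt = (
    "Write a 10-line poem about a given topic\n\n"
    "Input:\n\n"
    "The topic is about racecars\n\n"
    "Output:"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens after the prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                       skip_special_tokens=True))
```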

## Professional Assistance

This model and others like it are great, but LLMs hold the most promise when they are applied to custom data to automate a wide variety of tasks.

If you have a dataset and want to see whether that data could be applied to automate some tasks, and you are looking for professional assistance, contact me [here](mailto:[email protected]).