Update README.md
README.md CHANGED
@@ -6,6 +6,17 @@ tags:
 model-index:
 - name: out
   results: []
+datasets:
+- cognitivecomputations/Dolphin-2.9
+- teknium/OpenHermes-2.5
+- m-a-p/CodeFeedback-Filtered-Instruction
+- cognitivecomputations/dolphin-coder
+- cognitivecomputations/samantha-data
+- HuggingFaceH4/ultrachat_200k
+- microsoft/orca-math-word-problems-200k
+- abacusai/SystemChat-1.1
+- Locutusque/function-calling-chatml
+- internlm/Agent-FLAN
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -28,6 +39,19 @@ The base model has 8k context, and the full-weight fine-tuning was with 4k sequence length.
 
 It took 2.5 days on 8x L40S provided by Crusoe Cloud
 
+This model was trained FFT on all parameters, using ChatML prompt template format.
+
+example:
+
+```
+<|im_start|>system
+You are Dolphin, a helpful AI assistant.<|im_end|>
+<|im_start|>user
+{prompt}<|im_end|>
+<|im_start|>assistant
+
+```
+
 Dolphin-2.9 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.
 
 Dolphin is uncensored. I have filtered the dataset to remove alignment and bias. This makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service. It will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models You are responsible for any content you create using this model. Enjoy responsibly.
@@ -199,4 +223,4 @@ The following hyperparameters were used during training:
 - Transformers 4.40.0
 - Pytorch 2.2.2+cu121
 - Datasets 2.18.0
-- Tokenizers 0.19.1
+- Tokenizers 0.19.1
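
The framework versions in the final hunk can be pinned to approximate the training environment. A sketch of a requirements fragment (file name and exact pinning style assumed) matching those lines:

```
transformers==4.40.0
torch==2.2.2
datasets==2.18.0
tokenizers==0.19.1
```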
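
The ChatML template added in this diff can be assembled programmatically. Below is a minimal sketch; the `build_chatml_prompt` helper is hypothetical (not part of the model's tooling) and simply reproduces the template from the model card:

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML prompt matching the template in the model card.

    The string ends with an open assistant turn, so generation
    continues from the assistant's side.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )


prompt = build_chatml_prompt("You are Dolphin, a helpful AI assistant.", "Hello!")
print(prompt)
```

When the model's tokenizer ships a ChatML chat template, `tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)` in Transformers should produce an equivalent string, which is the more robust route in practice.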