---
license: apache-2.0
pipeline_tag: text-generation
tags:
- safetensors
- mergekit
- merge
- mistral
- not-for-all-audiences
- nsfw
- rp
- roleplay
language:
- en
---

# This model is recommended for RP, but it works as an assistant as well. Please give it a try. This version has less GPTisms.

---

### Prompt Format:

- **Extended Alpaca Format**, as in [lemonilia/LimaRP-Mistral-7B-v0.1](https://huggingface.co/lemonilia/LimaRP-Mistral-7B-v0.1).

Use *### Response: (length = huge)*, for example, to increase response length.
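The extended format simply appends a length modifier to the standard Alpaca response header. A minimal sketch of building such a prompt (the `build_prompt` helper is hypothetical, for illustration only; exact template details follow the LimaRP convention and are an assumption, not this model's spec):

```python
def build_prompt(instruction, user_input="", length=""):
    # Hypothetical helper: assembles an Alpaca-style prompt with the
    # optional "(length = ...)" modifier described above.
    response_header = "### Response:"
    if length:
        response_header = "### Response: (length = {})".format(length)
    parts = ["### Instruction:", instruction]
    if user_input:
        parts += ["### Input:", user_input]
    parts += [response_header, ""]
    return "\n".join(parts)
```

For example, `build_prompt("Continue the roleplay.", length="huge")` yields a prompt ending in the extended response header, which nudges the model toward longer replies.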

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: .\Endevor_EndlessRP_v1
  - model: kubernetes-bad/good-robot+.\toxic-dpo-v0.1-NoWarning-lora # Removes most GPTisms. The Toxic DPO is a LoRA I fine-tuned myself.
    parameters:
      weight: 0.6
      density: 0.53
  - model: rwitz/go-bruins-v2+Undi95/Mistral-7B-smoll_pippa-lora # Maintains RP stability.
    parameters:
      weight: 0.4
      density: 0.53
merge_method: dare_ties
base_model: .\Endevor_EndlessRP_v1
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```
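Conceptually, the DARE step of `dare_ties` randomly drops a fraction (1 − density) of each model's delta from the base, rescales the surviving entries by 1/density, and adds the weighted deltas to the base (with `normalize: false`, the weights are applied as-is). A toy per-parameter sketch, for illustration only and not mergekit's actual implementation:

```python
import random

def dare_merge(base, deltas, weights, density, seed=0):
    """Toy DARE-style merge over flat lists of floats.

    base    -- base model parameters
    deltas  -- per-model deltas (model minus base), each same length as base
    weights -- per-model mixing weights (unnormalized, as in the config)
    density -- fraction of delta entries to keep; the rest are dropped
    """
    rng = random.Random(seed)
    merged = list(base)
    for delta, w in zip(deltas, weights):
        for i, d in enumerate(delta):
            if rng.random() < density:        # keep entry with prob = density
                merged[i] += w * d / density  # rescale survivors by 1/density
    return merged
```

With `density: 1.0` nothing is dropped and the result reduces to a plain weighted delta sum; lower densities sparsify each contribution, which is what lets the two donor models above coexist without interfering as much.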
As this merge focuses mostly on RP, please don't expect it to be smart with riddles or logic tests.