Update README.md
README.md CHANGED
@@ -7,17 +7,19 @@ tags:
 ## Lumimaid 0.2
 <img src="https://cdn-uploads.huggingface.co/production/uploads/630dfb008df86f1e5becadc3/WP3pcYWOUoCbxxg0SyWeH.png" alt="Image" style="display: block; margin-left: auto; margin-right: auto; width: 65%;">
 <div style="text-align: center; font-size: 30px;">
-<a href="https://
-<a href="https://
-<a href="https://
-<a href="https://
+<a href="https://huggingface.co/NeverSleep/Lumimaid-v0.2-8B">8b</a> -
+<a href="https://huggingface.co/NeverSleep/Lumimaid-v0.2-12B">12b</a> -
+<a href="https://huggingface.co/NeverSleep/Lumimaid-v0.2-70B">70b</a> -
+<a href="https://huggingface.co/NeverSleep/Lumimaid-v0.2-123B">[123b]</a>
 </div>
 
 ### This model is based on: [Mistral-Large-Instruct](https://huggingface.co/mistralai/Mistral-Large-Instruct-2407)
 
 Lumimaid 0.1 -> 0.2 is a HUGE step up dataset-wise.
-
-
+
+Since some people told us our models were sloppy, Ikari decided to say fuck it and literally nuke all the chats containing the most slop.
+
+Our dataset has stayed the same since day one: we added data over time, cleaned it, and repeated the process. After not releasing a model for a while because we were never satisfied, we think it's time to come back!
 
 
 ## Credits:
@@ -25,7 +27,7 @@ On top of that Lumimaid is now mostly opus 3 sonnet, opus and a small part sonnet
 - IkariDev
 
 ## Training data used:
-
+We will list every dataset we used here; please be patient while we get them all back, kek.
 
 # Prompt template: Mistral
 
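The diff ends at the `# Prompt template: Mistral` heading, with the template body cut off in this view. As a rough sketch only (the helper name is hypothetical and the exact spacing and special tokens should be verified against the chat template in the model's tokenizer config), the Mistral instruct format wraps each user turn in `[INST] ... [/INST]` markers:

```python
# Hedged sketch of the Mistral instruct prompt format -- not authoritative;
# check the model's tokenizer chat template before relying on it.
def build_mistral_prompt(history, user_message):
    """history: list of (user, assistant) string pairs from earlier turns."""
    prompt = "<s>"  # BOS token opens the conversation
    for user, assistant in history:
        # Each completed turn: user message in [INST] tags, reply closed by </s>
        prompt += f"[INST] {user} [/INST] {assistant}</s>"
    # Final user message; the model generates its reply after [/INST]
    prompt += f"[INST] {user_message} [/INST]"
    return prompt

print(build_mistral_prompt([("Hi", "Hello!")], "Who are you?"))
```

In practice, `tokenizer.apply_chat_template` on the model's own tokenizer is the safer way to produce this string.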