update readme
Files changed:
- README.md (added, +87 lines)
- em_model_logo_web.jpeg (updated)

README.md:
---
license: llama2
language:
- de
library_name: transformers
pipeline_tag: text-generation
inference: false
model_creator: jphme
model_name: EM German
model_type: llama
prompt_template: >
  Du bist ein hilfreicher KI Assistent, der den Anweisungen des Nutzers sehr gut folgt und ausführliche Antworten gibt! USER: Was ist 1+1? ASSISTANT:
tags:
- facebook
- meta
- pytorch
- llama
- llama-2
- german
- deutsch
---

![EM Logo](em_model_logo_web.jpeg)

**EM German (v01)** is a Llama2/Mistral/LeoLM-based model family, finetuned on a large dataset of diverse instructions in German. The models are optimized for German text and are proficient at understanding, generating, and interacting with German-language content.

Please find all information, example outputs, the RAG prompt format, and evaluation results for the EM German model family in [our Github Repository](https://github.com/jphme/EM_German).

(For further information and instructions in German, please see [our Github Repository](https://github.com/jphme/EM_German/blob/main/README_DE.md).)

# Links & Demos

## Model Links

| Base Model | HF | GPTQ | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| Llama2 7b | [Link](https://huggingface.co/jphme/em_german_7b_v01) | [Link](https://huggingface.co/TheBloke/em_german_7b_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_7b_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_7b_v01-AWQ) |
| Llama2 13b | [Link](https://huggingface.co/jphme/em_german_13b_v01) | [Link](https://huggingface.co/TheBloke/em_german_13b_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_13b_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_13b_v01-AWQ) |
| Llama2 70b | [Link](https://huggingface.co/jphme/em_german_70b_v01) | [Link](https://huggingface.co/TheBloke/em_german_70b_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_70b_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_70b_v01-AWQ) |
| [Mistral 7b](https://huggingface.co/mistralai/Mistral-7B-v0.1) | [Link](https://huggingface.co/jphme/em_german_mistral_v01) | [Link](https://huggingface.co/TheBloke/em_german_mistral_v01-GPTQ) | [Link](https://huggingface.co/TheBloke/em_german_mistral_v01-GGUF) | [Link](https://huggingface.co/TheBloke/em_german_mistral_v01-AWQ) |
| [LeoLM 7b](https://huggingface.co/LeoLM/leo-hessianai-7b) | [Link](https://huggingface.co/jphme/em_german_7b_leo) | [Link](https://huggingface.co/jphme/em_german_7b_leo_gptq) | [Link](https://huggingface.co/jphme/em_german_7b_leo_gguf) | tbc |
| LeoLM 13b | soon | soon | soon | tbc |

### Notes about the different versions:
For the 7b models, we recommend the "LeoLM" variant if text output quality is most important, and the Mistral variant if reasoning/understanding is the main priority. Both should give better results than the Llama2 7b model and often even than the Llama2 13b model.

If you get unsatisfactory results with one EM German version, please try a different (and/or larger) model or version for your use case.

## Demos:

You can use some of the models with **free** Google Colab instances (e.g. the 7b model in 8-bit or the 13b model with GPTQ); a minimal loading sketch follows the list below:

* [Example Colab Notebook for 13b with GPTQ](https://colab.research.google.com/drive/1IJfJdVwGkfe5MYOqHptystR3FBeEUdGn?usp=sharing)
* [Example Colab Notebook for 7b with 8bit-Loading](https://colab.research.google.com/drive/1bsv6vkLM4AlCpSyXA6ol9P32zxZmf7Zu?usp=sharing)
* For further information and GUI use, please visit [our Github Repository](https://github.com/jphme/EM_German).
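
For orientation outside of Colab, here is a minimal, unofficial sketch of the 8-bit loading step with `transformers` and `bitsandbytes` (the model ID comes from the table above; the notebooks may set things up differently):

```python
# Unofficial sketch: load the 7b model in 8-bit, roughly what the free-Colab demo requires.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "jphme/em_german_7b_v01"  # see the model table above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # requires bitsandbytes
    device_map="auto",                                           # requires accelerate
)
```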

# Prompt Format

This model follows the Vicuna format without linebreaks (but should also work with linebreaks). The format is as follows:

```
Du bist ein hilfreicher Assistent. USER: <instruction> ASSISTANT:
```

You can swap the standard system prompt for one better suited to your task (see [our Github Repository](https://github.com/jphme/EM_German) for the RAG prompt format).
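
As an illustration, here is a minimal, unofficial generation sketch that assembles such a prompt (it reuses `model` and `tokenizer` from the loading sketch above; the sampling parameters are placeholder values, not official recommendations):

```python
# Unofficial sketch: build a Vicuna-style prompt as described above and generate a reply.
system_prompt = "Du bist ein hilfreicher Assistent."  # swap for a task-specific system prompt
instruction = "Was ist 1+1?"                          # example instruction from the prompt template
prompt = f"{system_prompt} USER: {instruction} ASSISTANT:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```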

# Acknowledgements:

Many thanks to [winglian/caseus](https://huggingface.co/winglian) for his great work on Axolotl, which I used to train the EM models. I am also grateful to [Jon Durbin](https://huggingface.co/jondurbin) and his [Airoboros](https://huggingface.co/jondurbin/airoboros-l2-70b-2.2.1) models and code, from which I borrowed many ideas and code snippets.
Additionally, many thanks to [Björn Plüster](https://huggingface.co/bjoernp) and the LeoLM team for the outstanding pretraining work on LeoLM, and last but not least many thanks to [TheBloke](https://huggingface.co/TheBloke) for preparing quantized versions in all formats under the sun.
The 70b model was trained with support of the [OVH Cloud Startup Program](https://startup.ovhcloud.com/en/).

# Contact

If you are interested in customized LLMs for business applications, please get in contact with me via [my website](https://www.jph.me). I am also always happy about suggestions and feedback.

*PS: We are also always interested in support for our startup ellamind, which will offer customized models for business applications in the future (currently still in stealth mode). Please get in touch if you are interested!*

# Disclaimer:

The license on this model does not constitute legal advice. I am not responsible for the actions of third parties who use this model.
This model should only be used for research purposes. The original Llama2 license applies and is distributed with the model files.