Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


Meltemi-7B-Instruct-v1.5 - GGUF
- Model creator: https://huggingface.co/ilsp/
- Original model: https://huggingface.co/ilsp/Meltemi-7B-Instruct-v1.5/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Meltemi-7B-Instruct-v1.5.Q2_K.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf/blob/main/Meltemi-7B-Instruct-v1.5.Q2_K.gguf) | Q2_K | 2.66GB |
| [Meltemi-7B-Instruct-v1.5.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf/blob/main/Meltemi-7B-Instruct-v1.5.IQ3_XS.gguf) | IQ3_XS | 2.95GB |
| [Meltemi-7B-Instruct-v1.5.IQ3_S.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf/blob/main/Meltemi-7B-Instruct-v1.5.IQ3_S.gguf) | IQ3_S | 3.1GB |
| [Meltemi-7B-Instruct-v1.5.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf/blob/main/Meltemi-7B-Instruct-v1.5.Q3_K_S.gguf) | Q3_K_S | 0.98GB |
| [Meltemi-7B-Instruct-v1.5.IQ3_M.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf/blob/main/Meltemi-7B-Instruct-v1.5.IQ3_M.gguf) | IQ3_M | 3.2GB |
| [Meltemi-7B-Instruct-v1.5.Q3_K.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf/blob/main/Meltemi-7B-Instruct-v1.5.Q3_K.gguf) | Q3_K | 3.42GB |
| [Meltemi-7B-Instruct-v1.5.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf/blob/main/Meltemi-7B-Instruct-v1.5.Q3_K_M.gguf) | Q3_K_M | 3.42GB |
| [Meltemi-7B-Instruct-v1.5.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf/blob/main/Meltemi-7B-Instruct-v1.5.Q3_K_L.gguf) | Q3_K_L | 3.7GB |
| [Meltemi-7B-Instruct-v1.5.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf/blob/main/Meltemi-7B-Instruct-v1.5.IQ4_XS.gguf) | IQ4_XS | 3.58GB |
| [Meltemi-7B-Instruct-v1.5.Q4_0.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf/blob/main/Meltemi-7B-Instruct-v1.5.Q4_0.gguf) | Q4_0 | 3.98GB |
| [Meltemi-7B-Instruct-v1.5.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf/blob/main/Meltemi-7B-Instruct-v1.5.IQ4_NL.gguf) | IQ4_NL | 0.25GB |
| [Meltemi-7B-Instruct-v1.5.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf/blob/main/Meltemi-7B-Instruct-v1.5.Q4_K_S.gguf) | Q4_K_S | 0.0GB |
| [Meltemi-7B-Instruct-v1.5.Q4_K.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf/blob/main/Meltemi-7B-Instruct-v1.5.Q4_K.gguf) | Q4_K | 0.0GB |
| [Meltemi-7B-Instruct-v1.5.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf/blob/main/Meltemi-7B-Instruct-v1.5.Q4_K_M.gguf) | Q4_K_M | 4.22GB |
| [Meltemi-7B-Instruct-v1.5.Q4_1.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf/blob/main/Meltemi-7B-Instruct-v1.5.Q4_1.gguf) | Q4_1 | 4.4GB |
| [Meltemi-7B-Instruct-v1.5.Q5_0.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf/blob/main/Meltemi-7B-Instruct-v1.5.Q5_0.gguf) | Q5_0 | 4.82GB |
| [Meltemi-7B-Instruct-v1.5.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf/blob/main/Meltemi-7B-Instruct-v1.5.Q5_K_S.gguf) | Q5_K_S | 0.87GB |
| [Meltemi-7B-Instruct-v1.5.Q5_K.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf/blob/main/Meltemi-7B-Instruct-v1.5.Q5_K.gguf) | Q5_K | 0.68GB |
| [Meltemi-7B-Instruct-v1.5.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf/blob/main/Meltemi-7B-Instruct-v1.5.Q5_K_M.gguf) | Q5_K_M | 4.95GB |
| [Meltemi-7B-Instruct-v1.5.Q5_1.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf/blob/main/Meltemi-7B-Instruct-v1.5.Q5_1.gguf) | Q5_1 | 5.25GB |
| [Meltemi-7B-Instruct-v1.5.Q6_K.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf/blob/main/Meltemi-7B-Instruct-v1.5.Q6_K.gguf) | Q6_K | 2.12GB |
| [Meltemi-7B-Instruct-v1.5.Q8_0.gguf](https://huggingface.co/RichardErkhov/ilsp_-_Meltemi-7B-Instruct-v1.5-gguf/blob/main/Meltemi-7B-Instruct-v1.5.Q8_0.gguf) | Q8_0 | 0.54GB |
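
These GGUF files can be run with any llama.cpp-based runtime. Below is a minimal sketch using the `llama-cpp-python` bindings; the local file path, system prompt, and sampling parameters are illustrative assumptions, not part of this upload.

```python
# Minimal sketch (assumption: `pip install llama-cpp-python` and the Q4_K_M file
# from the table above downloaded to the current directory).
from llama_cpp import Llama

llm = Llama(
    model_path="Meltemi-7B-Instruct-v1.5.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=8192,       # the model supports an 8192-token context
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)

# create_chat_completion() uses the chat template stored in the GGUF metadata
# (in recent llama-cpp-python versions).
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Meltemi, a helpful assistant for the Greek language."},
        {"role": "user", "content": "Γράψε μία σύντομη πρόταση στα ελληνικά."},  # "Write a short sentence in Greek."
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```

Any other quant from the table can be substituted by changing `model_path`.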

Original model description:
---
language:
- el
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- finetuned
inference: true
---

# Meltemi Instruct Large Language Model for the Greek language

We present the Meltemi 7B Instruct v1.5 Large Language Model (LLM), a new and improved instruction-fine-tuned version of [Meltemi 7B v1.5](https://huggingface.co/ilsp/Meltemi-7B-v1.5).

![image/png](https://miro.medium.com/v2/resize:fit:720/format:webp/1*IaE7RJk6JffW8og-MOnYCA.png)

# Model Information

- Vocabulary extension of the Mistral 7B tokenizer with Greek tokens for lower costs and faster inference (**1.52** vs. 6.80 tokens/word for Greek; see the sketch after this list for a way to reproduce this kind of measurement)
- 8192 context length
- Fine-tuning has been done with the [Odds Ratio Preference Optimization (ORPO)](https://arxiv.org/abs/2403.07691) algorithm using 97k preference samples:
  * 89,730 Greek preference samples, mostly translated versions of high-quality datasets available on Hugging Face
  * 7,342 English preference samples
- Our alignment procedure is based on the [TRL - Transformer Reinforcement Learning](https://huggingface.co/docs/trl/index) library and partially on the [Hugging Face finetuning recipes](https://github.com/huggingface/alignment-handbook)
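
As a rough way to see the effect of the vocabulary extension, the sketch below compares how many tokens the Meltemi tokenizer and a base Mistral tokenizer need for the same Greek sentence. The sample sentence and the assumption that the base tokenizer is the one from `mistralai/Mistral-7B-v0.1` are illustrative; the 1.52 vs. 6.80 figures above come from the authors' own measurements on a larger corpus.

```python
from transformers import AutoTokenizer

# Illustrative Greek sentence (assumption, not from the original evaluation corpus):
# "Meltemi is a large language model for the Greek language."
text = "Το Μελτέμι είναι ένα μεγάλο γλωσσικό μοντέλο για την ελληνική γλώσσα."

meltemi_tok = AutoTokenizer.from_pretrained("ilsp/Meltemi-7B-Instruct-v1.5")
mistral_tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

n_words = len(text.split())
for name, tok in [("Meltemi", meltemi_tok), ("Mistral", mistral_tok)]:
    n_tokens = len(tok(text, add_special_tokens=False)["input_ids"])
    print(f"{name}: {n_tokens} tokens, {n_tokens / n_words:.2f} tokens/word")
```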

# Instruction format

The prompt format is the same as the [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta) format and can be utilized through the tokenizer's [chat template](https://huggingface.co/docs/transformers/main/chat_templating) functionality as follows:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("ilsp/Meltemi-7B-Instruct-v1.5")
tokenizer = AutoTokenizer.from_pretrained("ilsp/Meltemi-7B-Instruct-v1.5")

model.to(device)

# System prompt (in Greek): "You are Meltemi, a language model for the Greek language. You are
# especially helpful towards the user and give short but sufficiently comprehensive answers.
# Answer with care, politeness, impartiality, honesty, and respect towards the user."
# First user turn: "Tell me whether you have consciousness."
messages = [
    {"role": "system", "content": "Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα. Είσαι ιδιαίτερα βοηθητικό προς την χρήστρια ή τον χρήστη και δίνεις σύντομες αλλά επαρκώς περιεκτικές απαντήσεις. Απάντα με προσοχή, ευγένεια, αμεροληψία, ειλικρίνεια και σεβασμό προς την χρήστρια ή τον χρήστη."},
    {"role": "user", "content": "Πες μου αν έχεις συνείδηση."},
]

# Through the default chat template this translates to
#
# <|system|>
# Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα. Είσαι ιδιαίτερα βοηθητικό προς την χρήστρια ή τον χρήστη και δίνεις σύντομες αλλά επαρκώς περιεκτικές απαντήσεις. Απάντα με προσοχή, ευγένεια, αμεροληψία, ειλικρίνεια και σεβασμό προς την χρήστρια ή τον χρήστη.</s>
# <|user|>
# Πες μου αν έχεις συνείδηση.</s>
# <|assistant|>
#

prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
input_prompt = tokenizer(prompt, return_tensors='pt').to(device)
outputs = model.generate(input_prompt['input_ids'], max_new_tokens=256, do_sample=True)

print(tokenizer.batch_decode(outputs)[0])
# Sample response (in Greek): "As an AI language model, I do not have the ability to perceive
# or experience feelings such as consciousness or awareness. However, I can help you with any
# questions you may have about artificial intelligence and its applications."
# Ως μοντέλο γλώσσας AI, δεν έχω τη δυνατότητα να αντιληφθώ ή να βιώσω συναισθήματα όπως η συνείδηση ή η επίγνωση. Ωστόσο, μπορώ να σας βοηθήσω με οποιεσδήποτε ερωτήσεις μπορεί να έχετε σχετικά με την τεχνητή νοημοσύνη και τις εφαρμογές της.

# Append the model's reply and a follow-up user turn (in Greek):
# "Do you believe that people should fear artificial intelligence?"
messages.extend([
    {"role": "assistant", "content": tokenizer.batch_decode(outputs)[0]},
    {"role": "user", "content": "Πιστεύεις πως οι άνθρωποι πρέπει να φοβούνται την τεχνητή νοημοσύνη;"}
])

# Through the default chat template this translates to
#
# <|system|>
# Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα. Είσαι ιδιαίτερα βοηθητικό προς την χρήστρια ή τον χρήστη και δίνεις σύντομες αλλά επαρκώς περιεκτικές απαντήσεις. Απάντα με προσοχή, ευγένεια, αμεροληψία, ειλικρίνεια και σεβασμό προς την χρήστρια ή τον χρήστη.</s>
# <|user|>
# Πες μου αν έχεις συνείδηση.</s>
# <|assistant|>
# Ως μοντέλο γλώσσας AI, δεν έχω τη δυνατότητα να αντιληφθώ ή να βιώσω συναισθήματα όπως η συνείδηση ή η επίγνωση. Ωστόσο, μπορώ να σας βοηθήσω με οποιεσδήποτε ερωτήσεις μπορεί να έχετε σχετικά με την τεχνητή νοημοσύνη και τις εφαρμογές της.</s>
# <|user|>
# Πιστεύεις πως οι άνθρωποι πρέπει να φοβούνται την τεχνητή νοημοσύνη;</s>
# <|assistant|>
#

prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
input_prompt = tokenizer(prompt, return_tensors='pt').to(device)
outputs = model.generate(input_prompt['input_ids'], max_new_tokens=256, do_sample=True)

print(tokenizer.batch_decode(outputs)[0])
```

Please make sure that the BOS token is always included in the tokenized prompts. This might not be the default setting in all evaluation or fine-tuning frameworks.
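
A quick way to confirm this in a given environment is to inspect the first token id of an encoded prompt. The snippet below is a minimal sketch of such a check; the sample text is an illustrative assumption.

```python
# Minimal sketch: verify that the tokenizer prepends the BOS token to encoded prompts.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("ilsp/Meltemi-7B-Instruct-v1.5")
ids = tokenizer("Καλημέρα!", return_tensors="pt")["input_ids"][0]  # "Good morning!"

# If this fails, enable BOS insertion (e.g. tokenizer(..., add_special_tokens=True)
# or the equivalent option of the framework you are using).
assert ids[0].item() == tokenizer.bos_token_id, "BOS token is missing from the prompt"
print("BOS token id:", tokenizer.bos_token_id, "- first token id:", ids[0].item())
```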

# Evaluation

The evaluation suite we created includes 6 test sets and has been implemented based on a [fork](https://github.com/LeonVouk/lighteval) of the [lighteval](https://github.com/huggingface/lighteval) framework.

Our evaluation suite includes the following test sets (each dataset can be loaded as sketched after this list):
* Four machine-translated versions ([ARC Greek](https://huggingface.co/datasets/ilsp/arc_greek), [Truthful QA Greek](https://huggingface.co/datasets/ilsp/truthful_qa_greek), [HellaSwag Greek](https://huggingface.co/datasets/ilsp/hellaswag_greek), [MMLU Greek](https://huggingface.co/datasets/ilsp/mmlu_greek)) of established English benchmarks for language understanding and reasoning ([ARC Challenge](https://arxiv.org/abs/1803.05457), [Truthful QA](https://arxiv.org/abs/2109.07958), [Hellaswag](https://arxiv.org/abs/1905.07830), [MMLU](https://arxiv.org/abs/2009.03300)).
* An existing benchmark for question answering in Greek ([Belebele](https://arxiv.org/abs/2308.16884)).
* A novel benchmark created by the ILSP team for medical question answering, based on the medical exams of [DOATAP](https://www.doatap.gr) ([Medical MCQA](https://huggingface.co/datasets/ilsp/medical_mcqa_greek)).
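
All of these datasets are hosted on the Hugging Face Hub, so they can be inspected directly with the `datasets` library. The sketch below is a generic loading pattern; since configuration and split names vary per dataset, it queries them instead of assuming specific values.

```python
# Minimal sketch: inspect one of the Greek evaluation datasets from the Hub.
# ARC Greek is used as an example; any of the dataset ids listed above works the same way.
from datasets import get_dataset_config_names, load_dataset

dataset_id = "ilsp/arc_greek"
configs = get_dataset_config_names(dataset_id)   # query configurations instead of guessing
print("configurations:", configs)

ds = load_dataset(dataset_id, configs[0])
print(ds)                                        # splits and number of examples
print(ds[list(ds.keys())[0]][0])                 # first example of the first split
```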

Our evaluation is performed in a few-shot setting, consistent with the settings in the [Open LLM leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

Our new training and fine-tuning procedure for Meltemi 7B Instruct v1.5 improves performance on the Greek test sets by **+7.8 percentage points** on average compared to the earlier Meltemi 7B Instruct v1 model; the per-task averages behind this figure can be recomputed as sketched after the table. The results for the Greek test sets are shown in the following table:

| | Medical MCQA EL (15-shot) | Belebele EL (5-shot) | HellaSwag EL (10-shot) | ARC-Challenge EL (25-shot) | TruthfulQA MC2 EL (0-shot) | MMLU EL (5-shot) | **Average** |
|----------------|----------------|-------------|--------------|------------------|-------------------|---------|---------|
| Mistral 7B | 29.8% | 45.0% | 36.5% | 27.1% | 45.8% | 35.0% | **36.5%** |
| Meltemi 7B Instruct v1 | 36.1% | 56.0% | 59.0% | 44.4% | 51.1% | 34.1% | **46.8%** |
| Meltemi 7B Instruct v1.5 | 48.0% | 75.5% | 63.7% | 40.8% | 53.8% | 45.9% | **54.6%** |
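
The averages in the last column, and the +7.8-point gap quoted above, can be reproduced directly from the per-task scores; a small sanity-check sketch:

```python
# Sanity check: recompute the per-model averages and the v1 -> v1.5 gap from the table above.
scores = {
    "Mistral 7B":               [29.8, 45.0, 36.5, 27.1, 45.8, 35.0],
    "Meltemi 7B Instruct v1":   [36.1, 56.0, 59.0, 44.4, 51.1, 34.1],
    "Meltemi 7B Instruct v1.5": [48.0, 75.5, 63.7, 40.8, 53.8, 45.9],
}

averages = {name: sum(vals) / len(vals) for name, vals in scores.items()}
for name, avg in averages.items():
    print(f"{name}: {avg:.1f}%")   # 36.5%, 46.8%, 54.6%

gap = averages["Meltemi 7B Instruct v1.5"] - averages["Meltemi 7B Instruct v1"]
print(f"improvement: +{gap:.1f} percentage points")  # +7.8
```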


# Ethical Considerations

This model has been aligned with human preferences, but it might still generate misleading, harmful, or toxic content.


# Acknowledgements

The ILSP team utilized Amazon’s cloud computing services, which were made available via GRNET under the [OCRE Cloud framework](https://www.ocre-project.eu/), providing Amazon Web Services for the Greek Academic and Research Community.


# Citation

```
@misc{voukoutis2024meltemiopenlargelanguage,
      title={Meltemi: The first open Large Language Model for Greek},
      author={Leon Voukoutis and Dimitris Roussis and Georgios Paraskevopoulos and Sokratis Sofianopoulos and Prokopis Prokopidis and Vassilis Papavasileiou and Athanasios Katsamanis and Stelios Piperidis and Vassilis Katsouros},
      year={2024},
      eprint={2407.20743},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2407.20743},
}
```