---
license: llama3
base_model:
- catallama/CataLlama-v0.2-Instruct-SFT
- catallama/CataLlama-v0.2-Instruct-DPO
tags:
- llama
- llama-3
- catalan
model-index:
- name: CataLlama-v0.2-Instruct-SFT-DPO-Merged
  results: []
datasets:
- catallama/Catalan-DPO-V2
- catallama/Catalan-Instruct-V2
language:
- ca
- en
pipeline_tag: text-generation
---

![](https://huggingface.co/catallama/CataLlama-v0.2-Instruct-SFT/resolve/main/CataLlama-v0.2.png)

# CataLlama-v0.2-Instruct-SFT-DPO-Merged-GGUF

**CataLlama-v0.2-Instruct-SFT-DPO-Merged** is a merge of [catallama/CataLlama-v0.2-Instruct-SFT](https://huggingface.co/catallama/CataLlama-v0.2-Instruct-SFT) and [catallama/CataLlama-v0.2-Instruct-DPO](https://huggingface.co/catallama/CataLlama-v0.2-Instruct-DPO).

The resulting model scores better than both of its parents on MMLU and GSM8K.

**This is an instruction fine-tuned model, optimised with DPO, proficient in the following tasks in Catalan** (a minimal inference sketch follows the list):

- *Information extraction (suitable for RAG)*
- *Named Entity Recognition (NER)*
- *Translation from English to Catalan and Catalan to English*
- *Summarization - both short form and long form*
- *Sentiment analysis*
- *Chat*
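
The snippet below is a minimal inference sketch, not part of the original card. It assumes the merged model is published under the Hugging Face id `catallama/CataLlama-v0.2-Instruct-SFT-DPO-Merged` and that the tokenizer ships with the standard Llama-3 chat template.

```python
# Hedged usage sketch: chat inference with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "catallama/CataLlama-v0.2-Instruct-SFT-DPO-Merged"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# "Who was the first president of the Generalitat de Catalunya?"
messages = [
    {"role": "user", "content": "Qui va ser el primer president de la Generalitat de Catalunya?"},
]

# apply_chat_template wraps the turns in the Llama-3 special tokens.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(
    input_ids, max_new_tokens=256, do_sample=True, temperature=0.6, top_p=0.9
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```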

**Model developers** [Laurentiu Petrea](https://www.linkedin.com/in/laurentiupetrea/), based on Llama-3 from Meta.

**Model Architecture** CataLlama is an auto-regressive language model that uses an optimised transformer architecture. The tuned versions use supervised fine-tuning (SFT) and direct preference optimisation (DPO) to align with human preferences for helpfulness and safety.

**License** The model uses the Llama-3 license available at [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license).

## Benchmarks

| Model                                  | MMLU 5 shot | GSM8K CoT 8 shot |
| -------------------------------------- | ----------- | ---------------- |
| CataLlama-v0.2-Instruct-DPO            | 58.89       | 60.05            |
| CataLlama-v0.2-Instruct-SFT            | 59.35       | 76.04            |
| CataLlama-v0.2-Instruct-SFT-DPO-Merged | **60.53**   | **77.26**        |

## Merging procedure

The merge was performed across the 32 transformer layers of the two models, excluding the embedding, norm and head layers.

The weights of the 32 layers were merged in equal proportion, simply by averaging the corresponding weights of the two parent models.

The embedding, norm and head layers were copied from CataLlama-v0.2-Instruct-DPO without modification.

**This was done with a custom script, without mergekit.** A sketch of the procedure is shown below.
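
The author's custom script was not published, so the following is only a hedged reconstruction of the procedure described above; the details (repo ids, dtype, output path) are assumptions based on the card's metadata.

```python
# Hypothetical merge sketch: average the 32 decoder layers of the two parents,
# keep the embedding, final norm and lm_head from the DPO model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

sft = AutoModelForCausalLM.from_pretrained(
    "catallama/CataLlama-v0.2-Instruct-SFT", torch_dtype=torch.bfloat16
)
dpo = AutoModelForCausalLM.from_pretrained(
    "catallama/CataLlama-v0.2-Instruct-DPO", torch_dtype=torch.bfloat16
)

sft_state = sft.state_dict()
merged_state = dpo.state_dict()  # embedding, norm and lm_head stay as in DPO

for name, tensor in merged_state.items():
    # Decoder layers live under "model.layers.<i>." in Llama checkpoints;
    # embed_tokens, the final norm and lm_head are left untouched.
    if name.startswith("model.layers."):
        avg = (tensor.float() + sft_state[name].float()) * 0.5
        merged_state[name] = avg.to(tensor.dtype)

dpo.load_state_dict(merged_state)
dpo.save_pretrained("CataLlama-v0.2-Instruct-SFT-DPO-Merged")

# The tokenizer is identical in both parents, so either copy works.
AutoTokenizer.from_pretrained(
    "catallama/CataLlama-v0.2-Instruct-DPO"
).save_pretrained("CataLlama-v0.2-Instruct-SFT-DPO-Merged")
```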

## Intended Use

**Note:** This model is not intended to beat benchmarks, but to demonstrate techniques for augmenting LLMs with new languages and to help preserve rare languages as part of our world heritage.

**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English**.

**Note: Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.