---
license: llama3
base_model: catallama/CataLlama-v0.2-Instruct-SFT-DPO-Merged
tags:
- llama
- llama-3
- catalan
model-index:
- name: CataLlama-v0.2-Instruct-SFT-DPO-Merged-GGUF
  results: []
datasets:
- catallama/Catalan-DPO-V2
- catallama/Catalan-Instruct-V2
language:
- ca
- en
pipeline_tag: text-generation
---

![](https://huggingface.co/catallama/CataLlama-v0.2-Instruct-SFT/resolve/main/CataLlama-v0.2.png)

# CataLlama-v0.2-Instruct-SFT-DPO-Merged-GGUF

**CataLlama-v0.2-Instruct-SFT-DPO-Merged-GGUF** is a GGUF quantisation of [catallama/CataLlama-v0.2-Instruct-SFT-DPO-Merged](https://huggingface.co/catallama/CataLlama-v0.2-Instruct-SFT-DPO-Merged).
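
To try the quantised model locally, the sketch below loads it with [llama-cpp-python](https://github.com/abetlen/llama-cpp-python). This is a minimal example, not official usage instructions: the quant filename pattern (`*Q4_K_M.gguf`) is an assumption, so check the repository's file list for the names that actually ship.

```python
# Minimal sketch: loading a GGUF quantisation with llama-cpp-python.
# pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="catallama/CataLlama-v0.2-Instruct-SFT-DPO-Merged-GGUF",
    filename="*Q4_K_M.gguf",  # assumed quant level -- pick one the repo actually contains
    n_ctx=8192,               # Llama-3 supports an 8K context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hola! Com estàs?"}]
)
print(out["choices"][0]["message"]["content"])
```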

**This is an instruction fine-tuned model, optimised with DPO, and proficient at the following tasks in Catalan** (a usage sketch follows the list):

- *Information extraction (suitable for RAG)*
- *Named Entity Recognition (NER)*
- *Translation from English to Catalan and Catalan to English*
- *Summarization - both short form and long form*
- *Sentiment analysis*
- *Chat*
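
As an illustration of the task list above (the prompt wording is an example of mine, not from the original card), an English-to-Catalan translation request could look like this, reusing the `llm` handle from the loading sketch:

```python
# Example: English -> Catalan translation via the chat API.
messages = [
    {"role": "system", "content": "Ets un assistent que tradueix de l'anglès al català."},
    {"role": "user", "content": "Translate to Catalan: The weather in Barcelona is lovely today."},
]
out = llm.create_chat_completion(messages=messages, temperature=0.2)
print(out["choices"][0]["message"]["content"])
```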

**Model developer** [Laurentiu Petrea](https://www.linkedin.com/in/laurentiupetrea/), building on Llama-3 from Meta.

**Model Architecture** CataLlama is an auto-regressive language model that uses an optimised transformer architecture. The tuned versions use supervised fine-tuning (SFT) and direct preference optimisation (DPO) to align with human preferences for helpfulness and safety.
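
For context, DPO optimises the policy directly on preference pairs (chosen vs. rejected responses) against a frozen reference model, with no separate reward model. The snippet below is a sketch of the standard DPO objective from Rafailov et al. (2023), not the training code actually used for this model:

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO objective: each argument is a tensor of summed
    log-probabilities of responses under the policy or the frozen
    reference model; beta scales the implicit reward margin."""
    logits = beta * ((policy_chosen_logps - ref_chosen_logps)
                     - (policy_rejected_logps - ref_rejected_logps))
    return -F.logsigmoid(logits).mean()  # minimised when chosen >> rejected
```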

**License** The model uses the llama-3 license available at: [https://llama.meta.com/llama3/license](https://llama.meta.com/llama3/license)


## Benchmarks (for the bf16 model)

| Benchmark          | CataLlama-v0.2-Instruct-DPO | CataLlama-v0.2-Instruct-SFT     | CataLlama-v0.2-Instruct-SFT-DPO-Merged     |
| ------------------ | --------------------------- | ------------------------------- | ------------------------------------------ |
| MMLU (5-shot)      | 58.89                       | 59.35                           | **60.53**                                  |
| GSM8K CoT (8-shot) | 60.05                       | 76.04                           | **77.26**                                  |
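
The original card does not state how these scores were produced. A typical reproduction with EleutherAI's lm-evaluation-harness might look like the sketch below; the task names and harness version are assumptions:

```python
# pip install lm-eval  (EleutherAI lm-evaluation-harness, v0.4+ API)
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=catallama/CataLlama-v0.2-Instruct-SFT-DPO-Merged,dtype=bfloat16",
    tasks=["mmlu"],  # the card reports 5-shot MMLU; GSM8K CoT was run 8-shot
    num_fewshot=5,
)
print(results["results"])
```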


**Please see the [original model card](https://huggingface.co/catallama/CataLlama-v0.2-Instruct-SFT-DPO-Merged) for more details.**

## Intended Use

**Note:** This model is not intended to beat benchmarks, but to demonstrate techniques for augmenting LLMs on new languages and to help preserve rare languages as part of our world heritage.

**Intended Use Cases** Llama 3 is intended for commercial and research use in English. Instruction tuned models are intended for assistant-like chat, whereas pretrained models can be adapted for a variety of natural language generation tasks.

**Out-of-scope** Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the Acceptable Use Policy and Llama 3 Community License. Use in languages other than English (see the note below).

**Note:** Developers may fine-tune Llama 3 models for languages beyond English provided they comply with the Llama 3 Community License and the Acceptable Use Policy.