Model Card for Latxa 13b
We introduce Latxa, a family of large language models for Basque ranging from 7 to 70 billion parameters. Latxa is based on Llama 2, which we continue pretraining on a new Basque corpus comprising 4.3M documents and 4.2B tokens. In our extensive evaluation, Latxa outperforms all previous open models we compare to by a large margin. In addition, it is competitive with GPT-4 Turbo in language proficiency and understanding, despite lagging behind in reading comprehension and knowledge-intensive tasks. Both the Latxa family of models, as well as our new pretraining corpora and evaluation datasets, are publicly available under open licenses. Our suite enables reproducible research on methods to build LLMs for low-resource languages.
- 📒 Blog Post: Latxa: An Open Language Model and Evaluation Suite for Basque
- 📖 Paper: Latxa: An Open Language Model and Evaluation Suite for Basque
- 💻 Code: hitz-zentroa/latxa
Model Details
Model Description
Latxa is a family of Large Language Models (LLM) based on Meta's LLaMA models. Current LLMs exhibit incredible performance for high-resource languages such as English, but, in the case of Basque and other low-resource languages, their performance is close to random. These limitations widen the gap between high- and low-resource languages when it comes to digital development. We present Latxa to overcome these limitations and promote the development of LLM-based technology and research for the Basque language. Latxa models follow the same architecture as their original counterparts and were further trained on Latxa Corpus v1.1, a high-quality Basque corpus.
The models are released in three sizes: 7B, 13B and 70B.
- Developed by: HiTZ Research Center & IXA Research group (University of the Basque Country UPV/EHU)
- Model type: Language model
- Language(s) (NLP): en, eu
- License: llama2
- Parent Model: meta-llama/Llama-2-13b
- Contact: [email protected]
Getting started
Use the code below to get started with the model.
```python
from transformers import pipeline

pipe = pipeline("text-generation", model="HiTZ/latxa-13b-v1.2")

text = "Euskara adimen artifizialera iritsi da!"

pipe(text, max_new_tokens=50, num_beams=5)
>> [
    {
        'generated_text': 'Euskara adimen artifizialera iritsi da!\nEuskararen eta adimen artifizialaren arteko harremana aspaldikoa da,'
                          ' baina azken urteotan aurrerapauso handiak eman dira arlo horretan'
    }
]
```
Uses
Latxa models are intended to be used with Basque data; for any other language, performance is not guaranteed. Like the original models, Latxa inherits the Llama 2 License, which allows for both commercial and research use.
Direct Use
Latxa family models are pre-trained LLMs without any task-specific or instruction fine-tuning. That is, the model can either be prompted to perform a specific task or further fine-tuned for specific use cases.
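Since Latxa is a base model, tasks are typically specified via in-context (few-shot) prompting rather than instructions. The snippet below is a minimal sketch of assembling such a prompt; the Basque examples and label names are illustrative placeholders, not taken from any official benchmark.

```python
# Minimal sketch: building a few-shot classification prompt for a base LLM.
# The example texts and labels below are illustrative placeholders.
few_shot_examples = [
    ("Film hau zoragarria da!", "positiboa"),
    ("Zerbitzua oso txarra izan zen.", "negatiboa"),
]

def build_prompt(examples, query):
    """Concatenate labeled examples, then leave the query's label to the model."""
    lines = [f"Testua: {text}\nSentimendua: {label}" for text, label in examples]
    lines.append(f"Testua: {query}\nSentimendua:")
    return "\n\n".join(lines)

prompt = build_prompt(few_shot_examples, "Oso pozik nago emaitzarekin.")
print(prompt)
```

The resulting string would then be passed to a text-generation pipeline, letting the model complete the label token.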
Out-of-Scope Use
The model was not fine-tuned to follow instructions or to work as a chat assistant; therefore, this kind of usage is neither tested nor recommended.
Bias, Risks, and Limitations
To mitigate potentially disturbing or harmful content, Latxa has been trained on carefully selected and processed data, sourced mainly from local media, national/regional newspapers, encyclopedias and blogs (see Latxa Corpus below). Still, the model is based on the LLaMA models and may carry the same biases, risks and limitations.
Please see LLaMA's Ethical Considerations and Limitations for further information.
Training Details
Training Data
Our training corpus combines various existing datasets, as well as some new ones that we release with this work. We prioritized quality over quantity when constructing the corpus, selecting high-quality data sources and applying a thorough deduplication and filtering process. In total, a 4.17B-token corpus was used to train the model.
See more details in the Latxa Corpus dataset card.
Additionally, 500K documents of English data randomly selected from the Pile dataset were also included to avoid catastrophic forgetting.
Training Procedure
The training of Latxa was conducted using the GPT-NeoX library. As infrastructure, we leveraged the CINECA HPC Leonardo computing cluster located in Italy, which comprises 3,456 nodes, each containing 4x custom A100 64GB GPUs. The models were trained for 10k steps with a sequence length of 4,096 tokens and an effective batch size of 2M tokens, resulting in a total of 20B tokens (around 4 epochs). We used a cosine learning rate schedule with a warm-up of 500 steps, decaying down to 3% of the peak learning rate. We set the peak learning rate to 1e-4. All other hyperparameters follow (Touvron et al., 2023).
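The schedule above (500-step linear warm-up, cosine decay to 3% of a 1e-4 peak over 10k steps) can be sketched as follows; this is an illustrative reimplementation for clarity, not the exact GPT-NeoX code.

```python
import math

PEAK_LR = 1e-4        # peak learning rate from the model card
WARMUP_STEPS = 500    # linear warm-up
TOTAL_STEPS = 10_000  # total training steps
MIN_RATIO = 0.03      # decay down to 3% of the peak

def learning_rate(step: int) -> float:
    """Linear warm-up followed by cosine decay to MIN_RATIO * PEAK_LR."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / (TOTAL_STEPS - WARMUP_STEPS)
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
    return PEAK_LR * (MIN_RATIO + (1.0 - MIN_RATIO) * cosine)

# Sanity check on the batch arithmetic: 512 sequences of 4,096 tokens
# give 2,097,152 tokens per step (~2M), i.e. ~20B tokens over 10k steps.
tokens_per_step = 512 * 4096
```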
Evaluation
We evaluated the models in zero-shot and few-shot settings on generative, multiple-choice and classification tasks. We used the Basque partitions of each dataset.
Testing Data, Factors & Metrics
Testing Data
- Belebele (Bandarkar et al.): Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. We evaluated the model in a 5-shot fashion.
- X-StoryCloze (Lin et al.): XStoryCloze consists of professional translations of the English StoryCloze dataset into 10 non-English languages. StoryCloze is a commonsense reasoning dataset which consists of choosing the correct ending to a four-sentence story. We evaluated the model in a 0-shot fashion.
- BasqueGLUE (Urbizu et al.): BasqueGLUE is an NLU benchmark for Basque. We evaluated the model in a 5-shot fashion on the following tasks:
- Data card: https://huggingface.co/datasets/orai-nlp/basqueGLUE.
- Tasks:
- BEC2016eu: Sentiment analysis on tweets about the 2016 Basque elections campaign.
- VaxxStance: Stance detection on tweets around the anti-vaccine movement.
- BTHCv2: Topic classification of news extracts with 12 categories.
- EpecKorrefBin: Coreference detection task similar to WSC.
- QNLIeu: Q&A NLI built from the Basque Wikipedia.
- WiCeu: Basque Word-in-Context task.
- EusProficiency (Etxaniz et al., 2024): EusProficiency comprises 5,169 exercises on different topics from past EGA exams, the official C1-level certificate of proficiency in Basque.
- EusReading (Etxaniz et al., 2024): EusReading consists of 352 reading comprehension exercises (irakurmena) sourced from the same set of past EGA exams.
- EusTrivia (Etxaniz et al., 2024): EusTrivia consists of 1,715 trivia questions from multiple online sources. 56.3% of the questions are elementary level (grades 3-6), while the rest are considered challenging.
- EusExams (Etxaniz et al., 2024): EusExams is a collection of tests designed to prepare individuals for Public Service examinations conducted by several Basque institutions, including the public health system Osakidetza, the Basque Government, the City Councils of Bilbao and Gasteiz, and the University of the Basque Country (UPV/EHU).
- Data card: https://huggingface.co/datasets/HiTZ/EusExams
Metrics
For most tasks we used accuracy, as they are framed as multiple-choice questions. For the rest, particularly the tasks from the BasqueGLUE benchmark, we used the following:
- Micro F1: BEC2016-eu and BHTCv2
- Macro F1: VaxxStance (favor & against)
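For reference, the sketch below illustrates how micro and macro F1 differ on a multi-class task (for single-label classification, micro F1 reduces to accuracy). The toy labels are illustrative, not from the benchmark.

```python
def f1_per_label(gold, pred, label):
    """F1 for one label, from its true/false positives and false negatives."""
    tp = sum(g == label and p == label for g, p in zip(gold, pred))
    fp = sum(g != label and p == label for g, p in zip(gold, pred))
    fn = sum(g == label and p != label for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def micro_macro_f1(gold, pred, labels):
    # Micro F1 pools all decisions; for single-label tasks it equals accuracy.
    micro = sum(g == p for g, p in zip(gold, pred)) / len(gold)
    # Macro F1 averages per-label F1, weighting every label equally.
    macro = sum(f1_per_label(gold, pred, l) for l in labels) / len(labels)
    return micro, macro

gold = ["pos", "pos", "neg", "neu"]
pred = ["pos", "neg", "neg", "neu"]
micro, macro = micro_macro_f1(gold, pred, ["pos", "neg", "neu"])
```

Note that for VaxxStance the macro F1 is computed over the favor and against classes only, as stated above.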
Results
The model was evaluated using the LM Evaluation Harness library from EleutherAI. To reproduce our results, please follow the instructions in Latxa's GitHub repository.
| Model | Size | XStory | Belebele | BasGLUE | EusProf | EusRead | EusTrivia | EusExams | Avg |
|---|---|---|---|---|---|---|---|---|---|
| Random | n/a | 50.00 | 25.00 | 37.50 | 25.00 | 25.83 | 26.55 | 25.00 | 30.70 |
| GPT 3.5 Turbo | n/a | -- | 57.33 | 48.62 | 31.24 | 36.65 | 46.71 | 42.42 | -- |
| GPT 4 Turbo | n/a | -- | 90.67 | 62.90 | 56.70 | 75.85 | 73.12 | 70.22 | -- |
| XGLM | 7B | 57.71 | 23.88 | 41.47 | 22.96 | 24.43 | 26.53 | 24.59 | 32.51 |
| BLOOM | 7B | 57.18 | 27.00 | 40.17 | 25.34 | 28.41 | 27.17 | 25.07 | 33.86 |
| Mistral | 7B | 51.09 | 38.89 | 39.22 | 25.01 | 29.26 | 34.58 | 32.15 | 35.94 |
| Llama 2 | 7B | 50.43 | 26.22 | 38.20 | 24.09 | 27.27 | 29.50 | 28.84 | 32.51 |
| Latxa v1.1 | 7B | 65.45 | 37.33 | 52.56 | 30.26 | 25.00 | 42.16 | 33.82 | 40.94 |
| mGPT | 13B | 55.39 | 25.00 | 37.56 | 25.00 | 24.15 | 27.17 | 25.73 | 32.14 |
| Llama 2 | 13B | 50.63 | 32.00 | 38.98 | 25.90 | 28.98 | 33.53 | 29.66 | 34.36 |
| Latxa v1.1 | 13B | 66.51 | 53.89 | 53.36 | 44.11 | 32.67 | 56.38 | 43.66 | 50.08 |
| Mixtral | 8x7B | 52.55 | 50.44 | 45.00 | 26.43 | 37.50 | 42.51 | 39.87 | 41.97 |
| Yi | 34B | 52.22 | 54.56 | 43.90 | 27.30 | 34.66 | 42.57 | 39.68 | 42.05 |
| Llama 2 | 70B | 51.62 | 33.56 | 42.55 | 24.16 | 27.84 | 38.43 | 33.08 | 35.47 |
| Latxa v1.1 | 70B | 70.55 | 71.67 | 59.74 | 60.65 | 50.57 | 62.45 | 51.90 | 61.08 |
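The Avg column appears to be the unweighted mean of the seven task scores (an assumption, checked below against the Latxa v1.1 13B row):

```python
# Per-task scores for Latxa v1.1 13B, taken from the results table.
scores = {
    "XStory": 66.51, "Belebele": 53.89, "BasGLUE": 53.36,
    "EusProf": 44.11, "EusRead": 32.67, "EusTrivia": 56.38,
    "EusExams": 43.66,
}
avg = sum(scores.values()) / len(scores)
print(round(avg, 2))  # 50.08, matching the reported Avg
```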
Environmental Impact
Carbon emissions are estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
| Model | Size | Time (GPU hours) | Carbon emitted (kg CO2 eq) |
|---|---|---|---|
| Latxa v1.1 | 7B | 952.5 | 124.47 |
| Latxa v1.1 | 13B | 2,518.0 | 329.06 |
| Latxa v1.1 | 70B | 30,266.0 | 3,955.17 |
| Total | - | 33,636.5 | 4,408.70 |
- Hardware Type: HPC cluster, 4x A100 64GB nodes
- Hours used: 33,636.5 GPU hours
- Compute cluster: CINECA HPC
- Compute Region: Italy
- Carbon Emitted: 4,408.7 kg CO2 eq
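The ML Impact methodology estimates emissions roughly as GPU-hours x per-GPU power x grid carbon intensity (optionally scaled by data-centre PUE). The sketch below illustrates the formula only; the power draw and carbon-intensity values are placeholders, not the figures used to produce the numbers above.

```python
def estimate_emissions_kg(gpu_hours, gpu_power_kw, carbon_intensity_kg_per_kwh, pue=1.0):
    """Rough emissions estimate: energy consumed (kWh) times grid intensity (kg CO2/kWh)."""
    energy_kwh = gpu_hours * gpu_power_kw * pue
    return energy_kwh * carbon_intensity_kg_per_kwh

# Illustrative placeholders only: ~0.4 kW per A100 and an assumed grid intensity.
print(estimate_emissions_kg(952.5, 0.4, 0.33))
```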
Acknowledgements
This work has been partially supported by the Basque Government (IKER-GAITU project). It has also been partially supported by the Ministerio para la Transformación Digital y de la Función Pública - Funded by EU – NextGenerationEU within the framework of the project with reference 2022/TL22/00215335. The models were trained on the Leonardo supercomputer at CINECA under the EuroHPC Joint Undertaking, project EHPC-EXT-2023E01-013.
Citation
To cite our work, please use:
@misc{etxaniz2024latxa,
title={{L}atxa: An Open Language Model and Evaluation Suite for {B}asque},
author={Julen Etxaniz and Oscar Sainz and Naiara Perez and Itziar Aldabe and German Rigau and Eneko Agirre and Aitor Ormazabal and Mikel Artetxe and Aitor Soroa},
year={2024},
eprint={2403.20266},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
Evaluation results
- Accuracy (0-shot) on xstory_cloze: 65.51
- Accuracy (5-shot) on belebele: 53.89
- Average scores (5-shot) on basque_glue: 53.56
- Accuracy (5-shot) on eus_proficiency: 44.11
- Accuracy (5-shot) on eus_reading: 32.67
- Accuracy (5-shot) on eus_trivia: 56.38
- Accuracy (5-shot) on eus_exams: 43.66