---
language:
- es
---
# EQASpa 7b
This model is a fine-tuned version of Llama-2-7b-hf for the extractive question answering task in Spanish.
It was fine-tuned with LoRA for one epoch on the training partition of the QuALES 2022 dataset.
## Prompt format
To use the model, apply the following prompt format:

```
### TEXTO:
{{Context document}}
### PREGUNTA:
{{Question}}
### RESPUESTA:
```
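The template above can be filled in with a small helper. This is a minimal sketch: `build_prompt` is a hypothetical name, and the exact whitespace between sections is an assumption.

```python
def build_prompt(context: str, question: str) -> str:
    # Assemble the expected prompt; the trailing "### RESPUESTA:" header
    # cues the model to generate the answer span.
    return (
        "### TEXTO:\n"
        f"{context}\n"
        "### PREGUNTA:\n"
        f"{question}\n"
        "### RESPUESTA:\n"
    )

prompt = build_prompt(
    "El Río de la Plata separa Uruguay de Argentina.",
    "¿Qué países separa el Río de la Plata?",
)
print(prompt)
```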
## Evaluation
We evaluate the model on the test partition of the QuALES dataset and compare it against a one-shot prompting baseline.
| Prompt | Model | Acc_exact | F_bertscore |
|---|---|---|---|
| one-shot prompting | zephyr-7b-beta | 0.025 | 0.614 |
| one-shot prompting | Llama-2-13b-chat-hf | 0.192 | 0.700 |
| default | EQASpa 7b | 0.225 | 0.713 |
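For reference, `Acc_exact` scores a prediction as correct only when it matches the gold answer exactly after normalization. A minimal sketch follows; the specific normalization used for QuALES scoring is an assumption here.

```python
import string

def normalize(text: str) -> str:
    # Lowercase, strip ASCII punctuation, and collapse whitespace.
    # Assumed normalization; the official QuALES scorer may differ.
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())

def exact_match(prediction: str, gold: str) -> bool:
    # Exact string equality after normalization.
    return normalize(prediction) == normalize(gold)

print(exact_match("Montevideo.", "montevideo"))  # True
```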
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
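Expressed as a `transformers` `BitsAndBytesConfig`, the settings above correspond to the sketch below; it mirrors the listed values rather than reproducing the exact training script.

```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with fp16 compute, matching the config listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```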
### Framework versions
- PEFT 0.4.0