---
language:
- en
license: gemma
datasets:
- openbmb/UltraFeedback
pipeline_tag: text-generation
model-index:
- name: Gemma-2-9B-It-SPPO-Iter2
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: ENEM Challenge (No Images)
      type: eduagarcia/enem_challenge
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 73.69
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BLUEX (No Images)
      type: eduagarcia-temp/BLUEX_without_images
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 63
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: OAB Exams
      type: eduagarcia/oab_exams
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 53.12
      name: accuracy
    source:
      url: >-
        https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 RTE
      type: assin2
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 94.07
      name: f1-macro
    source:
      url: >-
        https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 STS
      type: eduagarcia/portuguese_benchmark
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: pearson
      value: 78.28
      name: pearson
    source:
      url: >-
        https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: FaQuAD NLI
      type: ruanchaves/faquad-nli
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 77.46
      name: f1-macro
    source:
      url: >-
        https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HateBR Binary
      type: ruanchaves/hatebr
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 87.65
      name: f1-macro
    source:
      url: >-
        https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: PT Hate Speech Binary
      type: hate_speech_portuguese
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 71.13
      name: f1-macro
    source:
      url: >-
        https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: tweetSentBR
      type: eduagarcia/tweetsentbr_fewshot
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 69.4
      name: f1-macro
    source:
      url: >-
        https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2
      name: Open Portuguese LLM Leaderboard
---
[Self-Play Preference Optimization for Language Model Alignment](https://arxiv.org/abs/2405.00675)

# Gemma-2-9B-It-SPPO-Iter2
This model was developed with Self-Play Preference Optimization (SPPO) at iteration 2, using google/gemma-2-9b-it as the starting point. We used the prompt sets from the openbmb/UltraFeedback dataset, split into three parts for the three iterations following snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset. All responses used are synthetic.
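At each iteration, SPPO regresses the policy's log-probability ratio against a scaled, centered win probability. The sketch below is illustrative (function and argument names are ours, not from the released training code); `eta` defaults to the value listed under the training hyperparameters.

```python
def sppo_loss(logp_theta: float, logp_ref: float, p_win: float, eta: float = 1000.0) -> float:
    """Per-response SPPO objective (illustrative sketch).

    Regresses the log-probability ratio log(pi_theta / pi_ref) toward
    eta * (p_win - 0.5), where p_win estimates how often this response
    beats the current policy's responses under the preference model.
    """
    target = eta * (p_win - 0.5)
    return (logp_theta - logp_ref - target) ** 2
```

A response at `p_win = 0.5` (no better than the current policy) is pulled toward the reference distribution, while a sure winner (`p_win = 1.0`) is pushed toward a log-ratio of `eta * 0.5`.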
Terms of Use: Terms
Links to Other Models
## Model Description
- Model type: A 9B parameter GPT-like model fine-tuned on synthetic datasets.
- Language(s) (NLP): Primarily English
- License: Gemma Terms of Use
- Finetuned from model: google/gemma-2-9b-it
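Because the model is fine-tuned from google/gemma-2-9b-it, it expects Gemma's turn-based instruction format. A minimal prompt builder is sketched below (the helper name is ours); in practice, prefer `tokenizer.apply_chat_template` from `transformers`, which produces this layout automatically.

```python
def build_gemma_prompt(user_message: str) -> str:
    """Format a single-turn prompt in Gemma's instruction format.

    Gemma-2 instruction-tuned models delimit turns with
    <start_of_turn>/<end_of_turn> markers and expect generation to
    continue after the opening model turn.
    """
    return (
        f"<start_of_turn>user\n{user_message}<end_of_turn>\n"
        "<start_of_turn>model\n"
    )

print(build_gemma_prompt("Hello!"))
```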
## AlpacaEval Leaderboard Evaluation Results

| Model | LC. Win Rate | Win Rate | Avg. Length |
|---|---|---|---|
| Llama-3-8B-SPPO Iter1 | 48.70 | 40.76 | 1669 |
| Llama-3-8B-SPPO Iter2 | 50.93 | 44.64 | 1759 |
| Llama-3-8B-SPPO Iter3 | 53.27 | 47.74 | 1803 |
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-07
- eta: 1000
- per_device_train_batch_size: 8
- gradient_accumulation_steps: 1
- seed: 42
- distributed_type: deepspeed_zero3
- num_devices: 8
- optimizer: RMSProp
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_train_epochs: 1.0
## Citation
@misc{wu2024self,
title={Self-Play Preference Optimization for Language Model Alignment},
author={Wu, Yue and Sun, Zhiqing and Yuan, Huizhuo and Ji, Kaixuan and Yang, Yiming and Gu, Quanquan},
year={2024},
eprint={2405.00675},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
## Open Portuguese LLM Leaderboard Evaluation Results

Detailed results can be found on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=UCLA-AGI/Gemma-2-9B-It-SPPO-Iter2).
| Metric | Value |
|---|---|
| Average | **74.2** |
| ENEM Challenge (No Images) | 73.69 |
| BLUEX (No Images) | 63.00 |
| OAB Exams | 53.12 |
| Assin2 RTE | 94.07 |
| Assin2 STS | 78.28 |
| FaQuAD NLI | 77.46 |
| HateBR Binary | 87.65 |
| PT Hate Speech Binary | 71.13 |
| tweetSentBR | 69.40 |
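The reported Average is the plain mean of the nine task scores, which can be checked directly from the table above:

```python
# Scores copied from the leaderboard table.
scores = {
    "ENEM Challenge (No Images)": 73.69,
    "BLUEX (No Images)": 63.00,
    "OAB Exams": 53.12,
    "Assin2 RTE": 94.07,
    "Assin2 STS": 78.28,
    "FaQuAD NLI": 77.46,
    "HateBR Binary": 87.65,
    "PT Hate Speech Binary": 71.13,
    "tweetSentBR": 69.40,
}

average = sum(scores.values()) / len(scores)
print(round(average, 2))  # 74.2
```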