angelahzyuan committed
Commit 1ea9434 (parent 364a367)

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -8,9 +8,9 @@ pipeline_tag: text-generation
 ---
 Self-Play Preference Optimization for Language Model Alignment (https://arxiv.org/abs/2405.00675)
 
-# Mistral7B-PairRM-SPPO-Iter3
+# Mistral7B-PairRM-SPPO
 
-This model was developed using [Self-Play Preference Optimization](https://arxiv.org/abs/2405.00675) at iteration 3, starting from the [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) architecture. We used the prompt sets from the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, split into 3 parts for 3 iterations following [snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset](https://huggingface.co/datasets/snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset). All responses used are synthetic.
+This model was developed using [Self-Play Preference Optimization](https://arxiv.org/abs/2405.00675), starting from the [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) architecture. We used the prompt sets from the [openbmb/UltraFeedback](https://huggingface.co/datasets/openbmb/UltraFeedback) dataset, split into 3 parts for 3 iterations following [snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset](https://huggingface.co/datasets/snorkelai/Snorkel-Mistral-PairRM-DPO-Dataset). All responses used are synthetic.
 
 
 While K = 5, this model uses three samples to estimate the soft probabilities P(y_w > y_l) and P(y_l > y_w): the winner, the loser, and one additional random sample. This approach has been shown to deliver better performance on AlpacaEval 2.0 than the results reported in [our paper](https://arxiv.org/abs/2405.00675).
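
To make the estimate described in the last paragraph of the card concrete, here is a minimal sketch of one way the soft probabilities could be computed from a three-response reference set (winner, loser, and one extra sample). The function name `pairwise_prob`, its signature, and the averaging scheme are illustrative assumptions rather than the authors' released implementation; in practice the pairwise scores would come from a PairRM-style preference model.

```python
# Illustrative sketch only (assumed names and structure, not the SPPO codebase):
# estimate soft win probabilities for the winner and loser by averaging a
# pairwise preference score over a small reference set of responses.
from typing import Callable, List, Tuple


def estimate_soft_probs(
    prompt: str,
    winner: str,
    loser: str,
    extra_sample: str,
    pairwise_prob: Callable[[str, str, str], float],
) -> Tuple[float, float]:
    """Return (P_hat(winner wins), P_hat(loser wins)) against the reference set.

    `pairwise_prob(prompt, a, b)` is a hypothetical scorer returning an
    estimate of P(a > b), e.g. a sigmoid over a PairRM preference logit.
    """
    reference: List[str] = [winner, loser, extra_sample]
    p_winner = sum(pairwise_prob(prompt, winner, y) for y in reference) / len(reference)
    p_loser = sum(pairwise_prob(prompt, loser, y) for y in reference) / len(reference)
    return p_winner, p_loser


if __name__ == "__main__":
    # Dummy scorer for demonstration: no preference between any pair.
    dummy = lambda prompt, a, b: 0.5
    print(estimate_soft_probs("prompt", "winner text", "loser text", "extra text", dummy))
```

This is only one plausible reading of the three-sample estimate; the SPPO paper and code define the exact form.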
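
Since the card declares `pipeline_tag: text-generation` and the base model is Mistral-7B-Instruct-v0.2, the checkpoint can presumably be loaded with the standard `transformers` chat workflow. The repo id below is a placeholder (the full `<org>/<name>` id is not given above), and the generation settings are arbitrary.

```python
# Usage sketch under the assumptions stated above; replace `model_id` with the
# model's actual Hugging Face repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Mistral7B-PairRM-SPPO"  # placeholder: use the full <org>/<name> id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "user", "content": "Summarize self-play preference optimization in two sentences."}
]
# Mistral-Instruct-style chat template, inherited from the base model.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```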