davanstrien committed
Commit 0b72d8e • Parent: 1c43f88

Add link to ORPO paper in app.py overview

Files changed (1):
1. app.py  +1 -1
app.py CHANGED
@@ -117,7 +117,7 @@ languages = list(datasets.keys())
 overview = """
 This Space shows an overview of Direct Preference Optimization (DPO) datasets available on the Hugging Face Hub across different languages.
 
-Recently ORPO has been demonstrated to be a powerful tool for training better performing language models.
+Recently, [Odds Ratio Preference Optimization (ORPO)](https://huggingface.co/papers/2403.07691) has been demonstrated to be a powerful tool for training better-performing language models.
 
 - ORPO training can be done using DPO-style datasets
 - Is having enough DPO datasets for different languages a key ingredient for training better models in every language?
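
For context on the overview's claim that ORPO training can reuse DPO-style datasets, the sketch below shows one way this might look with TRL's ORPOTrainer. It is illustrative only and not part of the commit: the model and dataset names are placeholders, and the exact keyword arguments (e.g. tokenizer vs. processing_class) depend on the TRL version.

```python
# Minimal sketch (not part of the commit): ORPO fine-tuning on a DPO-style
# preference dataset with "prompt", "chosen" and "rejected" columns.
# Model and dataset names are placeholders; ORPOConfig fields may differ
# between TRL releases.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_name = "your-base-model"  # placeholder checkpoint
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Any DPO-style dataset with prompt/chosen/rejected columns can be reused.
train_dataset = load_dataset("your-org/your-dpo-dataset", split="train")  # placeholder

training_args = ORPOConfig(
    output_dir="orpo-output",
    beta=0.1,  # weight of the odds-ratio term (lambda in the ORPO paper)
    max_length=1024,
    max_prompt_length=512,
)

trainer = ORPOTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,  # newer TRL versions take processing_class= instead
)
trainer.train()
```

Because ORPO folds the preference penalty directly into the supervised loss and needs no separate reference model, an existing DPO preference dataset (prompt/chosen/rejected) can be reused as-is.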