---
base_model: Qwen/Qwen2-0.5B-Instruct
datasets: trl-lib/ultrafeedback-prompt
library_name: transformers
model_name: online-dpo-qwen2-4
tags:
- trl
- generated_from_trainer
- online-dpo
license: license
---

# Model Card for online-dpo-qwen2-4

This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [trl-lib/ultrafeedback-prompt](https://huggingface.co/datasets/trl-lib/ultrafeedback-prompt) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

# Load the fine-tuned model as a chat-capable text-generation pipeline.
generator = pipeline("text-generation", model="qgallouedec/online-dpo-qwen2-4", device="cuda")

# Pass the conversation as a list of messages; the pipeline applies the chat template.
output = generator([{"role": "user", "content": question}], max_new_tokens=500)[0]

# "generated_text" holds the full conversation; index 1 is the assistant's reply.
print(output["generated_text"][1]["content"])
```
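
If you prefer to work below the pipeline abstraction, the equivalent sketch below applies the chat template by hand. The decoding parameters here are illustrative choices, not tuned values from this repository:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "qgallouedec/online-dpo-qwen2-4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cuda")

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
messages = [{"role": "user", "content": question}]

# Apply the model's chat template and tokenize in one step.
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Generate, then decode only the newly generated tokens.
output_ids = model.generate(input_ids, max_new_tokens=500)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```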

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/huggingface/huggingface/runs/8q6fzgyf)

This model was trained with Online DPO, a method introduced in [Direct Language Model Alignment from Online AI Feedback](https://huggingface.co/papers/2402.04792).
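
As a rough illustration of the training setup, here is a minimal sketch using TRL's `OnlineDPOTrainer`. The judge, hyperparameters, and output directory are assumptions for illustration, not the exact configuration of this run:

```python
# Minimal Online DPO sketch (illustrative; not the exact recipe of this run).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import OnlineDPOConfig, OnlineDPOTrainer, PairRMJudge

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")

# A pairwise judge supplies the online AI feedback; PairRMJudge is one option TRL ships.
judge = PairRMJudge()
train_dataset = load_dataset("trl-lib/ultrafeedback-prompt", split="train")

training_args = OnlineDPOConfig(output_dir="online-dpo-qwen2-4", logging_steps=10)
trainer = OnlineDPOTrainer(
    model=model,
    judge=judge,
    args=training_args,
    processing_class=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()
```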

### Framework versions

- TRL: 0.12.0.dev0
- Transformers: 4.45.0.dev0
- PyTorch: 2.4.1
- Datasets: 3.0.0
- Tokenizers: 0.19.1