---
base_model: Qwen/Qwen2-0.5B-Instruct
datasets: trl-lib/ultrafeedback-prompt
library_name: transformers
model_name: online-dpo-qwen2-4
tags:
- trl
- generated_from_trainer
- online-dpo
licence: license
---

# Model Card for online-dpo-qwen2-4

This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct) on the [trl-lib/ultrafeedback-prompt](https://huggingface.co/datasets/trl-lib/ultrafeedback-prompt) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

# Load the fine-tuned model as a chat-style text-generation pipeline
generator = pipeline("text-generation", model="qgallouedec/online-dpo-qwen2-4", device="cuda")

# The pipeline accepts a chat-format list of messages; the reply is appended
# to the conversation, so index 1 is the assistant's answer.
output = generator([{"role": "user", "content": question}], max_new_tokens=500)[0]
print(output["generated_text"][1]["content"])
```

## Training procedure

[Visualize in Weights & Biases](https://wandb.ai/huggingface/huggingface/runs/8q6fzgyf)

This model was trained with Online DPO, a method introduced in [Direct Language Model Alignment from Online AI Feedback](https://huggingface.co/papers/2402.04792).

### Framework versions

- TRL: 0.12.0.dev0
- Transformers: 4.45.0.dev0
- Pytorch: 2.4.1
- Datasets: 3.0.0
- Tokenizers: 0.19.1
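
### Training sketch

The exact training script and hyperparameters for this run are not recorded in this card. The following is a minimal sketch of an Online DPO setup with TRL on the same base model and dataset. The choice of judge (`PairRMJudge`, which requires the `llm-blender` package) and the default `OnlineDPOConfig` hyperparameters are assumptions for illustration, not the settings used for this particular run.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import OnlineDPOConfig, OnlineDPOTrainer, PairRMJudge

# Base policy model and tokenizer
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-0.5B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B-Instruct")

# Online DPO samples completion pairs from the policy during training and
# needs a preference signal; PairRMJudge is one option (an assumption here,
# the judge actually used for this run is not documented).
judge = PairRMJudge()

# Prompt-only dataset: completions are generated online, not read from disk
train_dataset = load_dataset("trl-lib/ultrafeedback-prompt", split="train")

training_args = OnlineDPOConfig(output_dir="online-dpo-qwen2-4", logging_steps=10)
trainer = OnlineDPOTrainer(
    model=model,
    judge=judge,
    args=training_args,
    processing_class=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()
```

In contrast to offline DPO, which trains on a fixed dataset of chosen/rejected pairs, Online DPO generates two completions per prompt from the current policy at each step and lets the judge rank them, which is why the dataset above contains only prompts.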