
Datasets for Direct Preference for Denoising Diffusion Policy Optimization (D3PO)

Description: This repository contains the datasets for the D3PO method introduced in the paper Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model. The d3po_dataset directory pertains to the image-distortion experiment with the anything-v5 model. The text2img_dataset directory comprises the images generated by the pretrained, preferred-image fine-tuned, reward-weighted fine-tuned, and D3PO fine-tuned models in the prompt-image alignment experiment.

Source Code: The code used to generate this data can be found here.
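
To fetch the data locally, one option is the huggingface_hub client. The sketch below is only an illustration: the repo_id is a placeholder, not the confirmed identifier of this repository, so substitute the actual id shown on this page before running it.

# Download sketch using huggingface_hub (pip install huggingface_hub).
# NOTE: the repo_id is a placeholder assumption -- replace it with this
# dataset's actual id.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="<user>/d3po-datasets",  # placeholder id, not confirmed
    repo_type="dataset",
)
print("Dataset downloaded to:", local_dir)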

Directory

  • d3po_dataset

    • epoch1
      • all_img
        • *.png
      • deformed_img
        • *.png
      • json
        • data.json (required for training)
      • prompt.json
      • sample.pkl (required for training; see the loading sketch after this listing)
    • epoch2
    • ...
    • epoch5
  • text2img_dataset

    • img
    • data_*.json
    • plot.ipynb
    • prompt.txt
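
The files marked "required for training" are not documented further in this card, so the following is only a minimal loading sketch under the assumption that the layout above exists on disk; the contents of data.json and sample.pkl are inspected generically because their schema is not described here.

# Loading sketch for one epoch of d3po_dataset (standard library only).
# Assumes the directory layout listed above; field names inside the loaded
# objects are not documented in this card, so nothing is assumed about them.
import json
import pickle
from pathlib import Path

epoch_dir = Path("d3po_dataset/epoch1")  # adjust to your local download path

with open(epoch_dir / "json" / "data.json") as f:
    data = json.load(f)  # training annotations (schema not documented here)

with open(epoch_dir / "sample.pkl", "rb") as f:
    samples = pickle.load(f)  # sampled data used for training

all_imgs = sorted((epoch_dir / "all_img").glob("*.png"))
deformed_imgs = sorted((epoch_dir / "deformed_img").glob("*.png"))
print(f"{len(all_imgs)} images total, {len(deformed_imgs)} marked as deformed")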

Citation

@article{yang2023using,
  title={Using Human Feedback to Fine-tune Diffusion Models without Any Reward Model},
  author={Yang, Kai and Tao, Jian and Lyu, Jiafei and Ge, Chunjiang and Chen, Jiaxin and Li, Qimai and Shen, Weihan and Zhu, Xiaolong and Li, Xiu},
  journal={arXiv preprint arXiv:2311.13231},
  year={2023}
}