nebchi's Collections
DPO Dataset
Updated Aug 8
A collection of Korean DPO datasets.
maywell/ko_Ultrafeedback_binarized — Viewer • Updated Nov 9, 2023 • 62k • 63 • 28
kuotient/orca-math-korean-dpo-pairs — Viewer • Updated Apr 5 • 193k • 198 • 9
zzunyang/dpo_data — Viewer • Updated Jan 26 • 126 • 62
SJ-Donald/orca-dpo-pairs-ko — Viewer • Updated Jan 24 • 36k • 87 • 7
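The repositories above are preference-pair corpora meant for DPO fine-tuning. Below is a minimal sketch of loading one of them with the Hugging Face `datasets` library and inspecting its schema before training; the split name and any column names such as "prompt", "chosen", and "rejected" are assumptions, since each dataset in the collection may use a slightly different layout — check the individual dataset card.

```python
# Minimal sketch: load one dataset from this collection and inspect it.
# Assumptions: the repo is public, a "train" split exists, and the schema
# follows the common prompt/chosen/rejected DPO layout (verify on the card).
from datasets import load_dataset

ds = load_dataset("maywell/ko_Ultrafeedback_binarized", split="train")

# Check the actual column names and a sample row before wiring the data
# into a DPO training loop.
print(ds.column_names)
print(ds[0])
```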