Pairwise Proximal Policy Optimization: Harnessing Relative Feedback for LLM Alignment • arXiv:2310.00212 • Published Sep 30, 2023
Stabilizing RLHF through Advantage Model and Selective Rehearsal • arXiv:2309.10202 • Published Sep 18, 2023
Aligning Language Models with Offline Reinforcement Learning from Human Feedback • arXiv:2308.12050 • Published Aug 23, 2023
Secrets of RLHF in Large Language Models Part I: PPO • arXiv:2307.04964 • Published Jul 11, 2023