Training Language Models to Follow Instructions with Human Feedback, Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan Lowe, 2022, arXiv preprint, DOI: 10.48550/arXiv.2203.02155 - Paper demonstrating the use of PPO to align LLMs with human preferences (RLHF).
Fine-tune a LLaMA model with 🤗PEFT & 🤗TRL, Edward Beeching, Younes Belkada, Leandro von Werra, Sourab Mangrulkar, Lewis Tunstall, Kashif Rasul, 2023, Hugging Face Blog - Practical guide to applying PPO with TRL for LLM fine-tuning, including hyperparameter details.