Previous chapters focused primarily on the unconstrained optimization problems typical of training many machine learning models. This chapter addresses situations that call for different approaches. You will learn about constrained optimization, where parameters must satisfy specific conditions, covering both the theory (Lagrangian duality and the Karush-Kuhn-Tucker (KKT) conditions) and practical algorithms such as projected gradient methods. We will also introduce derivative-free optimization techniques, useful when gradient information is unavailable or unreliable. Finally, we will cover Bayesian optimization as a method for hyperparameter tuning and touch on optimization strategies relevant to reinforcement learning.
7.1 Constrained Optimization Fundamentals
7.2 Lagrangian Duality and KKT Conditions
7.3 Projected Gradient Methods
7.4 Derivative-Free Optimization Overview
7.5 Bayesian Optimization for Hyperparameter Tuning
7.6 Optimization for Reinforcement Learning Policies
7.7 Practice: Implementing Projected Gradient Descent