Before tackling more advanced techniques, it's essential to ensure our understanding of the core Reinforcement Learning (RL) principles is solid. This chapter serves as a focused refresher on the foundational concepts that underpin the field.
We will briefly revisit:

1.1 The Reinforcement Learning Problem Setup
1.2 Markov Decision Processes (MDPs) Recap
1.3 Value Functions and Bellman Equations
1.4 Tabular Solution Methods: Q-Learning and SARSA
1.5 Limitations of Tabular Methods

Finally, we will examine the inherent limitations of these tabular approaches, particularly when dealing with large or continuous state spaces. Understanding these constraints sets the stage for the function approximation methods explored in subsequent chapters. This review ensures we share common ground before proceeding to Deep Q-Networks and Policy Gradient methods.
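As a taste of the tabular methods reviewed in this chapter, the sketch below runs Q-learning on a tiny, made-up corridor MDP. The environment, reward of +1 at the goal, and the hyperparameters (`ALPHA`, `GAMMA`, `EPS`) are illustrative assumptions, not details from the text; the point is the table itself, one entry per (state, action) pair, which is exactly what stops scaling in large or continuous state spaces.

```python
import random
from collections import defaultdict

# Toy corridor MDP (an illustrative assumption): states 0..4,
# state 4 is terminal and yields reward +1; actions move left/right.
N_STATES = 5
ACTIONS = (-1, +1)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # step size, discount, exploration rate

def step(s, a):
    """Deterministic toy dynamics: clamp to the corridor, reward 1 at goal."""
    s2 = min(max(s + a, 0), N_STATES - 1)
    done = (s2 == N_STATES - 1)
    return s2, (1.0 if done else 0.0), done

# The Q "table": one scalar per (state, action) pair, default 0.0.
Q = defaultdict(float)

random.seed(0)
for _ in range(500):
    s, done = 0, False
    while not done:
        # Epsilon-greedy behavior policy.
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])
        s2, r, done = step(s, a)
        # Q-learning (off-policy) update: bootstrap from the max over
        # next-state actions, regardless of which action is taken next.
        target = r + (0.0 if done else GAMMA * max(Q[(s2, x)] for x in ACTIONS))
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

# The learned greedy policy should move right (+1) from every state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

Note that the table has only 10 entries here; with, say, a raw-pixel state space, the same dictionary would need an entry per distinct frame, which is the scaling problem that motivates the function approximation methods of later chapters.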
© 2025 ApX Machine Learning