Chapter 1 reviewed foundational reinforcement learning concepts and the use of function approximation. While linear function approximators are useful, they often fall short when dealing with high-dimensional state spaces, such as raw pixel inputs from a screen or complex feature vectors. To address these limitations, we turn to the representational capacity of deep neural networks.
This chapter introduces Deep Q-Networks (DQN), a foundational algorithm that successfully combines Q-learning with deep neural networks to achieve strong performance on challenging tasks. We will examine the key components that enable stable and effective training, namely experience replay and the use of target networks.
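To make the experience replay idea concrete before the full treatment later in the chapter, here is a minimal sketch of a replay buffer. The class name `ReplayBuffer` and its method signatures are illustrative choices, not an API from the text: the essential points are that transitions are stored in a fixed-size buffer and sampled uniformly at random, which breaks the temporal correlation between consecutive training examples.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of (state, action, reward, next_state, done) transitions.

    Illustrative sketch; names and structure are assumptions, not the
    chapter's reference implementation.
    """

    def __init__(self, capacity, seed=0):
        # deque with maxlen evicts the oldest transition automatically
        self.buffer = deque(maxlen=capacity)
        self.rng = random.Random(seed)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random sampling decorrelates the minibatch from the
        # order in which transitions were collected
        return self.rng.sample(list(self.buffer), batch_size)

    def __len__(self):
        return len(self.buffer)
```

In a DQN training loop, the agent pushes each interaction into the buffer and periodically samples a minibatch to compute Q-learning targets, rather than learning from each transition as it arrives.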
Building upon the standard DQN algorithm, we will then analyze several important improvements developed to address its specific weaknesses.
You will gain an understanding of the mechanics behind these techniques and learn how to implement DQN and its primary variants.
© 2025 ApX Machine Learning