Deep Learning, Ian Goodfellow, Yoshua Bengio, and Aaron Courville, 2016 (MIT Press) - This foundational textbook provides comprehensive coverage of neural network architectures, forward and backward propagation, loss functions, and gradient-based optimization techniques such as stochastic gradient descent.
Neural Networks and Deep Learning, Michael Nielsen, 2015 (Determination Press) - This online book builds neural network fundamentals from first principles, offering accessible, step-by-step explanations of forward propagation, loss calculation, backpropagation, and gradient descent.
Derivatives, Backpropagation, and Vectorization, Andrej Karpathy, Justin Johnson, and Fei-Fei Li, 2023 (Stanford University) - These Stanford lecture notes provide a clear, concise, and mathematically grounded explanation of backpropagation, its role in computing gradients for neural network training, and the overall optimization process; a brief code sketch illustrating the topics common to these references follows this list.
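The sketch below is not taken from any of the listed references; it is a minimal, hypothetical NumPy example of the ideas they all cover: a forward pass through a small network, a mean squared error loss, backpropagation via the chain rule, and a plain gradient descent update. The toy dataset, layer sizes, and hyperparameters are illustrative choices only.

```python
import numpy as np

# Hypothetical toy dataset: map 2-D inputs to a scalar target.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 2))                 # 8 samples, 2 features
y = 0.5 * X[:, :1] - 0.3 * X[:, 1:]         # targets, shape (8, 1)

# Initialize a tiny two-layer network: 2 -> 4 -> 1.
W1 = rng.normal(scale=0.5, size=(2, 4))
b1 = np.zeros((1, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))
b2 = np.zeros((1, 1))

lr = 0.1  # gradient descent step size

for step in range(200):
    # Forward propagation.
    z1 = X @ W1 + b1          # hidden pre-activations
    h1 = np.tanh(z1)          # hidden activations
    y_hat = h1 @ W2 + b2      # network output (linear output layer)

    # Mean squared error loss.
    loss = np.mean((y_hat - y) ** 2)

    # Backpropagation: apply the chain rule layer by layer.
    d_yhat = 2.0 * (y_hat - y) / len(X)     # dL/d(y_hat)
    dW2 = h1.T @ d_yhat                     # dL/dW2
    db2 = d_yhat.sum(axis=0, keepdims=True)
    d_h1 = d_yhat @ W2.T                    # dL/dh1
    d_z1 = d_h1 * (1.0 - h1 ** 2)           # tanh'(z1) = 1 - tanh(z1)^2
    dW1 = X.T @ d_z1
    db1 = d_z1.sum(axis=0, keepdims=True)

    # Gradient descent parameter update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

    if step % 50 == 0:
        print(f"step {step:3d}  loss {loss:.4f}")
```

Running the loop prints a steadily decreasing loss, which is the behavior each of the references derives analytically before introducing more sophisticated optimizers and automatic differentiation.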