Hands-on Practical: Training and Fine-tuning a Model
Deep Learning, Ian Goodfellow, Yoshua Bengio, and Aaron Courville, 2016 (MIT Press) - Comprehensive coverage of deep learning fundamentals, including training neural networks, optimization algorithms, regularization techniques, and hyperparameter tuning.
Flux.jl Documentation, The Flux.jl Community, 2025 - Official documentation for the Flux.jl deep learning library, providing practical guidance on model construction, training, and utilities in Julia; a minimal training sketch follows this list.
Dropout: A Simple Way to Prevent Neural Networks from Overfitting, Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov, 2014, Journal of Machine Learning Research, Vol. 15 (JMLR) - Introduces Dropout, a regularization technique widely used to mitigate overfitting in neural networks by randomly dropping units during training; see the Dropout sketch after this list.
Random Search for Hyper-Parameter Optimization, James Bergstra and Yoshua Bengio, 2012, Journal of Machine Learning Research, Vol. 13 (JMLR) - Proposes random search as an efficient strategy for hyperparameter optimization, often outperforming grid search, especially in high-dimensional spaces; see the random-search sketch after this list.
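To ground the Flux.jl entry above, here is a minimal sketch of a Flux training loop on synthetic data. The layer sizes, optimiser, learning rate, and epoch count are illustrative assumptions for this sketch, not recommendations from the documentation.

```julia
using Flux

# Synthetic regression data: 100 samples with 4 features each (columns are samples).
X = rand(Float32, 4, 100)
Y = rand(Float32, 1, 100)

# A small feed-forward model; the sizes are arbitrary for this sketch.
model = Chain(Dense(4 => 16, relu), Dense(16 => 1))

# Explicit-style optimiser state, as in recent Flux versions.
opt_state = Flux.setup(Adam(1e-3), model)

# Train for a fixed number of epochs on a single full-batch pair.
for epoch in 1:100
    Flux.train!(model, [(X, Y)], opt_state) do m, x, y
        Flux.mse(m(x), y)   # mean squared error between prediction and target
    end
end
```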
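Dropout, as introduced by Srivastava et al., is available as a layer in Flux. The sketch below (layer sizes and drop probability are assumptions) shows the key behaviour: activations are zeroed at random in training mode, while the layer is inactive at test time.

```julia
using Flux

model = Chain(
    Dense(4 => 32, relu),
    Dropout(0.5),        # each activation zeroed with probability 0.5 during training
    Dense(32 => 1),
)

x = rand(Float32, 4, 8)

Flux.trainmode!(model)   # force training mode: dropout is stochastic
y_train = model(x)

Flux.testmode!(model)    # force test mode: dropout acts as the identity
y_test = model(x)
```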
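Finally, a sketch of random search in the spirit of Bergstra and Bengio: sample each hyperparameter independently at random and keep the configuration with the best validation loss. The search ranges, trial budget, and the `evaluate` stand-in are assumptions for illustration; a real run would train and validate a model inside `evaluate`.

```julia
using Random

# Stand-in for "train a model with these hyperparameters and return the
# validation loss"; replace with an actual training-and-validation run.
evaluate(lr, pdrop) = (log10(lr) + 3)^2 + (pdrop - 0.3)^2 + 0.1 * rand()

function random_search(ntrials)
    best_loss, best_config = Inf, nothing
    for _ in 1:ntrials
        lr = 10.0^(-5 + 4 * rand())   # learning rate, log-uniform in [1e-5, 1e-1)
        pdrop = 0.7 * rand()          # dropout rate, uniform in [0, 0.7)
        loss = evaluate(lr, pdrop)
        if loss < best_loss
            best_loss, best_config = loss, (lr = lr, pdrop = pdrop)
        end
    end
    return best_config, best_loss
end

Random.seed!(1)
config, loss = random_search(20)
println("best config: ", config, " with validation loss ", loss)
```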