This course provides a deep examination of adversarial attacks against machine learning models and the methods used to defend against them. It covers advanced attack techniques, including evasion, poisoning, and inference attacks, alongside sophisticated defense strategies such as adversarial training and certified robustness. Students learn to evaluate model security rigorously and to implement cutting-edge techniques for building more secure AI systems. Suitable for engineers and researchers seeking to understand and mitigate vulnerabilities in machine learning deployments.
Prerequisites: Strong foundation in machine learning theory (classification, optimization), deep learning concepts (CNNs, RNNs), and proficiency in Python with ML libraries (e.g., TensorFlow/PyTorch, Scikit-learn).
Level: Advanced
Advanced Attack Implementation
Implement sophisticated evasion attacks such as Carlini-Wagner (C&W) and Projected Gradient Descent (PGD), along with data poisoning strategies.
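To give a flavor of the material, the core PGD loop can be sketched in plain NumPy against a toy logistic-regression model. Function and parameter names here are illustrative assumptions, not course code; real coursework targets deep networks via automatic differentiation.

```python
import numpy as np

def pgd_linf(x, y, w, b, eps=0.3, alpha=0.05, steps=10):
    """Sketch of an L-infinity PGD evasion attack on a toy
    logistic-regression model p = sigmoid(w.x + b), label y in {0, 1}.
    Hypothetical helper for illustration; clipping to a valid data
    range (e.g. [0, 1] pixels) is omitted for brevity."""
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))  # model prediction
        grad = (p - y) * w                          # d(cross-entropy)/dx
        x_adv = x_adv + alpha * np.sign(grad)       # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)    # project into the eps-ball
    return x_adv
```

Each iteration takes a signed gradient step to increase the loss, then projects back into the epsilon-ball around the original input, which is exactly the structure PGD retains when applied to deep models.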
Defense Mechanisms
Apply and analyze advanced defense techniques like adversarial training and certified defenses.
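As a minimal example of the certified-defense idea, a linear binary classifier admits an exact closed-form L-infinity robustness certificate; the function below is an illustrative sketch, not a course implementation (deep networks require techniques such as interval bound propagation or randomized smoothing).

```python
import numpy as np

def certified_correct(x, y, w, b, eps):
    """Exact L-infinity robustness certificate for a linear binary
    classifier f(x) = sign(w.x + b) with labels in {-1, +1}.
    Any perturbation delta with ||delta||_inf <= eps shifts the margin
    by at most eps * ||w||_1, so the prediction is certifiably correct
    for the whole eps-ball iff y * (w.x + b) > eps * ||w||_1."""
    margin = y * (w @ x + b)
    return margin > eps * np.abs(w).sum()
```

The appeal of a certificate is that it rules out every perturbation in the ball at once, rather than only the ones a particular attack happens to find.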
Model Inference Attacks
Understand and execute membership inference, attribute inference, and model stealing attacks.
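The simplest membership inference baseline can be stated in a few lines: because models typically fit training points more tightly, an attacker guesses "member" when the per-example loss falls below a threshold. The helper below is a hypothetical sketch; the threshold choice is an assumption and in practice is calibrated on shadow models or held-out data.

```python
import numpy as np

def loss_threshold_mia(per_example_losses, threshold):
    """Loss-threshold membership inference baseline: predict that an
    example was in the training set when the model's loss on it is
    below `threshold`. Names and threshold are illustrative."""
    return np.asarray(per_example_losses) < threshold
```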
Robustness Evaluation
Rigorously evaluate model security using standard benchmarks and adaptive attack strategies.
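The standard headline metric in such evaluations is robust accuracy: the fraction of test points still classified correctly after each is perturbed by a given attack. A generic sketch, with the model and attack passed in as hypothetical callables:

```python
def robust_accuracy(model_predict, attack, xs, ys):
    """Fraction of examples classified correctly after perturbation.
    `model_predict(x)` returns a label; `attack(x, y)` returns a
    perturbed input. Both callables are illustrative placeholders."""
    correct = 0
    for x, y in zip(xs, ys):
        x_adv = attack(x, y)
        correct += int(model_predict(x_adv) == y)
    return correct / len(xs)
```

Reporting robust accuracy only against a fixed attack can overstate security; adaptive evaluations re-tune the attack against each defense, which is why this metric is paired with adaptive attack strategies in the course.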
Domain-Specific Adversarial ML
Analyze adversarial threats specific to domains like computer vision and natural language processing.
Practical Implementation
Gain hands-on experience using frameworks like ART or CleverHans for attack and defense simulation.
© 2025 ApX Machine Learning