Poisoning Attacks against Support Vector Machines, Battista Biggio, Blaine Nelson, and Pavel Laskov, 2012. Proceedings of the 29th International Conference on Machine Learning (ICML), Omnipress. DOI: 10.48550/arXiv.1206.6389 - This paper introduces an early data poisoning attack against support vector machines, demonstrating how an attacker can manipulate the training data to degrade model performance.
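The sketch below is a deliberately simplified illustration of the poisoning idea, not the paper's gradient-based attack: an attacker appends mislabeled points to the training set and the SVM's test accuracy drops. All names, dataset sizes, and the 20% poisoning rate are illustrative assumptions.

```python
# Simplified poisoning illustration (NOT the paper's gradient-based attack):
# inject mislabeled points into an SVM's training set and measure the accuracy drop.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)

clean = SVC(kernel="linear").fit(X_train, y_train)
print("clean accuracy:   ", clean.score(X_test, y_test))

# Attacker contributes 20% extra points copied from the training set,
# slightly perturbed and with flipped labels.
n_poison = int(0.2 * len(X_train))
idx = rng.choice(len(X_train), n_poison, replace=False)
X_poison = X_train[idx] + 0.1 * rng.randn(n_poison, X_train.shape[1])
y_poison = 1 - y_train[idx]  # flipped labels

X_tainted = np.vstack([X_train, X_poison])
y_tainted = np.concatenate([y_train, y_poison])
poisoned = SVC(kernel="linear").fit(X_tainted, y_tainted)
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```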
Towards Deep Learning Models Resistant to Adversarial Attacks, Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu, 2018. International Conference on Learning Representations (ICLR). DOI: 10.48550/arXiv.1706.06083 - This paper presents Projected Gradient Descent (PGD) as a strong adversarial attack and a robust adversarial training method to improve model resilience.
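A minimal PyTorch sketch of an L-infinity PGD attack of the kind the paper uses, assuming `model` is a differentiable classifier over inputs in [0, 1] and (x, y) is a labeled batch; the hyperparameters eps, alpha, and steps are illustrative, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Random start inside the eps-ball, clipped to the valid input range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball and valid range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```

Adversarial training in this setting then minimizes the loss on `pgd_attack(model, x, y)` rather than on the clean batch `x`.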
Membership Inference Attacks Against Machine Learning Models, Reza Shokri, Marco Stronati, Congzheng Song, and Vitaly Shmatikov, 2017. IEEE Symposium on Security and Privacy (S&P), IEEE. DOI: 10.1109/SP.2017.37 - This paper demonstrates a practical membership inference attack, allowing an attacker to determine whether a data point was part of a model's training set.
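The sketch below shows a simplified confidence-thresholding baseline for membership inference, not the paper's full shadow-model attack: because an overfit model assigns unusually high confidence to its own training points, those points can be guessed to be members. The model choice, dataset, and 0.9 threshold are illustrative assumptions.

```python
# Simplified membership inference baseline (confidence thresholding),
# not the paper's shadow-model attack.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_member, X_nonmember, y_member, y_nonmember = train_test_split(
    X, y, test_size=0.5, random_state=0)

# Target model is trained only on the "member" half and tends to overfit it.
target = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_member, y_member)

def infer_membership(model, X, y, threshold=0.9):
    # Guess "member" when the confidence assigned to the true label exceeds the threshold.
    conf = model.predict_proba(X)[np.arange(len(y)), y]
    return conf > threshold

tpr = infer_membership(target, X_member, y_member).mean()        # training points flagged
fpr = infer_membership(target, X_nonmember, y_nonmember).mean()  # held-out points flagged
print(f"flagged as members: {tpr:.2f} (train) vs {fpr:.2f} (held out)")
```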