Towards Evaluating the Robustness of Neural Networks, Nicholas Carlini and David A. Wagner, 2017. 2017 IEEE Symposium on Security and Privacy (SP), IEEE Computer Society. DOI: 10.1109/SP.2017.49 - Presents strong adversarial attacks (the C&W attacks) tailored to the $L_0$, $L_2$, and $L_\infty$ norms, and argues that robustness should be evaluated by finding the minimum perturbation required for misclassification.
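The core idea behind the C&W $L_2$ attack can be sketched as minimizing $\|\delta\|_2^2 + c \cdot f(x+\delta)$, where $f$ is a margin loss on the logits that becomes zero once the input is misclassified with confidence $\kappa$. Below is a minimal illustration on a hypothetical two-class linear model (the toy weights, constants, and plain gradient descent are assumptions for clarity; the paper attacks deep networks and uses Adam with a change of variables):

```python
import numpy as np

# Toy linear "network": logits Z(x) = W @ x. This model is an assumption
# for illustration, not the networks evaluated in the paper.
W = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
x = np.array([0.5, 0.0])   # initially classified as class 0
t, other = 0, 1            # true class and (binary case) the runner-up
kappa, c, lr = 0.5, 2.0, 0.05

# Minimize ||delta||_2^2 + c * f(x + delta), where
# f(x') = max(Z(x')_t - max_{i != t} Z(x')_i, -kappa).
delta = np.zeros_like(x)
for _ in range(500):
    z = W @ (x + delta)
    margin = z[t] - z[other]
    grad = 2.0 * delta                 # gradient of ||delta||_2^2
    if margin > -kappa:                # margin loss still active
        grad += c * (W[t] - W[other])  # its gradient w.r.t. delta
    delta -= lr * grad

adv_class = int(np.argmax(W @ (x + delta)))
l2_norm = float(np.linalg.norm(delta))
print(adv_class, l2_norm)  # small-norm delta flips the prediction
```

The trade-off constant `c` balances perturbation size against misclassification; the paper selects it by binary search rather than fixing it as done here.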
Certified Adversarial Robustness via Randomized Smoothing, Jeremy Cohen, Elan Rosenfeld, Zico Kolter, 2019. Proceedings of the 36th International Conference on Machine Learning, Vol. 97, PMLR. DOI: 10.48550/arXiv.1902.02918 - Introduces randomized smoothing as a general framework for constructing provably robust classifiers and provides a method for quantifying certified adversarial robustness.
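The smoothed classifier predicts the class the base classifier returns most often under Gaussian noise, and a lower bound $p_A$ on that class's probability yields a certified $L_2$ radius; with $p_B = 1 - p_A$, the paper's bound $R = \tfrac{\sigma}{2}(\Phi^{-1}(p_A) - \Phi^{-1}(p_B))$ simplifies to $\sigma\,\Phi^{-1}(p_A)$. A minimal sketch, assuming a toy one-dimensional threshold classifier and a plain Monte Carlo estimate of $p_A$ (the paper instead uses a binomial lower confidence bound):

```python
import numpy as np
from statistics import NormalDist

def base_classifier(v):
    # Hypothetical base classifier: a simple threshold on a scalar input.
    return 0 if v < 1.0 else 1

rng = np.random.default_rng(0)
x, sigma, n = 0.0, 0.5, 10_000

# Smoothed prediction: majority vote of the base classifier under
# Gaussian noise N(0, sigma^2) added to the input.
noisy = x + sigma * rng.standard_normal(n)
votes = np.bincount([base_classifier(v) for v in noisy], minlength=2)
top_class = int(votes.argmax())
pA = votes.max() / n                       # Monte Carlo estimate of p_A

radius = sigma * NormalDist().inv_cdf(pA)  # certified L2 radius
print(top_class, radius)
```

Here $P(\text{class } 0) = \Phi((1.0 - x)/\sigma) = \Phi(2) \approx 0.977$, so the certified radius comes out close to $0.5 \cdot \Phi^{-1}(0.977) \approx 1.0$.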