In this lecture we look at possible countermeasures against adversarial examples in deep learning models. In particular, we focus on certified robustness, an approach that provides formal guarantees of resistance to adversarial perturbations under specific assumptions. We consider differential privacy, a technique of cryptographic inspiration originally conceived for database anonymization, and show how it can be applied to obtain a certifiably robust deep neural network.
- Types of defense against adversarial examples
- Certified robustness approach
- Database anonymization: k-anonymity
- Differential privacy in database anonymization
- Differential privacy as a defense against adversarial examples
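Before turning to the lecture material, it may help to see the core mechanism of differential privacy in miniature. The sketch below, a minimal illustration and not any specific system from the references, answers a counting query over a toy database with Laplace noise scaled to sensitivity/epsilon, the classic epsilon-differentially-private mechanism surveyed by Dwork; the database, query, and parameter values are invented for illustration.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Answer a numeric query with Laplace noise of scale
    sensitivity / epsilon, which yields epsilon-differential privacy."""
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

# Toy database: ages of six individuals.
ages = np.array([23, 35, 41, 52, 60, 29])

# Counting query: "how many records have age > 40?"
true_count = int((ages > 40).sum())

# A counting query has sensitivity 1: adding or removing one record
# changes the answer by at most 1.
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5)
print(true_count, noisy_count)
```

Smaller epsilon means stronger privacy but more noise; the noisy answer, not the raw count, is what an analyst would see.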
Survey on differential privacy:
- C. Dwork. Differential Privacy: A Survey of Results. TAMC 2008, pp. 1–19. Springer, 2008
Paper on certified robustness based on differential privacy:
- M. Lecuyer, V. Atlidakis, R. Geambasu, D. Hsu, S. Jana. Certified Robustness to Adversarial Examples with Differential Privacy. IEEE S&P 2019, pp. 656–673. IEEE, 2019
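To give a flavor of how Lecuyer et al. connect differential privacy to robustness, the sketch below illustrates the noise-and-average idea in miniature: perturb the input with random noise and average the classifier's output distribution over many draws, so that small input changes cannot move the expected scores much. This is only an illustrative simplification of the paper's PixelDP construction; `toy_model`, `sigma`, and `n_draws` are invented placeholders, not the paper's architecture or parameters.

```python
import numpy as np

def robust_predict(model, x, sigma=0.1, n_draws=100, rng=None):
    """Add Gaussian noise to the input and average the model's output
    probabilities over many noisy draws (noise-layer idea, simplified)."""
    rng = np.random.default_rng() if rng is None else rng
    probs = np.mean(
        [model(x + rng.normal(0.0, sigma, size=x.shape)) for _ in range(n_draws)],
        axis=0,
    )
    return int(np.argmax(probs)), probs

# Toy "model": a fixed two-class linear classifier followed by softmax.
W = np.array([[1.0, -1.0],
              [-1.0, 1.0]])

def toy_model(x):
    z = W @ x
    e = np.exp(z - z.max())  # numerically stable softmax
    return e / e.sum()

label, probs = robust_predict(toy_model, np.array([1.0, 0.0]), sigma=0.05)
```

In the actual paper the noise is injected inside the network and sized via DP sensitivity analysis, which is what turns this averaging into a certified robustness bound rather than just an empirical smoothing trick.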