Adversarial Neighborhoods: Statistical analysis of adversarial attacks against image classification models
Artificial neural networks have achieved remarkable success in automated image classification and many other tasks. However, these models are vulnerable to small "adversarial" perturbations of input samples that greatly disrupt classification output. In this work we describe methods used to generate adversarial examples and the characteristics of these examples. We also introduce a statistical analysis of such examples and propose a definition that can be used to characterize and identify them.
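As a minimal illustration of the kind of attack the talk concerns, the sketch below applies the Fast Gradient Sign Method (FGSM), one standard way to generate adversarial examples: the input is nudged by a small step in the sign of the loss gradient. The toy logistic-regression "classifier", its weights, and the inputs are illustrative assumptions, not methods or data from the talk itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y_true, eps):
    """Perturb x by eps in the direction that increases the loss.

    For logistic regression with cross-entropy loss, the gradient of
    the loss w.r.t. the input x is (p - y_true) * w, where
    p = sigmoid(w @ x + b). FGSM takes a step of size eps along the
    sign of that gradient.
    """
    p = sigmoid(w @ x + b)
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

# Toy example: a 2-D input correctly classified as class 1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])        # w @ x + b = 1.5 > 0, so class 1

x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.9)

print(sigmoid(w @ x + b) > 0.5)      # original input: classified as class 1
print(sigmoid(w @ x_adv + b) > 0.5)  # perturbed input: prediction flips
```

A small, bounded perturbation (here at most 0.9 per coordinate) is enough to flip the toy model's prediction; against deep networks the same idea works with far smaller, often imperceptible, perturbations.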
Zoom: https://arizona.zoom.us/j/98593835992 Password: 021573