Kamile Simkute (2036722)

Adversarial Attack using Deep Learning

Project Abstract

My motivation for conducting this research stems from the understanding that adversarial attacks are techniques used to deceive models, for example by intentionally constructing misleading inputs that produce incorrect outputs. These attacks have practical consequences, such as causing a self-driving car to veer towards oncoming traffic. The use of adversarial techniques, exemplified by the Fast Gradient Sign Method (FGSM), underscores how susceptible deep neural network predictions are to manipulation through minimal input perturbations.

This research aims to highlight the challenges inherent in deep learning, particularly the vulnerability to adversarial examples, which poses significant risks across technological domains such as computer vision and facial identification. It includes a review of recent findings on adversarial example methods and proposes categorizations of these techniques. My primary aim has been to implement one adversarial technique, the FGSM attack, to investigate these vulnerabilities further.

Regarding the methods employed, I constructed adversarial examples using FGSM, following guidance from a GitHub tutorial, and implemented the attack in Python with PyTorch. PyTorch was installed through PyCharm's integrated development environment (IDE).

The main finding of this research is that model accuracy decreases as the epsilon value increases on the CIFAR-10 dataset, mirroring trends observed on MNIST and indicating vulnerability. In addition, the successful generation of adversarial examples demonstrates the susceptibility of models trained on CIFAR-10 to such perturbations. The key insight from this work is that the FGSM attack effectively generates adversarial examples, highlighting the pressing need for robustness improvements in neural network models.

My work includes plotting accuracy against epsilon to visually represent the model's vulnerability, alongside generating adversarial examples for further analysis. In conclusion, the experiments conducted in this research shed light on the vulnerability of neural networks to adversarial attacks, emphasizing the critical importance of robustness in model design and training.
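
To make the approach concrete, the sketch below shows the standard FGSM perturbation and a simple accuracy-versus-epsilon evaluation loop of the kind described above. It is a minimal illustration under stated assumptions, not the project's exact code: the model, device, and test_loader arguments are placeholders, and it assumes input images are scaled to the [0, 1] range.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(image, epsilon, data_grad):
        # FGSM: add a perturbation of size epsilon in the direction of the
        # sign of the loss gradient with respect to the input image.
        perturbed = image + epsilon * data_grad.sign()
        # Keep pixel values inside the valid [0, 1] range.
        return torch.clamp(perturbed, 0, 1)

    def test_attack(model, device, test_loader, epsilon):
        # Measure classification accuracy on adversarially perturbed test images.
        # model, device and test_loader are assumed placeholders for this sketch.
        correct = 0
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            data.requires_grad = True  # needed to obtain gradients w.r.t. the input

            output = model(data)
            loss = F.cross_entropy(output, target)
            model.zero_grad()
            loss.backward()

            perturbed_data = fgsm_attack(data, epsilon, data.grad.data)
            final_pred = model(perturbed_data).argmax(dim=1)
            correct += (final_pred == target).sum().item()

        return correct / len(test_loader.dataset)

Running test_attack for a range of epsilon values (e.g. 0.0 to 0.3) and plotting the returned accuracies gives the accuracy-versus-epsilon curve referred to above: accuracy falls as the perturbation budget grows.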

Keywords: Deep Learning, Cyber Security, Neural Networks

 

Conference Details

 

Session: Poster Session B at Poster Stand 63

Location: Sir Stanley Clarke Auditorium, Wednesday 8th, 09:00 – 12:30

Markers: Lu Zhang, Muneeb Ahmad

Course: BSc Computer Science, 3rd Year

Future Plans: I’m looking for work