Joshua Legg (2103188)

A tool for testing the robustness of machine learning algorithms against adversarial attacks

Project Abstract

Artificial intelligence is a rapidly growing area of computer science, and much of the recent surge in interest can be attributed to the popularity of generative models such as ChatGPT. These models work by "learning" from vast amounts of data and then making calculated predictions. Because that data is vulnerable to manipulation by malicious actors, the security of these models is called into question. This project delivers a tool that allows researchers to advance the security and stability of AI models that have been "poisoned" by malicious actors. Not all AI systems are the same: different algorithms are used for different purposes, and each machine learning algorithm has its own strengths and weaknesses, so some algorithms are less robust against specific attacks than others. The tool lets the user evaluate multiple machine learning algorithms against data poisoning techniques and returns a range of statistical metrics on each model for analysis. It was produced by first implementing several machine learning algorithms using TensorFlow and Keras to test against. After the models were created, a poisoning method was applied to them to simulate a data poisoning attack, and analysis of the resulting metrics indicates the effectiveness of the attack. Finally, the models were automated behind a single interface that allows the user to change simulation parameters, including the percentage of data poisoned and the model hyper-parameters. The overall deliverable is a tool that allows the user to test the robustness of different machine learning algorithms against adversarial attacks such as data poisoning, with adjustable parameters. This project's tool improves on the current approach to testing models because it allows multiple models to be tested from a single interface with adjustable parameters, and it can be extended in the future with improvements such as user-created algorithms, user-created attacks and user-supplied datasets.
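To illustrate the kind of simulation the tool automates, the sketch below shows a label-flipping data poisoning attack against a small Keras classifier, with the fraction of poisoned data as an adjustable parameter. It is a minimal example under assumptions of my own: the MNIST dataset, the model architecture and names such as poison_fraction are illustrative and are not the project's actual code or interface.

```python
# Minimal sketch (assumed example, not the project's actual code): simulate a
# label-flipping data poisoning attack and compare test accuracy on clean vs.
# poisoned training labels. Dataset, architecture and parameter names are
# illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow import keras

def poison_labels(y, poison_fraction, num_classes=10, seed=0):
    """Flip the labels of a randomly chosen fraction of training samples."""
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    n_poison = int(len(y) * poison_fraction)
    idx = rng.choice(len(y), size=n_poison, replace=False)
    # Shift each chosen label by a non-zero random offset so it always changes class.
    y_poisoned[idx] = (y_poisoned[idx] + rng.integers(1, num_classes, size=n_poison)) % num_classes
    return y_poisoned

def build_model():
    """A small fully connected classifier, just for demonstration."""
    model = keras.Sequential([
        keras.layers.Flatten(input_shape=(28, 28)),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# Train once with clean labels and once with 20% of labels flipped, then
# report the resulting test accuracy as a simple robustness metric.
for poison_fraction in (0.0, 0.2):
    y_used = poison_labels(y_train, poison_fraction)
    model = build_model()
    model.fit(x_train, y_used, epochs=3, batch_size=128, verbose=0)
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    print(f"poison_fraction={poison_fraction:.1f}  test accuracy={acc:.3f}")
```

The same pattern generalises to the tool's other models and attacks: the poisoning step and the model construction are parameterised, and the clean and poisoned metrics are compared to gauge the attack's effectiveness.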

Keywords: Cyber Security, Machine Learning, Research Tool

 

 Conference Details

 

Session: Poster Session A at Poster Stand 81

Location: Sir Stanley Clarke Auditorium, Tuesday 7th, 13:30 – 17:00

Markers: Betsy Dayana Marcela Chaparro Rico, Hassan Eshkiki

Course: MSci Computer Science, 3rd Year

Future Plans: I’m continuing my studies