This repository presents experiments with adversarial attacks and basic defense methods for image classification models.
The experiments use the SVHN dataset and a YOLOv8 classification model.
Machine learning models, especially deep neural networks, are highly vulnerable to adversarial examples: inputs with small, deliberately crafted perturbations that flip the model's prediction.
This project explores:
- Implementing adversarial attacks:
  - FGSM (Fast Gradient Sign Method)
  - PGD (Projected Gradient Descent)
- Testing the robustness of a YOLOv8 classifier under these attacks.
- Applying basic defenses, such as adversarial training.
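The two attacks listed above can be sketched in plain PyTorch. This is a minimal illustration, not the repository's exact code: it assumes a classifier that takes inputs scaled to [0, 1], and the helper names `fgsm_attack` / `pgd_attack` are hypothetical (in practice the YOLOv8 classifier would stand in for `model`).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps):
    """One-step FGSM: move x by eps along the sign of the input gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)  # avoids touching parameter grads
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()

def pgd_attack(model, x, y, eps, alpha=None, steps=10):
    """PGD: iterated FGSM with projection back into the L-inf ball of radius eps."""
    alpha = alpha if alpha is not None else eps / 4
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)  # project onto the eps-ball
        x_adv = x_adv.clamp(0, 1)                 # keep valid pixel range
    return x_adv.detach()
```

Both functions return perturbed inputs whose per-pixel deviation from the original never exceeds `eps`, which is what makes the attacks hard to spot visually while still degrading accuracy.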
The goal is to demonstrate how adversarial perturbations can drastically reduce model accuracy, and to evaluate different strategies for mitigating this effect.
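Adversarial training, the basic defense used here, can be sketched as a training step that mixes clean and FGSM-perturbed batches. This is a simplified sketch under assumed settings (FGSM-crafted examples, a 50/50 clean/adversarial loss mix, inputs in [0, 1]); the repository's actual recipe may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def adversarial_train_step(model, optimizer, x, y, eps=0.03):
    """One step of FGSM-based adversarial training on a batch (x, y)."""
    # Craft FGSM examples against the current model weights.
    x_req = x.clone().detach().requires_grad_(True)
    grad, = torch.autograd.grad(F.cross_entropy(model(x_req), y), x_req)
    x_adv = (x_req + eps * grad.sign()).clamp(0, 1).detach()

    # Train on a 50/50 mix of clean and adversarial losses.
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(x), y) \
         + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the adversarial examples are re-crafted against the current weights at every step, the model is continually pushed to be correct inside the eps-ball around each training point, not just at the point itself.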
This project has known limitations and room for improvement:
- A broader set of adversarial attacks (e.g., C&W, DeepFool, AutoAttack).
- More advanced defense methods (e.g., randomized smoothing, certified defenses).
- Cleaner, production-ready code and experiments.
I am aware of these limitations and plan to continue refining and expanding this project in the future.
