Adversarial Attacks in Machine Learning

This repository presents experiments with adversarial attacks and basic defense methods for image classification models.
The work was conducted on the SVHN dataset using a YOLOv8 classification model.

[Figure: example of an adversarial attack]


📌 Project Description

Machine learning models, especially deep neural networks, are highly vulnerable to adversarial examples: inputs with small, carefully crafted perturbations that cause confident misclassifications.
This project explores:

  • Implementing adversarial attacks (see the sketch after this list):
    • FGSM (Fast Gradient Sign Method)
    • PGD (Projected Gradient Descent)
  • Testing the robustness of a YOLOv8 classifier under these attacks.
  • Applying basic defenses, such as adversarial training (a training-loop sketch follows the paragraph below).
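
To make the two attacks concrete, here is a minimal sketch of FGSM and PGD for a generic PyTorch classifier. It assumes the model is an ordinary `torch.nn.Module` that maps image batches in [0, 1] to logits (a YOLOv8 classifier's underlying torch module could be used the same way); the epsilon, step size, and step count are illustrative defaults, not the exact settings used in these experiments.

```python
# Minimal FGSM / PGD sketch for a PyTorch image classifier (illustrative only).
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps=8 / 255):
    """Single-step FGSM: move each pixel by eps in the direction of the loss gradient."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    grad = torch.autograd.grad(loss, images)[0]
    adv = images + eps * grad.sign()
    return adv.clamp(0, 1).detach()

def pgd_attack(model, images, labels, eps=8 / 255, alpha=2 / 255, steps=10):
    """Iterative FGSM with projection back onto the L-infinity ball of radius eps."""
    adv = images.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        grad = torch.autograd.grad(loss, adv)[0]
        adv = adv.detach() + alpha * grad.sign()
        adv = images + (adv - images).clamp(-eps, eps)  # project onto the eps-ball
        adv = adv.clamp(0, 1)                           # keep a valid image
    return adv.detach()
```

PGD is essentially FGSM applied repeatedly with a projection back into the epsilon-ball around the clean image, which is why it is the stronger of the two attacks.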

The goal is to demonstrate how adversarial perturbations can drastically reduce model accuracy and to test strategies for mitigating this effect.
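
The adversarial-training defense mentioned above can be sketched as follows: each training batch is replaced by its PGD-perturbed counterpart before the usual gradient step. The snippet reuses the `pgd_attack` helper from the previous sketch inside a generic PyTorch training loop; it is an illustration, not the exact procedure used in this repository.

```python
# Adversarial training sketch (illustrative only): train on PGD-perturbed batches.
# Assumes `pgd_attack` from the previous sketch and a standard PyTorch classifier.
import torch
import torch.nn.functional as F

def adversarial_training_epoch(model, loader, optimizer, device="cpu"):
    """Run one training epoch in which every batch is replaced by its PGD counterpart."""
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        # Craft adversarial examples against the current model parameters.
        adv_images = pgd_attack(model, images, labels, eps=8 / 255, alpha=2 / 255, steps=7)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(adv_images), labels)
        loss.backward()
        optimizer.step()
```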


⚠️ Limitations & Future Work

This work is still rough and leaves room for improvement:

  • A broader set of adversarial attacks (e.g., CW, DeepFool, AutoAttack).
  • More advanced defense methods (e.g., randomized smoothing, certified defenses).
  • Cleaner, production-ready code and experiments.

I am aware of these limitations and plan to continue refining and expanding this project in the future.
