AURA is a lightweight, modular framework for detecting anomalies in AES-128 encryption operations through multi-modal side-channel analysis. The framework is designed to identify malicious encryption blocks injected into embedded systems by analyzing timing, thermal, and optical side-channels. It supports both direct AES execution timing analysis and image-based signal extraction from optical camera datasets, making it suitable for real-time anomaly detection in embedded systems, SoCs, and FPGAs.
The framework implements three primary detection strategies:
- **Threshold-based detection** – statistical anomaly identification using adaptive thresholds
- **ML-based detection** – machine-learning classifiers (Random Forest) for pattern recognition
- **Hybrid detection** – fusion of both approaches for improved accuracy
- **AES-128 Encryption** – ECB mode via PyCryptodome with timed execution analysis
- **Multi-Modal Feature Extraction**:
  - Timing features (execution time, deltas, rolling statistics)
  - Optical features (mean pixel intensity from images via OpenCV)
  - Fused multi-modal feature vectors for ensemble methods
- **Three Detection Modes**:
  - Threshold Detection – adaptive statistical thresholding (mean + 3-sigma)
  - ML Detection – Random Forest classifier with temporal feature engineering
  - Hybrid Detection – logical OR combination of both methods
- **Multi-Core Parallel Processing** – leverages multiprocessing for high throughput
- **Image-Based Dataset Support** – frame-by-frame optical signal extraction from directories
- **Automated Reporting** – Excel (.xlsx) export with detailed metrics and statistics
- **Real-Time Ready** – low-latency inference designed for embedded deployment
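The timing features listed above can be sketched with pandas. The column names (`delta_prev`, `delta_next`, `rolling_mean`, `rolling_std`) follow the feature list, but the window size of 5 and the fill values are assumptions for illustration, not the framework's exact implementation:

```python
import pandas as pd

def engineer_timing_features(times, window=5):
    """Derive per-sample timing features: deltas to neighboring
    samples and rolling statistics over a short window."""
    df = pd.DataFrame({"exec_time": times})
    df["delta_prev"] = df["exec_time"].diff().fillna(0.0)        # t[i] - t[i-1]
    df["delta_next"] = df["exec_time"].diff(-1).fillna(0.0)      # t[i] - t[i+1]
    df["rolling_mean"] = df["exec_time"].rolling(window, min_periods=1).mean()
    df["rolling_std"] = df["exec_time"].rolling(window, min_periods=1).std().fillna(0.0)
    return df

features = engineer_timing_features([1.0, 1.1, 0.9, 5.0, 1.0])
```

The same transformation applies to the optical channel by substituting per-frame pixel intensity for `exec_time`.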
```
AURA/
├── README.md                      # Project documentation
├── Requirements.txt               # Python dependencies
├── ML_Detect.py                   # Core ML-based anomaly detector (MAIN FRAMEWORK)
├── Threshold_Detect.py            # Core threshold-based detector (MAIN FRAMEWORK)
├── thermal_adapter.py             # Video/image adapter for side-channel signals (MAIN FRAMEWORK)
│
└── Testing/                       # Comprehensive testing and ablation studies
    ├── _utils.py                  # Shared utilities (image loading, feature engineering)
    ├── run_all.py                 # Orchestrator for running all experiments
    │
    ├── timing_threshold.py        # Test: Timing-only threshold detection
    ├── timing_ml.py               # Test: Timing-only ML detection
    │
    ├── optical_threshold.py       # Test: Optical-only threshold detection
    ├── optical_ml.py              # Test: Optical-only ML detection
    │
    ├── timing_optical_threshold.py  # Test: Fused timing+optical threshold
    └── timing_optical_ml.py       # Test: Fused timing+optical ML
```
The three primary files form the main detection framework:
| File | Purpose | Workflow |
|---|---|---|
| ML_Detect.py | ML-based anomaly classification | AES blocks → timing extraction → RF classifier → predictions |
| Threshold_Detect.py | Statistical threshold-based detection | AES blocks → timing measurement → adaptive threshold → binary detection |
| thermal_adapter.py | Video/image to anomaly detector bridge | Video/image input → frame signal extraction → call ML_Detect or Threshold_Detect → report |
These are designed for direct AES timing analysis on live encryption operations or arbitrary signal-based inputs.
Six specialized test scripts for modular sensor evaluation and ablation studies using image datasets:
| Category | Script | Input | Method |
|---|---|---|---|
| Timing Only | timing_threshold.py | Simulated AES + synthetic timing | Threshold-based |
| | timing_ml.py | Simulated AES + synthetic timing | Random Forest |
| Optical Only | optical_threshold.py | Real images (mean pixel intensity) | Threshold-based |
| | optical_ml.py | Real images (mean pixel intensity) | Random Forest |
| Timing + Optical (Fused) | timing_optical_threshold.py | Images + AES simulation (dual-channel) | Threshold-based |
| | timing_optical_ml.py | Images + AES simulation (dual-channel) | Random Forest |
- Python 3.7+
- Virtual environment (recommended: `venv` or `conda`)
- Dependencies: `pycryptodome`, `pandas`, `numpy`, `scikit-learn`, `opencv-python`, `openpyxl`, `psutil`
1. Clone the repository:

   ```bash
   git clone https://github.com/nishant640/AURA.git
   cd AURA
   ```

2. Create and activate a virtual environment:

   ```bash
   python3 -m venv .venv
   source .venv/bin/activate   # On Windows: .venv\Scripts\activate
   ```

3. Install dependencies:

   ```bash
   pip install -r Requirements.txt
   ```
Run timing-based anomaly detection on simulated AES encryption:

```bash
python3 Threshold_Detect.py
```

Prompts (interactive):

- Number of plaintext blocks to encrypt
- Percentage of malicious blocks to inject (0–100)
- Number of CPU cores to use

Output: `aes_anomaly_report.xlsx` – detailed report with timing measurements, detections, and metrics
Run Random Forest classification on AES timing features:

```bash
python3 ML_Detect.py
```

Same interactive prompts as `Threshold_Detect.py`.

Output: `ml_aes_anomaly_report.xlsx` – ML predictions with feature importance rankings
Process optical camera images or video files:

```bash
python3 thermal_adapter.py <path_to_video_or_image>
```

Example:

```bash
python3 thermal_adapter.py "/path/to/optical_camera_footage.mp4"
```

Prompts:

- Detection mode: `threshold`, `ml`, or `hybrid`

Output: `<filename>_thermal_report.xlsx` – frame-by-frame anomaly detection results
Use the Testing suite to evaluate individual sensors or sensor combinations on a directory of optical camera images.
Timing-Only Threshold Detection:

```bash
python3 Testing/timing_threshold.py /path/to/optical_images 20 4
```

- Arg 1: Directory containing image files
- Arg 2: Anomaly injection rate (%, default: 20)
- Arg 3: Number of worker threads (default: 4)
Optical-Only ML Detection:

```bash
python3 Testing/optical_ml.py /path/to/optical_images 20
```

Timing + Optical Fused Threshold:

```bash
python3 Testing/timing_optical_threshold.py /path/to/optical_images 20 4
```

Automatically run all six test variants and generate consolidated metrics:

```bash
python3 Testing/run_all.py /path/to/optical_images 20 experiment_001
```

Arguments:

- `image_directory`: Directory containing optical camera images (.jpg, .png, .bmp, etc.)
- `anomaly_percent`: Injection rate for anomalies (default: 20)
- `run_label`: Experiment identifier (default: timestamp)

Output:

- `reports/experiment_001/experiment_summary.txt` – consolidated comparison table
- `reports/experiment_001/metrics.json` – machine-readable metrics
- `reports/experiment_001/*.log` – individual test logs
Sample Consolidated Comparison:

```
Accuracy Comparison:
Model                        Accuracy   Precision  Recall     F1-Score
──────────────────────────────────────────────────────────────────────
timing_ml                    95.60%     94.20%     96.50%     95.30%
timing_optical_ml            98.10%     97.80%     98.40%     98.10%
optical_ml                   92.30%     91.50%     93.10%     92.30%
timing_optical_threshold     85.40%     86.20%     84.60%     85.40%
timing_threshold             82.10%     80.90%     83.30%     82.10%
optical_threshold            78.50%     77.60%     79.40%     78.50%

Detection Statistics:
Model                        TP      FP      FN      Total
──────────────────────────────────────────────────────────
timing_ml                    191     10      9       210
timing_optical_ml            196     4       4       204
optical_ml                   186     18      14      218
...
```
Purpose: Evaluate pure timing side-channel effectiveness without optical data.

- `timing_threshold.py` – adaptive statistical thresholding on AES execution timing
  - Features: delta_prev, delta_next, rolling_mean, rolling_std
  - Threshold: mean + 3-sigma
  - Best for: resource-constrained environments
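A minimal sketch of the mean + 3-sigma rule, assuming a single global mean and standard deviation over the series (the script may compute these adaptively over a window):

```python
import numpy as np

def threshold_detect(signal, k=3.0):
    """Flag samples exceeding an adaptive mean + k*sigma threshold."""
    x = np.asarray(signal, dtype=float)
    return x > x.mean() + k * x.std()

# 20 normal timings plus one injected anomaly on the last sample
flags = threshold_detect([1.0] * 20 + [9.0])
```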
- `timing_ml.py` – Random Forest trained on temporal timing patterns
  - Features: same as threshold, plus engineered temporal features
  - Model: RF (100 trees, random_state=42)
  - Output: feature importance rankings
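The Random Forest setup can be sketched as below. The hyperparameters (100 trees, `random_state=42`) come from the description above, while the two-cluster synthetic timing data is a stand-in for illustration only:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic stand-in dataset: normal timings near 1.0 ms, anomalies near 3.0 ms
normal = rng.normal(1.0, 0.05, size=(200, 1))
anomalous = rng.normal(3.0, 0.05, size=(50, 1))
X = np.vstack([normal, anomalous])
y = np.array([0] * 200 + [1] * 50)

# Same hyperparameters the test script describes
clf = RandomForestClassifier(n_estimators=100, random_state=42).fit(X, y)
importances = clf.feature_importances_    # basis for the importance ranking
preds = clf.predict([[1.02], [2.95]])
```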
Purpose: Evaluate optical side-channel (pixel intensity) effectiveness on real images.

- `optical_threshold.py` – threshold detection on mean image brightness
  - Features: extracted from OpenCV grayscale images
  - Process: frame_mean > threshold → anomaly
  - Simulates optical glitch signatures
- `optical_ml.py` – Random Forest on optical spatial/temporal patterns
  - Features: delta_prev, delta_next, rolling statistics of pixel intensity
  - Handles: real image variations and noise
Purpose: Evaluate multi-modal fusion for improved detection.

- `timing_optical_threshold.py` – logical OR of independent thresholds
  - Detection: (timing > threshold_t) OR (optical > threshold_o)
  - Advantage: either sensor can trigger an alert
  - Lower false negatives
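The OR-fusion rule can be sketched as follows, assuming each channel applies its own mean + k-sigma threshold as described above:

```python
import numpy as np

def fused_or_detect(timing, optical, k=3.0):
    """Flag a frame if EITHER channel exceeds its own adaptive
    mean + k*sigma threshold."""
    def flags(signal):
        x = np.asarray(signal, dtype=float)
        return x > x.mean() + k * x.std()
    return flags(timing) | flags(optical)

timing = [1.0] * 21               # timing channel looks clean
optical = [100.0] * 20 + [250.0]  # optical spike on the last frame
alerts = fused_or_detect(timing, optical)
```

Here the last frame is flagged on the strength of the optical channel alone, which is why fusion lowers false negatives.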
- `timing_optical_ml.py` – single Random Forest on the combined feature set
  - Features: all timing + all optical features in a unified vector
  - Advantage: ML learns cross-modal interactions
  - Typically highest accuracy
- Expand dataset to include more sensor modalities (event cameras, EMI)
- Explore deeper neural networks (LSTM, Transformer) for temporal modeling
- Implement quantization for edge deployment
- Develop real-time streaming API
- Add support for hardware acceleration (GPU, FPGA)
- Create web dashboard for monitoring
- Expand to other encryption algorithms (RSA, ECC)
This work was supported under the McNair Junior Fellowship and Magellan Scholar Program at the University of South Carolina.
Special thanks to Rye Stahle-Smith for hardware testing and experimental support.
Developed by Nishant Chinnasami
Advisor: Dr. Rasha Karakchi
University of South Carolina