Camera-based AI system that detects occupancy and monitors appliance states (lights, fans, monitors) to prevent energy waste in real time.
- What is WattWatch?
- How It Works – System Overview
- AI Models Used
- Project Structure
- Key Features
- Installation & Setup
- Configuration
- Running the System
- Dashboard (Frontend)
- API Endpoints
- Energy Metrics Explained
- Privacy & Anonymization
- Alert System
- Roboflow Model Training Guide
WattWatch is an AI-powered energy monitoring system for smart buildings, offices, and classrooms. It uses a combination of:
- Computer Vision (YOLOv8) to detect if people are present in a room
- Custom Roboflow ML Models to detect the ON/OFF state of lights, ceiling fans, and monitors
- A real-time React dashboard to visualize room-level energy waste and send alerts
The core idea is simple: if no one is in the room but appliances are still ON → that's energy waste. WattWatch automates this detection, calculates the cost in real time, and alerts facility managers via WhatsApp/SMS.
```
 IP Camera / Webcam ───▶ ┌────────────────────┐
                         │  FastAPI Backend   │
                         │   (api/main.py)    │
                         └─────────┬──────────┘
                                   │ Each Frame
                 ┌─────────────────┼─────────────────┐
                 ▼                 ▼                 ▼
        ┌────────────────┐ ┌───────────────┐ ┌─────────────────┐
        │   YOLOv8n.pt   │ │ Roboflow API  │ │ Privacy Filter  │
        │  (Person Det.) │ │ (3 ML Models) │ │   (Face Blur)   │
        └────────┬───────┘ └───────┬───────┘ └────────┬────────┘
                 │                 │                  │
                 ▼                 ▼                  │
           Person Count    Light/Fan/Monitor         │
                             ON/OFF status           │
                 │                 │                  │
                 └────────┬────────┴──────────────────┘
                          ▼
             ┌────────────────────────┐
             │   Room State Engine    │
             │      AlertManager      │
             │    MicrozoneTracker    │
             └────────────┬───────────┘
                          ▼
          ┌────────────────────────────────┐
          │ WebSocket Stream to Dashboard  │
          │    React (Vite) Frontend       │
          └────────────────────────────────┘
```
- Frame Capture – from an IP camera stream or webcam
- Person Detection – YOLOv8 detects humans → count of people
- Appliance Detection – Roboflow API checks whether Light/Fan/Monitor is ON or OFF (every N frames to reduce cost)
- Privacy Anonymization – faces are auto-blurred using Haar cascade + pixelation before storage
- Microzone Tracking – frame split into a 4×4 grid, per-zone occupancy tracked for heatmaps
- Waste Detection – `person_count == 0` AND any appliance is `ON` → "WASTE" state
- AlertManager – debounced alerts sent via Twilio SMS / WhatsApp after a configurable delay
- WebSocket Push – annotated frame + all metadata streamed to the dashboard in real time
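The waste-detection step above boils down to a small pure function. The sketch below is illustrative only (the name `room_state` and the "IDLE"/"OCCUPIED" labels are my own; the real engine lives in the FastAPI backend):

```python
def room_state(person_count: int, appliances: dict) -> str:
    """Classify a room from the person count and appliance ON/OFF states.

    `appliances` maps appliance name -> "ON"/"OFF", i.e. the outputs of the
    light/fan/monitor checks described above. Simplified sketch, not the
    actual backend code.
    """
    any_on = any(state == "ON" for state in appliances.values())
    if person_count > 0:
        return "OCCUPIED"
    return "WASTE" if any_on else "IDLE"


print(room_state(0, {"light": "ON", "fan": "OFF", "monitor": "OFF"}))  # WASTE
print(room_state(2, {"light": "ON", "fan": "ON", "monitor": "ON"}))    # OCCUPIED
```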
WattWatch uses 4 models in total. Here is a complete breakdown:
| Property | Value |
|---|---|
| Framework | Ultralytics YOLOv8 |
| Model File | yolov8n.pt (also yolov8s.pt available) |
| Task | Object Detection – Person class only (class_id = 0 in the COCO dataset) |
| Config Key | `config.yaml` → `model.name` |
| Default Confidence | 0.25 |
| Where it runs | Locally on your CPU/GPU |
| Purpose | Counts how many people are in the room |
`yolov8n.pt` is the currently active model (nano variant – fastest). `yolov8s.pt` (small variant) is also present for higher accuracy at the cost of speed.
How to switch: edit `config.yaml`:

```yaml
model:
  name: yolov8s.pt  # switch to the small model for better accuracy
```

Code location: `src/detector.py` → `YOLODetector` class
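Person counting with Ultralytics can be sketched as below. The `YOLO(...)` call and the `classes`/`conf` arguments follow the Ultralytics public API; the helper `count_people` and the image path `room.jpg` are illustrative assumptions, not code from this repo:

```python
def count_people(class_ids, confs, conf_thres=0.25):
    """Count detections of COCO class 0 (person) above the confidence
    threshold. Hypothetical helper for illustration."""
    return sum(1 for c, p in zip(class_ids, confs) if c == 0 and p >= conf_thres)


if __name__ == "__main__":
    from ultralytics import YOLO  # pip install ultralytics

    model = YOLO("yolov8n.pt")  # same weights file referenced in config.yaml
    # Restrict inference to the person class, matching the detector's setup.
    results = model("room.jpg", classes=[0], conf=0.25)
    boxes = results[0].boxes
    print(count_people(boxes.cls.tolist(), boxes.conf.tolist()))
```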
| Property | Value |
|---|---|
| Platform | Roboflow – Hosted Serverless Inference |
| Model ID | coms-room-light-63vyv/1 |
| Task | Classification / Detection – is the light ON or OFF? |
| Training Data | Custom-labeled room images with lights on/off (trained on Roboflow) |
| API Endpoint | https://serverless.roboflow.com |
| Config Key | `config.yaml` → `appliance.roboflow.light_model` |
| Purpose | Detects if the ceiling/room light is switched ON or OFF |
Response parsing logic (from `src/appliance_status.py`):

- If the predicted class contains `"on"`, `"light"`, `"glow"`, `"lamp"`, `"bright"`, or `"tube"` → Status: ON
- If the class contains `"off"` → Status: OFF
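The keyword rules above can be sketched as a small parser. Note that the OFF check must run first, since a class like `light-off` also contains the ON keyword `light`. The function name and the response shape are my assumptions (Roboflow hosted detection responses normally carry a `predictions` list with `class` and `confidence` fields):

```python
ON_KEYWORDS = ("on", "light", "glow", "lamp", "bright", "tube")


def parse_light_status(response: dict) -> str:
    """Map a Roboflow prediction response to "ON"/"OFF"/"UNKNOWN".

    Assumes the usual hosted-inference shape:
    {"predictions": [{"class": "light-on", "confidence": 0.91}, ...]}
    """
    preds = response.get("predictions") or []
    if not preds:
        return "UNKNOWN"
    # Take the highest-confidence prediction and match its class name.
    cls = max(preds, key=lambda p: p.get("confidence", 0))["class"].lower()
    if "off" in cls:        # check OFF first: "light-off" also contains "light"
        return "OFF"
    if any(k in cls for k in ON_KEYWORDS):
        return "ON"
    return "UNKNOWN"
```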
| Property | Value |
|---|---|
| Platform | Roboflow – Hosted Serverless Inference |
| Model ID | ceiling-fan-detection-epfsk/1 |
| Task | Detection – is the ceiling fan spinning (ON) or stopped (OFF)? |
| Training Data | Custom-labeled ceiling fan images (trained on Roboflow) |
| API Endpoint | https://serverless.roboflow.com |
| Config Key | `config.yaml` → `appliance.roboflow.fan_model` |
| Purpose | Detects the rotational state of ceiling fans |
Response parsing logic:
- If the class contains `"on"`, `"fan"`, `"spinning"`, `"ceiling"`, or `"rotor"` → Status: ON
- If the class contains `"off"` → Status: OFF
| Property | Value |
|---|---|
| Platform | Roboflow – Hosted Serverless Inference |
| Model ID | monitor_detection-uj19t-zqnlq/1 |
| Task | Detection – is the monitor/screen turned ON or OFF? |
| Training Data | Custom-labeled monitor images (trained on Roboflow) |
| API Endpoint | https://serverless.roboflow.com |
| Config Key | `config.yaml` → `appliance.roboflow.monitor_model` |
| Purpose | Detects if desktop monitors are left powered on in empty rooms |
Response parsing logic:
- If the class contains `"on"`, `"active"`, `"display"`, `"monitor"`, `"screen"`, or `"power"` → Status: ON
- Otherwise → Status: OFF
| Model | Active? | Notes |
|---|---|---|
| `yolov8n.pt` | ✅ Yes | Configured in `config.yaml`, runs locally |
| `yolov8s.pt` | ❌ No | Available on disk but not selected |
| Roboflow Light | ✅ Yes | Called every `frame_skip=20` frames |
| Roboflow Fan | ✅ Yes | Called every `frame_skip=20` frames |
| Roboflow Monitor | ✅ Yes | Called every `frame_skip=20` frames |
| `MLApplianceDetector` (MobileNetV2) | ❌ No | Fallback only; requires `models/appliance_classifier.pt`, which is not present |
Summary: the system currently uses YOLOv8n for person detection and the 3 Roboflow hosted models for appliance status. All 3 Roboflow calls are made in parallel (via `ThreadPoolExecutor`) to minimize latency.
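The parallel fan-out can be sketched like this, with dummy check functions standing in for the three hosted-model calls (function names here are illustrative, not from the codebase). Because the Roboflow calls are network-bound, running them concurrently makes the total latency roughly that of the slowest single call rather than the sum of all three:

```python
from concurrent.futures import ThreadPoolExecutor


def check_appliances(frame, checkers: dict) -> dict:
    """Run all appliance checks concurrently and collect their results.

    `checkers` maps appliance name -> callable(frame) -> "ON"/"OFF".
    """
    with ThreadPoolExecutor(max_workers=len(checkers)) as pool:
        futures = {name: pool.submit(fn, frame) for name, fn in checkers.items()}
        # .result() blocks until each call finishes.
        return {name: fut.result() for name, fut in futures.items()}


# Dummy stand-ins for the three hosted-model calls:
checkers = {
    "light":   lambda frame: "ON",
    "fan":     lambda frame: "OFF",
    "monitor": lambda frame: "ON",
}
print(check_appliances(None, checkers))  # {'light': 'ON', 'fan': 'OFF', 'monitor': 'ON'}
```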
```
watt-watch/
│
├── main.py                       # CLI entry point (detect/live/benchmark/calibrate)
├── config.yaml                   # Master configuration file
├── requirements.txt              # Python dependencies
├── setup.py                      # Package setup
├── yolov8n.pt                    # YOLOv8 Nano model (person detection) – ACTIVE
├── yolov8s.pt                    # YOLOv8 Small model (alternative, not active)
│
├── src/                          # Core Python source code
│   ├── __init__.py
│   ├── detector.py               # YOLOv8 person detection wrapper
│   ├── tracker.py                # Centroid-based multi-person tracker
│   ├── appliance_status.py       # Roboflow API calls (Light/Fan/Monitor)
│   ├── appliance_detector.py     # Rule-based fallback detector (brightness/edge analysis)
│   ├── ml_appliance_detector.py  # MobileNetV2 local ML detector (optional fallback)
│   ├── alert_manager.py          # Waste event tracking + Twilio SMS/WhatsApp alerts
│   ├── microzone.py              # 4×4 grid zone tracking + heatmap generation
│   ├── privacy_filter.py         # Face detection (Haar cascade) + anonymization
│   ├── intensity_calibrator.py   # Room brightness threshold calibration
│   ├── smoothing.py              # Temporal smoothing for detection signals
│   ├── preprocessing.py          # Frame preprocessing utilities
│   ├── model_utils.py            # Model download and path utilities
│   ├── mqtt_manager.py           # MQTT publish/subscribe for IoT integration
│   ├── utils.py                  # FPS counter, video extractor, JSON logger
│   └── database/                 # SQLite database layer
│
├── api/
│   └── main.py                   # FastAPI backend (~75KB) – WebSocket, REST API
│
├── dashboard-vite/               # React + Vite frontend dashboard
│   ├── src/
│   │   ├── App.jsx               # Main dashboard component (~920 lines)
│   │   ├── App.css               # Dashboard styling
│   │   └── main.jsx
│   ├── package.json
│   └── vite.config.js
│
├── scripts/
│   ├── download_samples.py       # Download sample test videos
│   ├── extract_frames.py         # Extract frames from videos
│   └── migrate_json_to_sqlite.py
│
├── configs/                      # Additional configuration files
├── data/                         # Test clips and raw data
│   └── clips/                    # occupied.mp4, empty.mp4, quiet-reader.mp4
├── output/                       # Detection results, JSON logs
├── logs/                         # FPS logs, appliance debug logs
├── models/                       # Optional local ML model files
├── docs/                         # Documentation
├── tests/                        # Unit tests
│
├── ENERGY_METRICS.md             # Detailed energy calculation documentation
├── test_detection.py             # Manual detection tests
└── test_appliance.py             # Manual appliance detection tests
```
| Feature | Description |
|---|---|
| Person Detection | YOLOv8n detects and counts people in real time |
| Light Detection | Roboflow model classifies room lights as ON/OFF |
| Fan Detection | Roboflow model detects spinning/stopped ceiling fans |
| Monitor Detection | Roboflow model detects powered-on/off monitors |
| Energy Waste Alerts | SMS/WhatsApp alerts when a room is empty but appliances are ON |
| Privacy First | Automatic face anonymization (pixelation/blur) before any storage |
| Microzone Heatmap | 4×4 grid zone tracking shows where people congregate |
| Cost Calculation | Real-time cost/hour and cumulative waste cost in ₹ or $ |
| Calibration Studio | Per-room brightness threshold tuning via visual dashboard |
| Multi-Room Support | Monitor up to 2 IP camera rooms simultaneously |
| SQLite Logging | All waste events persisted in a SQLite database |
| WebSocket Streaming | Live annotated frames pushed to the dashboard |
- Python 3.9 or higher
- Node.js 18+ (for dashboard)
- A Roboflow account with API key
- (Optional) CUDA GPU for faster YOLO inference
```bash
git clone <your-repo-url>
cd watt-watch
pip install -r requirements.txt
```

The key packages installed:

```
ultralytics>=8.0.0      # YOLOv8 (person detection)
opencv-python>=4.8.0    # Video processing
torch>=2.0.0            # Deep learning backend
inference-sdk>=1.0.0    # Roboflow API client
fastapi>=0.104.0        # Backend API server
uvicorn>=0.24.0         # ASGI server
websockets>=12.0        # Real-time streaming
pyyaml>=6.0             # Config file parsing
```
Open `config.yaml` and set your Roboflow API key:

```yaml
appliance:
  roboflow:
    api_key: YOUR_ROBOFLOW_API_KEY_HERE
    light_model: coms-room-light-63vyv/1
    fan_model: ceiling-fan-detection-epfsk/1
    monitor_model: monitor_detection-uj19t-zqnlq/1
```

To enable alerts, configure Twilio as well:

```yaml
alerts:
  twilio:
    enabled: true
    account_sid: YOUR_TWILIO_ACCOUNT_SID
    auth_token: YOUR_TWILIO_AUTH_TOKEN
    from_number: '+1xxxxxxxxxx'
    to_number: '+91xxxxxxxxxx'
```

Install the dashboard dependencies:

```bash
cd dashboard-vite
npm install
```

All system behavior is controlled by `config.yaml`. Key sections:
```yaml
# ── Model selection ──────────────────────────────────
model:
  name: yolov8n.pt              # Switch to yolov8s.pt for higher accuracy
  confidence_threshold: 0.25

# ── Detection settings ───────────────────────────────
detection:
  frame_skip: 1                 # Process every frame (increase for speed)
  min_confidence: 0.25

# ── Appliance wattage for cost calculation ───────────
appliance:
  enabled: true
  frame_skip: 20                # Run Roboflow every 20 frames
  wattage:
    light: 40                   # Watts per light bulb
    ceiling_fan: 65             # Watts per ceiling fan
    monitor: 35                 # Watts per monitor
  electricity_rate: 0.12        # USD per kWh
  electricity_rate_inr: 6.5     # INR per kWh

# ── Alert debouncing ─────────────────────────────────
alerts:
  initial_delay_seconds: 60     # Wait 60s before first alert
  repeat_interval_seconds: 600  # Repeat alert every 10 min

# ── Privacy settings ─────────────────────────────────
privacy:
  enabled: true
  blur_method: pixelate         # Options: pixelate, gaussian, solid
  blur_level: 99

# ── Microzone grid ───────────────────────────────────
microzone:
  enabled: true
  rows: 4
  cols: 4
  decay: 0.98                   # Heatmap decay factor
```

Process a video file:
```bash
python main.py detect data/test_clip.mp4
```

Run live webcam detection:

```bash
python main.py live
```

Run live on a specific camera:

```bash
python main.py live --camera 0
```

Run a benchmark on the test clips:

```bash
python main.py benchmark
```

Run intensity calibration on a room:

```bash
python main.py calibrate data/test_clip.mp4 --room classroom_1 --samples 30
```

Check calibration status:

```bash
python main.py calibrate --status
```

Process a single image:

```bash
python main.py detect test_img.jpg --output result.jpg
```

Step 1: Start the FastAPI backend

```bash
cd api
uvicorn main:app --host 0.0.0.0 --port 8000 --reload
```

Backend runs at: http://localhost:8000
API docs available at: http://localhost:8000/docs

Step 2: Start the React dashboard

```bash
cd dashboard-vite
npm run dev
```

Dashboard runs at: http://localhost:5173
Step 3: Connect a camera
In the dashboard, enter your IP camera stream URL (e.g., http://192.168.0.154:8080/video) and click CONNECT.
The dashboard (built with React + Vite) has 5 tabs:
- Live video feed from up to 2 IP cameras
- Person count, Light/Fan/Monitor status displayed per room
- WASTE_DETECTED alert banner when room is empty with appliances ON
- Privacy mode toggle (GHOST_MODE) – enables/disables face blur
- Real-time energy load and cumulative waste cost
- Annual energy projections – kWh/day, savings in INR/year, CO₂/year
- Last 30 days savings report
- Per-room breakdown with cost and CO₂ metrics
- Privacy measures status (face anonymization, data retention)
- Stakeholder compliance commitments
- Data retention policy overview
- Visual real-time brightness meter for selected room
- Dark / Medium threshold sliders for day and night modes
- Drag sliders to tune thresholds and commit changes to `config.yaml`
- Shows classification: DARK / MEDIUM / BRIGHT based on the live feed
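The DARK / MEDIUM / BRIGHT classification amounts to a mean-intensity check against two thresholds. A minimal sketch, assuming a grayscale frame as a 2-D list of 0-255 values; the threshold numbers here are placeholders, not the calibrated per-room values:

```python
def classify_brightness(gray_frame, dark_thresh=60, medium_thresh=140):
    """Classify a grayscale frame by its mean pixel intensity.

    Thresholds are illustrative; the real values come from the per-room
    calibration stored in config.yaml.
    """
    total = sum(sum(row) for row in gray_frame)
    pixels = sum(len(row) for row in gray_frame)
    mean = total / pixels
    if mean < dark_thresh:
        return "DARK"
    return "MEDIUM" if mean < medium_thresh else "BRIGHT"


print(classify_brightness([[30, 40], [20, 50]]))      # DARK (mean 35)
print(classify_brightness([[200, 220], [210, 230]]))  # BRIGHT (mean 215)
```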
- Browse the SQLite database schema
- View raw table data (waste events, detection logs)
- Export and inspect historical energy waste records
| Method | Endpoint | Description |
|---|---|---|
| POST | `/api/camera/connect` | Connect a room camera (start streaming) |
| POST | `/api/camera/disconnect` | Disconnect a room camera |
| WS | `/ws/stream/{room_id}` | WebSocket for live frame streaming |
| GET | `/api/energy/metrics` | Current energy metrics per room |
| GET | `/api/energy/dashboard` | Annual projections and 30-day summary |
| GET | `/api/alerts/events` | Recent waste alert events |
| GET | `/api/alerts/status` | Room status + waste duration |
| GET | `/api/calibration` | Get current threshold calibration |
| POST | `/api/calibration` | Update brightness thresholds |
| GET | `/api/privacy/assurance` | Privacy compliance report |
| GET | `/api/database/info` | Database statistics |
| GET | `/api/database/schema` | Database table schema |
| GET | `/api/database/rows/{table}` | Browse table rows |
```
estimated_watts   = (40 W if Light is ON) + (65 W if Fan is ON) + (35 W if Monitor is ON)
cost_per_hour     = (estimated_watts / 1000) × electricity_rate      # in USD
cost_per_hour_inr = (estimated_watts / 1000) × 6.5                   # in INR
cumulative_cost   = cost_per_hour × (waste_duration_seconds / 3600)

is_waste = (person_count == 0) AND (light == "ON" OR fan == "ON" OR monitor == "ON")

kwh_per_day     = estimated_watts × 24 / 1000
inr_per_year    = kwh_per_day × 365 × electricity_rate_inr
co2_per_year_kg = kwh_per_day × 365 × co2_factor   # co2_factor = 0.71 kg/kWh
```
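The formulas translate directly into code. A sketch (the function name and return shape are my own; the wattage and rate constants mirror the `config.yaml` defaults shown earlier):

```python
WATTAGE = {"light": 40, "fan": 65, "monitor": 35}  # watts, per config.yaml
CO2_FACTOR = 0.71                                  # kg CO2 per kWh


def energy_metrics(statuses: dict, waste_seconds: float,
                   rate_usd=0.12, rate_inr=6.5) -> dict:
    """Compute the dashboard's energy metrics from appliance ON/OFF states."""
    watts = sum(WATTAGE[a] for a, s in statuses.items() if s == "ON")
    cost_per_hour = watts / 1000 * rate_usd
    kwh_per_day = watts * 24 / 1000
    return {
        "estimated_watts": watts,
        "cost_per_hour": cost_per_hour,
        "cost_per_hour_inr": watts / 1000 * rate_inr,
        "cumulative_cost": cost_per_hour * waste_seconds / 3600,
        "kwh_per_day": kwh_per_day,
        "inr_per_year": kwh_per_day * 365 * rate_inr,
        "co2_per_year_kg": kwh_per_day * 365 * CO2_FACTOR,
    }


m = energy_metrics({"light": "ON", "fan": "ON", "monitor": "OFF"}, waste_seconds=3600)
print(m["estimated_watts"])  # 105 (40 W light + 65 W fan)
```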
See ENERGY_METRICS.md for the complete calculation documentation.
WattWatch is designed to be privacy-first in compliance with institutional requirements:
- Haar cascade face detection runs every N frames
- Detected faces are pixelated (or Gaussian-blurred) with large padding to obscure the entire head region
- Raw images are NEVER stored by default (`privacy.storage.save_raw: false`)
- Only anonymized thumbnails are saved (as alert evidence)
- All processing happens locally – no raw video leaves the machine
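The pixelation itself is just block-averaging over the detected face region. A NumPy sketch of that idea (not the actual `src/privacy_filter.py` code; in the real pipeline the `(x, y, w, h)` box comes from OpenCV's Haar cascade face detector):

```python
import numpy as np


def pixelate_region(img: np.ndarray, x, y, w, h, blocks=12) -> np.ndarray:
    """Pixelate img[y:y+h, x:x+w] by averaging over a blocks×blocks grid.

    More blocks -> finer pixelation, matching the `pixelate_blocks` setting.
    Returns a new image; the input is left untouched.
    """
    out = img.copy()
    roi = out[y:y + h, x:x + w]
    bh, bw = max(1, h // blocks), max(1, w // blocks)
    for by in range(0, h, bh):
        for bx in range(0, w, bw):
            block = roi[by:by + bh, bx:bx + bw]
            # Replace every pixel in the block with the block's mean color.
            block[:] = block.mean(axis=(0, 1), keepdims=True).astype(img.dtype)
    return out
```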
```yaml
privacy:
  blur_method: pixelate     # pixelate / gaussian / solid
  pixelate_blocks: 12       # More blocks = finer pixelation
  blur_level: 99            # For gaussian mode
  skip_frames: 3            # Re-detect faces every 3 frames
  storage:
    save_raw: false         # NEVER store raw video
    save_anonymized: false  # Only enable for auditing
```

The AlertManager watches each room for waste conditions:
- Waste detected → starts a timer
- After `initial_delay_seconds` (default: 60 s) → fires the first alert
- If waste continues, repeats every `repeat_interval_seconds` (default: 600 s = 10 min)
- When the room is occupied or appliances are OFF → resets the timer
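The debounce behaviour above can be sketched as a small state machine (an illustration, not the actual `src/alert_manager.py`; the clock is passed in as a parameter so the logic is easy to test):

```python
class AlertDebouncer:
    """Debounce waste alerts: first alert after `initial_delay` seconds of
    continuous waste, then one repeat every `repeat_interval` seconds."""

    def __init__(self, initial_delay=60, repeat_interval=600):
        self.initial_delay = initial_delay
        self.repeat_interval = repeat_interval
        self.waste_since = None  # when the current waste episode started
        self.last_alert = None   # when we last fired an alert

    def update(self, is_waste: bool, now: float) -> bool:
        """Return True when an alert should fire at time `now` (seconds)."""
        if not is_waste:                  # occupied or appliances OFF: reset
            self.waste_since = self.last_alert = None
            return False
        if self.waste_since is None:      # waste just started: arm the timer
            self.waste_since = now
        if self.last_alert is None:
            if now - self.waste_since >= self.initial_delay:
                self.last_alert = now
                return True
        elif now - self.last_alert >= self.repeat_interval:
            self.last_alert = now
            return True
        return False


d = AlertDebouncer()
print([d.update(True, t) for t in (0, 30, 60, 120, 660)])
# [False, False, True, False, True]
```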
- Twilio SMS → text message to the facility manager
- Twilio WhatsApp → WhatsApp template message with room name and duration
- SQLite database → event persisted to `data/wattwatch.db`
- JSON fallback → events saved to `output/waste_events.json`
Alert message format:

```
⚠️ WATTWATCH ALERT
Energy waste detected in Room 101!
Duration: 5.2 mins
Lights: ON, Fans: ON, Mon: OFF
Please check the facility.
```
The 3 Roboflow models (light, fan, monitor) were trained using Roboflow's platform. Here's how they were set up:
1. Create a Roboflow account at app.roboflow.com
2. Create a new project → select `Object Detection` or `Classification`
3. Upload images:
   - For the light model: collect images of your room with the light ON and OFF
   - For the fan model: collect images of ceiling fans spinning (ON) and still (OFF)
   - For the monitor model: collect images of monitors powered ON and OFF
4. Annotate → draw bounding boxes and assign class labels:
   - Light model classes: `light-on`, `light-off` (or similar)
   - Fan model classes: `fan-on`, `fan-off`
   - Monitor model classes: `monitor-on`, `monitor-off`
5. Train → use Roboflow's auto-train feature (YOLOv8 recommended)
6. Get the model ID → from the Roboflow dashboard, copy it in `workspace/project/version` format
7. Update `config.yaml`:

```yaml
appliance:
  roboflow:
    api_key: YOUR_API_KEY
    light_model: YOUR-WORKSPACE/YOUR-LIGHT-PROJECT/1
    fan_model: YOUR-WORKSPACE/YOUR-FAN-PROJECT/1
    monitor_model: YOUR-WORKSPACE/YOUR-MONITOR-PROJECT/1
```

| Appliance | Roboflow Model ID |
|---|---|
| Light | coms-room-light-63vyv/1 |
| Ceiling Fan | ceiling-fan-detection-epfsk/1 |
| Monitor | monitor_detection-uj19t-zqnlq/1 |
Tip: The more diverse your training images (different rooms, lighting conditions, angles), the more accurate your model will be.
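Once trained, a hosted model can be queried with the `inference-sdk` client listed in `requirements.txt`. The `InferenceHTTPClient` constructor and `infer()` call follow the SDK's documented API; `best_prediction`, the image path, and the response shape shown in its docstring are illustrative assumptions:

```python
def best_prediction(result: dict):
    """Return the highest-confidence prediction from a Roboflow response,
    or None if nothing was detected. Hypothetical helper, assuming the
    usual shape: {"predictions": [{"class": ..., "confidence": ...}, ...]}.
    """
    preds = result.get("predictions") or []
    return max(preds, key=lambda p: p.get("confidence", 0)) if preds else None


if __name__ == "__main__":
    from inference_sdk import InferenceHTTPClient  # pip install inference-sdk

    client = InferenceHTTPClient(
        api_url="https://serverless.roboflow.com",
        api_key="YOUR_ROBOFLOW_API_KEY",
    )
    # "frame.jpg" is a placeholder path; a numpy frame also works.
    result = client.infer("frame.jpg", model_id="coms-room-light-63vyv/1")
    print(best_prediction(result))
```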
Test person detection on a single image:

```bash
python test_detection.py
```

Test appliance detection (light/fan) on a test image:

```bash
python test_appliance.py
```

Run detection with a max-frames limit:

```bash
python main.py detect data/clips/occupied.mp4 --max-frames 100
```

| File | Contents |
|---|---|
| `output/detections.json` | Per-frame detection results (JSON) |
| `output/waste_events.json` | Waste alert event log (JSON) |
| `output/appliance_status.json` | Appliance ON/OFF history per frame |
| `output/benchmark_results.json` | Benchmark test results |
| `logs/fps.log` | Frame-by-frame FPS log |
| `logs/appliance_debug.log` | Raw Roboflow API response debug log |
| `data/wattwatch.db` | SQLite database (all events + detections) |
| `data/alerts/*.jpg` | Anonymized thumbnails for waste events |
- Fork the repository
- Create a feature branch: `git checkout -b feature/my-feature`
- Commit changes: `git commit -m 'Add my feature'`
- Push: `git push origin feature/my-feature`
- Open a Pull Request
This project is licensed under the MIT License.
- Ultralytics YOLOv8 – person detection backbone
- Roboflow – model training platform and inference API
- Twilio – SMS and WhatsApp alerting
- FastAPI – high-performance Python backend
- React + Vite – fast frontend tooling