Collect live JavaScript file URLs from target domains using multiple reconnaissance tools.
| Tool | Role |
|---|---|
| gau | Fetches known URLs from AlienVault OTX, the Wayback Machine, and Common Crawl |
| katana | Actively crawls each domain with JS parsing enabled |
| httpx | Filters for live URLs returning 200 OK |
```bash
# Install Go tools
go install github.com/lc/gau/v2/cmd/gau@latest
go install github.com/projectdiscovery/katana/cmd/katana@latest
go install github.com/projectdiscovery/httpx/cmd/httpx@latest
```

```bash
# Single domain
python3 js_grabber.py -d target.com -o output.txt

# Domain list
python3 js_grabber.py -dL domains.txt -o output.txt
```

- Runs gau and katana in parallel against each domain
- Filters output to `.js` files only
- Deduplicates all collected URLs
- Runs httpx to keep only live URLs returning 200 OK
- Saves the clean, deduplicated results to the output file
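The steps above can be sketched in Python. This is illustrative only: `filter_js_urls`, `collect`, and `keep_live` are hypothetical helpers, not js_grabber.py's actual internals, and the sketch assumes `gau`, `katana`, and `httpx` are installed on `PATH`.

```python
import re
import subprocess

# Match URLs ending in .js, with an optional query string (app.js?v=1).
JS_URL = re.compile(r"\.js(\?.*)?$", re.IGNORECASE)

def filter_js_urls(urls):
    """Keep only .js URLs and drop duplicates, preserving first-seen order."""
    seen, out = set(), []
    for url in urls:
        url = url.strip()
        if url and JS_URL.search(url) and url not in seen:
            seen.add(url)
            out.append(url)
    return out

def collect(domain):
    """Run gau and katana for one domain and merge their output.
    (Hypothetical orchestration; assumes both tools are on PATH.)"""
    urls = []
    for cmd in (["gau", domain], ["katana", "-u", domain, "-silent"]):
        proc = subprocess.run(cmd, capture_output=True, text=True)
        urls.extend(proc.stdout.splitlines())
    return filter_js_urls(urls)

def keep_live(urls):
    """Pipe candidate URLs through httpx, keeping only 200 OK responses."""
    proc = subprocess.run(
        ["httpx", "-silent", "-mc", "200"],
        input="\n".join(urls), capture_output=True, text=True,
    )
    return proc.stdout.splitlines()
```

The dedupe-before-httpx ordering matters: gau and katana overlap heavily, so deduplicating first keeps the liveness probe from hitting the same URL twice.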
A text file with one live JS URL per line, no duplicates:

```
https://target.com/static/app-9f3a2b.js
https://target.com/assets/vendor-chunk.js
https://cdn.target.com/js/main.bundle.js
```