Automation & Scaling
Let tools do the repetitive work while you focus on finding bugs
What You'll Discover
🎯 Why This Matters
The most successful hunters work efficiently, not exhaustively. Automated recon runs while you sleep. Continuous monitoring catches new assets before competitors see them. Scripting handles repetitive tests across hundreds of endpoints. Automation lets you scale your efforts beyond what manual testing allows.
🔍 What You'll Learn
- Automated reconnaissance pipelines (and how each part works)
- Continuous subdomain monitoring to catch new assets first
- Notification systems that alert you to findings
- Foundational scripting concepts for testing
- Nuclei for vulnerability scanning at scale
- Responsible automation practices
🚀 Your First Win
By the end of this chapter, you'll have an automated recon pipeline that discovers assets while you focus on actual testing - and you'll understand every line of code in it.
🔧 Your First Automated Pipeline
Chain tools together for automated discovery. Create this script and run it:
#!/bin/bash
# recon.sh - Automated reconnaissance pipeline
# Usage: ./recon.sh target.com
# --- WHAT THIS SCRIPT DOES ---
# 1. Takes a domain as input
# 2. Finds all subdomains
# 3. Checks which are live
# 4. Gets historical URLs
# 5. Saves everything to organized files
# --- THE CODE ---
TARGET=$1 # $1 means "first argument passed to script"
# Safety check: Make sure a target was provided
if [ -z "$TARGET" ]; then
    echo "Usage: ./recon.sh target.com"
    exit 1 # Exit with error code
fi
# Create output directory for this target
OUTPUT_DIR="./recon/$TARGET"
mkdir -p "$OUTPUT_DIR" # -p creates parent directories if needed
echo "[*] Starting recon for $TARGET"
echo "[*] Results will be saved to $OUTPUT_DIR"
# Step 1: Subdomain enumeration
echo "[+] Finding subdomains..."
subfinder -d "$TARGET" -silent -o "$OUTPUT_DIR/subdomains.txt"
SUBDOMAIN_COUNT=$(wc -l < "$OUTPUT_DIR/subdomains.txt")
echo " Found $SUBDOMAIN_COUNT subdomains"
# Step 2: Probe for live hosts
echo "[+] Checking which hosts are alive..."
httpx -l "$OUTPUT_DIR/subdomains.txt" -silent -o "$OUTPUT_DIR/live_hosts.txt"
LIVE_COUNT=$(wc -l < "$OUTPUT_DIR/live_hosts.txt")
echo " Found $LIVE_COUNT live hosts"
# Step 3: Get historical URLs from the Wayback Machine
echo "[+] Fetching historical URLs..."
echo "$TARGET" | waybackurls > "$OUTPUT_DIR/wayback_urls.txt"
WAYBACK_COUNT=$(wc -l < "$OUTPUT_DIR/wayback_urls.txt")
echo " Found $WAYBACK_COUNT historical URLs"
# Step 4: Summary
echo ""
echo "=== RECON COMPLETE ==="
echo "Target: $TARGET"
echo "Subdomains: $SUBDOMAIN_COUNT"
echo "Live hosts: $LIVE_COUNT"
echo "Wayback: $WAYBACK_COUNT URLs"
echo "Output: $OUTPUT_DIR/"
To use this script:
# 1. Save the script as recon.sh
# 2. Make it executable:
chmod +x recon.sh
# 3. Run it:
./recon.sh target.com
# Results appear in ./recon/target.com/
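Based on the script above, each run leaves exactly three result files behind:
recon/target.com/
├── subdomains.txt   (step 1: subfinder output)
├── live_hosts.txt   (step 2: httpx output)
└── wayback_urls.txt (step 3: waybackurls output)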
Skills You'll Master
- Shell Scripting: Write bash scripts that automate repetitive tasks
- Pipeline Design: Chain tools together into efficient workflows
- Continuous Monitoring: Set up automated scans that run on a schedule
- Alert Integration: Get notifications when automation finds something
Understanding Automation in Bug Bounty
"Automation handles quantity. You handle quality."
Why Automation Gives You an Edge
Consider this scenario: A company adds a new subdomain at 2 AM. A hunter with continuous monitoring catches it at 2:05 AM and starts testing. A hunter without automation might not discover it for days - by then, other hunters have already found the easy bugs.
Automation advantages:
- Speed: Be first to test new assets
- Coverage: Test hundreds of subdomains while you sleep
- Consistency: Never forget to check something
- Time savings: Spend time testing, not running repetitive commands
What to Automate vs What to Do Manually
Automate these (repetitive, scalable):
- Subdomain enumeration and monitoring
- Live host detection
- Screenshot collection
- Technology fingerprinting
- Known vulnerability scanning (Nuclei templates)
- Historical URL collection
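Two of the less obvious items above - screenshot collection and technology fingerprinting - are one-liners in practice. A minimal sketch, assuming httpx and gowitness are installed (gowitness subcommands differ between major versions, so check gowitness --help first):
# Fingerprint the tech stack of every live host with httpx
httpx -l live_hosts.txt -tech-detect -o tech.txt
# Screenshot every live host with gowitness (v2 syntax shown)
gowitness file -f live_hosts.txt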
Keep manual (requires human judgment):
- Business logic testing - automation can't understand application purpose
- Complex authentication flows - require understanding of session handling
- Chained vulnerabilities - need creative thinking to connect bugs
- Report writing - quality reports require human explanation
- Impact assessment - determining real-world consequences
Automation Techniques
Continuous Subdomain Monitoring
This script runs daily, compares today's subdomains with yesterday's, and alerts you to new discoveries:
#!/bin/bash
# monitor.sh - Alert on new subdomains
# Run daily via cron job
TARGET=$1 # First argument: the domain to monitor
# Safety check: make sure a target was provided (same pattern as recon.sh)
if [ -z "$TARGET" ]; then
    echo "Usage: ./monitor.sh target.com"
    exit 1
fi
DATA_DIR="./monitor/$TARGET"
mkdir -p "$DATA_DIR"
# File paths
OLD_SUBS="$DATA_DIR/subs_previous.txt"
NEW_SUBS="$DATA_DIR/subs_current.txt"
DIFF_FILE="$DATA_DIR/subs_new.txt"
# Get current subdomains
echo "[*] Scanning $TARGET for subdomains..."
subfinder -d "$TARGET" -silent -o "$NEW_SUBS"
# First run? Just save the baseline
if [ ! -f "$OLD_SUBS" ]; then
    cp "$NEW_SUBS" "$OLD_SUBS"
    echo "[*] First run - baseline saved"
    exit 0
fi
# Compare: find subdomains in NEW that aren't in OLD
# comm -13 shows lines unique to file 2 (the new subdomains)
comm -13 <(sort "$OLD_SUBS") <(sort "$NEW_SUBS") > "$DIFF_FILE"
# Check if we found anything new
if [ -s "$DIFF_FILE" ]; then # -s checks if file has content
    NEW_COUNT=$(wc -l < "$DIFF_FILE")
    echo "[!] Found $NEW_COUNT NEW subdomains!"
    echo ""
    cat "$DIFF_FILE"
    echo ""
    # Optional: send notification (uncomment and configure)
    # notify_discord "New subdomains for $TARGET: $(tr '\n' ' ' < "$DIFF_FILE")"
else
    echo "[*] No new subdomains found"
fi
# Update baseline for next run
cp "$NEW_SUBS" "$OLD_SUBS"
# --- TO RUN DAILY ---
# Add to crontab (run: crontab -e)
# 0 6 * * * /path/to/monitor.sh target.com >> /path/to/monitor.log 2>&1
#
# This runs at 6 AM every day
# Cron format: minute hour day month weekday command
Nuclei Vulnerability Scanning
Nuclei uses templates to scan for known vulnerabilities. It's like having thousands of security checks automated:
# INSTALLATION
go install -v github.com/projectdiscovery/nuclei/v3/cmd/nuclei@latest
# Update templates (run regularly - new vulns added often)
nuclei -update-templates
# What it does: Downloads latest vulnerability detection templates
# Templates cover: CVEs, misconfigs, exposed panels, takeovers, etc.
# ─────────────────────────────────────────────────────────
# SCANNING EXAMPLES
# Scan all live hosts with all templates
nuclei -l live_hosts.txt -o all_findings.txt
# Warning: This can take hours on large lists. Start targeted.
# Scan for subdomain takeovers only
nuclei -l live_hosts.txt -t takeovers/ -o takeovers.txt
# What it does: Checks for GitHub Pages, Heroku, S3, etc. takeovers
# Fast scan, high-value findings
# Scan for exposed admin panels
nuclei -l live_hosts.txt -t exposed-panels/ -o panels.txt
# What it does: Finds login pages, admin interfaces, dashboards
# Good for finding forgotten admin panels
# Scan for critical and high severity only
nuclei -l live_hosts.txt -severity critical,high -o critical_findings.txt
# Use when you want to focus on impactful issues
# Scan for specific CVEs
nuclei -l live_hosts.txt -t cves/2024/ -o cve_findings.txt
# Tests for known vulnerabilities from 2024
# ─────────────────────────────────────────────────────────
# INTERPRETING RESULTS
# Nuclei output format:
# [template-id] [protocol] [severity] [matched-url] [extracted-info]
#
# Example:
# [git-config-exposure] [http] [medium] [https://target.com/.git/config]
#
# This tells you:
# - What was found (the git-config-exposure template matched)
# - Severity (medium)
# - Where (https://target.com/.git/config)
#
# IMPORTANT: Nuclei finds POTENTIAL issues
# Always verify findings manually before reporting
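Verifying the example above takes seconds. A minimal check (the URL is illustrative):
# Fetch the flagged file and confirm it is really a git config
curl -s https://target.com/.git/config
# A genuine exposure returns INI-style content starting with [core];
# a 404 page or empty body means the template false-positived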
Notification Integration
Get alerts when automation finds something interesting:
# Discord Webhook Notification
# 1. Create webhook: Server Settings → Integrations → Webhooks → New Webhook
# 2. Copy the webhook URL
notify_discord() {
    local MESSAGE="$1"
    local WEBHOOK_URL="https://discord.com/api/webhooks/YOUR_WEBHOOK_ID/YOUR_TOKEN"
    # Send POST request to Discord
    curl -s -X POST "$WEBHOOK_URL" \
        -H "Content-Type: application/json" \
        -d "{\"content\": \"$MESSAGE\"}"
}
# Usage in your scripts:
if [ -s "$DIFF_FILE" ]; then
    notify_discord "🔔 New subdomains found for $TARGET: $(tr '\n' ' ' < "$DIFF_FILE")"
fi
# ─────────────────────────────────────────────────────────
# Slack Webhook (similar approach)
notify_slack() {
    local MESSAGE="$1"
    local WEBHOOK_URL="https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK"
    curl -s -X POST "$WEBHOOK_URL" \
        -H "Content-Type: application/json" \
        -d "{\"text\": \"$MESSAGE\"}"
}
# ─────────────────────────────────────────────────────────
# Telegram Bot Notification
notify_telegram() {
    local MESSAGE="$1"
    local BOT_TOKEN="YOUR_BOT_TOKEN"
    local CHAT_ID="YOUR_CHAT_ID"
    curl -s "https://api.telegram.org/bot${BOT_TOKEN}/sendMessage" \
        -d "chat_id=${CHAT_ID}" \
        -d "text=${MESSAGE}"
}
# Tip: Keep webhook URLs in a separate config file, not in scripts
# source ~/.config/notify_config.sh
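A minimal sketch of that config pattern (the file path and variable name are illustrative):
# ~/.config/notify_config.sh - chmod 600, keep out of version control
DISCORD_WEBHOOK_URL="https://discord.com/api/webhooks/YOUR_WEBHOOK_ID/YOUR_TOKEN"
# In your scripts, load the config and reference the variable:
source ~/.config/notify_config.sh
notify_discord() {
    curl -s -X POST "$DISCORD_WEBHOOK_URL" \
        -H "Content-Type: application/json" \
        -d "{\"content\": \"$1\"}"
}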
Automation Success Stories
🏆 First to a New Subdomain
A hunter's monitoring script detected a new subdomain (staging-api.target.com) at 3 AM. By 3:15 AM, automated Nuclei scans found debug mode enabled. By 4 AM, the hunter had written and submitted a report for exposed database credentials. $5,000 bounty - all because automation ran while they slept.
Lesson: Continuous monitoring + automated scanning = first-mover advantage on new assets.
🏆 Scaling to 50 Programs
A part-time hunter couldn't compete with full-time researchers manually. They built a pipeline that monitored 50 programs, ran recon nightly, and alerted on new subdomains or Nuclei findings. In 6 months, they earned $30,000+ - all from automated discoveries they manually verified and reported.
Lesson: Automation lets you monitor programs at scale. You become your own security team.
Responsible Automation Practices
⚠️ Rules for Automated Testing
1. Respect rate limits. Don't overwhelm targets with requests. Use reasonable thread counts (10-50, not 1000). Add delays if needed. Getting blocked helps no one.
2. Check program policies. Some programs prohibit automated scanning entirely. Others have specific rules about scan intensity. Read before running.
3. Don't scan out of scope. Ensure your automation only touches in-scope assets. A wildcard (*.target.com) doesn't mean you can scan their vendors. A simple filter enforces this (see the sketch after this list).
4. Verify before reporting. Nuclei findings are potential issues, not confirmed vulnerabilities. Always manually verify that the issue exists and is exploitable before submitting a report.
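Scope filtering (rule 3) can be a single grep. A minimal sketch - out_of_scope.txt is an illustrative file you maintain from the program policy:
# out_of_scope.txt lists excluded hosts from the program page, e.g.:
#   vendor-portal.target.com
#   legacy.target.com
# Keep only subdomains that match none of those entries
grep -vFf out_of_scope.txt subdomains.txt > in_scope.txt
# Feed in_scope.txt (not subdomains.txt) to httpx, nuclei, etc.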
Frequently Asked Questions
Do I need to know programming to automate?
Foundational bash scripting is enough to start - the examples in this chapter work as-is. You can copy, modify, and combine these scripts without deep programming knowledge. As you progress, learning Python opens more possibilities for custom tools. Start with bash, learn by modifying existing scripts, and grow from there.
Where should I run my automation?
VPS (recommended): A $5-10/month server from DigitalOcean, Linode, or Vultr runs 24/7, has stable bandwidth, and keeps your home IP separate from testing. Great for continuous monitoring.
Local machine: Fine for on-demand scans. Use cron jobs for scheduled tasks. Keep in mind your computer needs to be on and connected.
Cloud functions: AWS Lambda, Google Cloud Functions for lightweight periodic tasks. More complex to set up but scales well.
Won't automation get me blocked?
It can if you're aggressive. Best practices to avoid blocks:
- Use reasonable thread counts (start with 10-20)
- Add delays or rate limits between requests (many tools have -delay or -rate-limit flags; see the example after this list)
- Use a VPS with a clean IP (residential IPs get blocked faster)
- Rotate through multiple programs rather than hammering one
- Lean on passive recon where possible (subfinder queries third-party sources rather than the target itself, so it won't trigger blocks)
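A throttled scan might look like this (these flags exist in current projectdiscovery httpx and nuclei releases, but confirm against your installed version's help output):
# Probe hosts gently: 10 threads, capped at 10 requests/second
httpx -l in_scope.txt -threads 10 -rate-limit 10 -o live_hosts.txt
# Scan with nuclei at the same request cap and modest concurrency
nuclei -l live_hosts.txt -rate-limit 10 -c 10 -o findings.txt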
Can I report Nuclei findings directly?
No - always verify first. Nuclei templates can have false positives. Before reporting:
1. Manually visit the URL and confirm the issue
2. Understand what you're looking at (don't report "Nuclei said so")
3. Assess the actual impact (some findings are low/no impact)
4. Write a proper report explaining the vulnerability
Programs quickly lose patience with hunters who spam unverified scanner output.
What's a cron job?
Cron is a Linux scheduler that runs commands at specified times. To edit your cron schedule:
crontab -e
Format: minute hour day month weekday command
Examples:
0 6 * * * /path/to/script.sh - Runs daily at 6 AM
0 */4 * * * /path/to/script.sh - Runs every 4 hours
0 0 * * 0 /path/to/script.sh - Runs weekly on Sunday
🎯 You Can Automate Your Hunting!
Recon pipelines, continuous monitoring, Nuclei scanning, notifications - you now have the tools to scale your bug bounty efforts. Your automation runs while you sleep, catching new assets before competitors.
Ready to build your bug bounty career →