Building Your Methodology

A systematic approach to finding vulnerabilities consistently

Checklist · Workflow · Prioritization

What You'll Discover

🎯 Why This Matters

Random testing finds random bugs. A methodology finds bugs consistently. Having a checklist ensures you never miss common vulnerabilities, while structure helps you work efficiently. The best hunters all have their own methodology - now you'll build yours.

🔍 What You'll Learn

  • Creating a testing checklist
  • Feature-based vs vulnerability-based testing
  • Prioritizing what to test first
  • Tracking tested vs untested areas
  • Evolving your methodology over time

🚀 Your First Win

In 20 minutes, you'll have a structured testing approach that makes your hunting more effective.

Skills You'll Master

Systematic Testing

Never miss a vulnerability class through structured checklists

Prioritization

Focus your time on high-impact areas first

Progress Tracking

Know what you've tested and what remains

Continuous Improvement

Evolve your methodology based on what works

🔧 Your Testing Checklist (With Explanations)

Use this for every feature you test. Each item is explained so you understand what you're looking for:

# CORE TESTING CHECKLIST

[ ] Authentication Bypass
    # Can you access protected pages without logging in?
    # Try removing auth tokens, manipulating cookies
    # What happens if you visit /admin directly?

[ ] IDOR/BOLA (change IDs in all parameters)
    # IDOR = Insecure Direct Object Reference
    # BOLA = Broken Object Level Authorization (API term)
    # Both mean the same thing: changing IDs to access
    # other users' data (orders, profiles, documents)

[ ] Authorization (access other users' data)
    # Different from authentication - you're logged in
    # but can you do things you shouldn't?
    # User A accessing User B's private data

[ ] XSS (reflected, stored, DOM)
    # Can you inject JavaScript that executes?
    # Reflected: In URL parameters, shows in response
    # Stored: In database, affects other users
    # DOM: Client-side JS handles your input unsafely

[ ] SQL Injection
    # Database queries built with your input
    # Test: ' OR 1=1-- , sleep(5), error-based payloads
    # Signs: Database errors, time delays, different behavior

[ ] CSRF (state-changing actions)
    # Cross-Site Request Forgery
    # Can you trick a logged-in user into taking actions?
    # Missing CSRF tokens on password change, email change, etc.

[ ] Open Redirect
    # URL parameter controls where users go
    # /redirect?url=https://evil.com
    # Useful for phishing, OAuth token theft

[ ] Information Disclosure
    # Sensitive data exposed: API keys, internal IPs
    # Debug endpoints, verbose error messages, .git folders
    # Check /robots.txt, /sitemap.xml for hidden paths

[ ] Rate Limiting (on sensitive functions)
    # No limit on login attempts = brute force possible
    # No limit on password reset = account enumeration
    # No limit on OTP verification = bypass 2FA

[ ] Business Logic Flaws
    # Application-specific vulnerabilities
    # Buy negative quantities, skip steps in checkout
    # Apply discount codes multiple times

Save this: Check every box for every feature. Consistency finds bugs that random testing misses.
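One lightweight way to enforce the checklist is to track it per feature in code rather than in your head. A minimal sketch (the class and check names are illustrative, not from any real tool):

```python
# Minimal per-feature checklist tracker (illustrative sketch).
# One FeatureChecklist per feature; mark items as you test them.
CHECKS = [
    "auth_bypass", "idor", "authorization", "xss", "sqli",
    "csrf", "open_redirect", "info_disclosure", "rate_limiting", "logic_flaws",
]

class FeatureChecklist:
    def __init__(self, feature):
        self.feature = feature
        self.done = {check: False for check in CHECKS}

    def mark(self, check):
        # Refuse unknown names so typos don't silently create gaps.
        if check not in self.done:
            raise ValueError(f"unknown check: {check}")
        self.done[check] = True

    def remaining(self):
        return [c for c, tested in self.done.items() if not tested]

    def coverage(self):
        return sum(self.done.values()) / len(self.done)

profile = FeatureChecklist("user profile")
profile.mark("idor")
profile.mark("xss")
print(f"{profile.coverage():.0%} covered, remaining: {profile.remaining()}")
```

The `remaining()` list is the point: at a glance you know exactly which classes you have not yet tried against this feature.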

Why Methodology Works

The Problem With Random Testing

Without structure, you might spend hours on one feature while completely ignoring another that has critical vulnerabilities. You test XSS on login but forget to check the profile page. You find one IDOR but miss five others because you didn't systematically check all endpoints. Random testing produces random results - sometimes you get lucky, often you don't.

The Power of Systematic Testing

A methodology ensures coverage. When you have a checklist, you know that every feature gets tested for every vulnerability class. You won't accidentally skip the payment endpoint because you got distracted by something shiny on the profile page. Consistent effort produces consistent results.

Two Testing Approaches

"A good methodology is a living document - improve it with every target."

Feature-Based Testing

Pick a feature, test every vulnerability type on it before moving on. This approach helps you understand the application deeply.

# FEATURE-BASED TESTING
# Example: User Profile Feature

STEP 1: Map all endpoints related to profiles
  GET  /api/users/me          # Your own profile
  GET  /api/users/{id}        # Any user's profile
  PUT  /api/users/me          # Update your profile
  POST /api/users/me/avatar   # Upload profile picture
  GET  /api/users/me/settings # Account settings

STEP 2: Run through your checklist on this feature
  [x] IDOR: Can I GET /api/users/OTHER_USER_ID?
  [x] IDOR: Can I PUT to /api/users/OTHER_USER_ID?
  [x] XSS: Inject payloads in name, bio, location fields
  [x] Mass Assignment: Add "role": "admin" to PUT request
  [x] File Upload: What happens with .php, .svg, huge files?
  [x] Info Disclosure: Does response include sensitive fields?

STEP 3: Move to next feature
  Now test: payments, messaging, settings, etc.

WHY THIS WORKS:
- Deep understanding of one area before moving
- Finds complex bugs that span multiple endpoints
- Builds mental model of how the feature works

Vulnerability-Based Testing

Pick a vulnerability type, test every feature for it. This approach is efficient for systematic coverage.

# VULNERABILITY-BASED TESTING
# Example: Testing for IDOR across the entire application

STEP 1: Find all endpoints with IDs
  /api/orders/{id}
  /api/documents/{id}
  /api/invoices/{id}
  /api/users/{id}/data
  /api/messages/{id}
  /api/projects/{id}

STEP 2: Test IDOR on each one systematically
  Requirements:
  - Create 2 accounts (Account A and Account B)
  - Create resources in both accounts

  Test each endpoint:
  - Can Account A access Account B's orders?
  - Can Account A modify Account B's documents?
  - Test GET, PUT, PATCH, DELETE methods
  - Check both numeric IDs and UUIDs

STEP 3: Move to next vulnerability type
  Now test: XSS everywhere, then CSRF everywhere, etc.

WHY THIS WORKS:
- Efficient for one specific vulnerability class
- You get into "IDOR hunting mode" - pattern recognition
- Easy to track: "I tested all IDOR, moving to XSS"
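The two-account IDOR sweep can be expressed as a loop: authenticate as Account A, request Account B's resources, and flag any endpoint that answers 200. This sketch substitutes a fake backend for real HTTP calls so the logic is visible; in practice the requests go through your proxy with Account A's session:

```python
# Vulnerability-based IDOR sweep (sketch). fake_backend stands in for
# real HTTP requests made as Account A against Account B's resource IDs;
# here one endpoint checks ownership and one (the bug) does not.
def fake_backend(endpoint, session_user, resource_owner):
    if endpoint == "/api/orders/{id}" and session_user != resource_owner:
        return 403  # ownership enforced
    return 200      # /api/invoices/{id} skips the check

def sweep_idor(endpoints, backend):
    """Request Account B's resources while authenticated as Account A."""
    findings = []
    for ep in endpoints:
        status = backend(ep, session_user="A", resource_owner="B")
        if status == 200:  # cross-account access succeeded
            findings.append(ep)
    return findings

print(sweep_idor(["/api/orders/{id}", "/api/invoices/{id}"], fake_backend))
# only /api/invoices/{id} is flagged
```

A real sweep also varies the HTTP method (GET, PUT, PATCH, DELETE) per endpoint, exactly as the steps above describe.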

Which approach should you use?

Most hunters combine both. Use feature-based when learning a new target to understand how it works. Use vulnerability-based when you want to ensure complete coverage of one bug class. There's no single right answer - develop what works for your style.

High-Impact Areas to Prioritize

Not all features are equal. These areas have the highest likelihood of critical vulnerabilities and the biggest payouts:

# PRIORITIZED TESTING ORDER (highest impact first)

1. Authentication/Login flows
   Why: Account takeover = Critical severity
   Test: Password reset, OAuth, session handling

2. Password reset functionality
   Why: Often has token leakage, enumeration, bypass
   Test: Token in URL, weak tokens, no rate limiting

3. Payment/financial features
   Why: Direct financial impact, always high severity
   Test: Price manipulation, IDOR on transactions

4. Admin/settings panels
   Why: Privilege escalation potential
   Test: Can regular users access admin endpoints?

5. File upload functions
   Why: Can lead to RCE, stored XSS, path traversal
   Test: Extension bypass, content-type manipulation

6. API endpoints with user IDs
   Why: IDOR goldmine - developers forget auth checks
   Test: Every endpoint that takes an ID parameter

7. Export/download features
   Why: Often leak sensitive data, SSRF potential
   Test: What data is included? Can you export others' data?

8. Invitation/sharing systems
   Why: Access control complexity = bugs
   Test: Can you escalate shared permissions?

9. OAuth integrations
   Why: Token leakage, account linking issues
   Test: redirect_uri manipulation, state parameter

10. Newly released features
    Why: Less tested, higher bug density
    Where: Check changelogs, blog posts, app updates
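The priority order above can double as a sort key for whatever is still untested on a target. A small sketch (the feature labels are illustrative shorthand for the ten areas):

```python
# Order untested features by the priority list above (sketch).
PRIORITY = [
    "authentication", "password_reset", "payments", "admin_panels",
    "file_upload", "id_endpoints", "export", "sharing", "oauth",
    "new_features",
]

def next_targets(untested):
    """Sort remaining features so the highest-impact areas come first.

    Unknown features sort last rather than raising an error."""
    rank = {name: i for i, name in enumerate(PRIORITY)}
    return sorted(untested, key=lambda f: rank.get(f, len(PRIORITY)))

print(next_targets(["oauth", "payments", "file_upload"]))
# payments, then file_upload, then oauth
```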

Tracking Your Progress

Good notes separate effective hunters from everyone else. When you return to a target months later, your notes tell you exactly what you've done and what remains.

Note Template

# Target: company.com
# Date Started: 2024-01-15
# Last Updated: 2024-01-22

## Tech Stack (discovered during recon)
- Backend: Ruby on Rails (X-Powered-By header)
- Database: PostgreSQL (error messages)
- Frontend: React (source code patterns)
- CDN: Cloudflare

## Features Tested
- [x] User registration
- [x] Login/authentication
- [x] Password reset
- [ ] User profiles
- [ ] Payment system
- [ ] API v2 endpoints (newly discovered)

## Interesting Findings (not yet reported)
- Profile page returns email in JSON even for other users
- Password reset token appears in URL (potential leak via Referer)
- Debug endpoint at /api/debug returns stack traces
- Rate limiting bypassed with X-Forwarded-For header

## Reported Bugs
- #12345: IDOR in /api/orders/{id} (P2) - $500 - Resolved
- #12346: Stored XSS in bio field (P3) - Pending triage

## To Investigate
- [ ] Check if password reset tokens are guessable
- [ ] Test file upload on profile pictures
- [ ] Enumerate /api/v2/ endpoints (found in JS)
- [ ] Check admin panel at /internal/ (referenced in HTML comment)

## Notes
- Company uses Stripe for payments - limited scope there
- Mobile app uses same API - test via proxy
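Because the template uses markdown checkboxes, a few lines of code can report your coverage from the notes file itself. A sketch, assuming the `- [x]` / `- [ ]` convention shown above:

```python
# Parse "Features Tested" checkboxes out of a notes file (sketch).
import re

NOTES = """\
- [x] User registration
- [x] Login/authentication
- [ ] Payment system
"""

def coverage_from_notes(text):
    """Split checkbox lines into tested and untested feature names."""
    boxes = re.findall(r"- \[([ x])\] (.+)", text)
    tested = [name for mark, name in boxes if mark == "x"]
    untested = [name for mark, name in boxes if mark == " "]
    return tested, untested

tested, untested = coverage_from_notes(NOTES)
print(f"tested {len(tested)}, remaining: {untested}")
```

Pointing this at your real notes file each session gives an instant answer to "what's left on this target?"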

Frequently Asked Questions

How detailed should my methodology be?

Start with a 10-item checklist you actually use. A simple checklist executed consistently beats a 200-item checklist you ignore. Add items when you learn new techniques that find bugs. Remove items that never produce results for you. Your methodology should evolve based on what works.

Should I automate or test manually?

Both, for different purposes. Automate reconnaissance, subdomain enumeration, and simple vulnerability checks (missing headers, known CVEs). Manual testing finds business logic flaws, complex authentication issues, and chained vulnerabilities that scanners miss. Automation gives you breadth; manual testing gives you depth. The most effective hunters combine both.

What tools should I use to track my testing?

Whatever you'll actually use. Some hunters use Notion, Obsidian, or dedicated note apps. Others use plain text files in a git repo. Some use spreadsheets. The best system is one you'll maintain. Start with something simple and evolve it as you discover what information you need to track.

How do I know when I'm done testing a target?

You're never truly "done" - applications change constantly. But you can consider a target adequately tested when: you've covered all features with your checklist, you've tested all high-priority areas, and you're finding diminishing returns (spending hours without new findings). At that point, move on but schedule periodic revisits, especially after the target announces updates.

🎯 You Have a Methodology!

You now have a structured approach to testing: a comprehensive checklist with explanations, two testing strategies, prioritization guidance, and a tracking system. Consistent methodology produces consistent results.


You're ready to move on to quick-win vulnerabilities.

Knowledge Check

What does 'P1' typically indicate in bug bounty severity ratings?
