Avoiding Duplicates & N/A

Maximizing your acceptance rate and minimizing wasted effort

Strategy · Timing · Target Selection

What You'll Discover

🎯 Why This Matters

Nothing is more frustrating than spending hours on a bug only to get "Duplicate" or "Not Applicable." Understanding why reports get rejected helps you hunt smarter. The most effective hunters have high signal (valid reports) and low noise (duplicates/N/A).

🔍 What You'll Learn

  • Why reports get marked duplicate
  • Why reports get marked N/A
  • Strategies to reduce rejection rate
  • When to hunt on new vs established programs
  • Learning from rejections

🚀 Your First Win

In 20 minutes, you'll have strategies to maximize your valid report rate.

Skills You'll Master

Strategic Selection

Choose programs where you can compete effectively

Quality Assessment

Evaluate findings before submitting

Rejection Analysis

Learn from feedback to improve future reports

Signal Optimization

Build a strong valid-to-total report ratio

Understanding Key Terms

Signal

Your ratio of accepted (valid) reports to total submitted reports. High signal means most of your reports are valid, not duplicates or N/A. Platforms use signal to decide private program invitations. Aim for 70%+ signal - for example, 14 valid reports out of 20 submissions.

Duplicate

Someone else reported the same vulnerability before you. The first reporter gets credit; subsequent reports are marked duplicate. Duplicates lower your signal but are inevitable - the goal is minimizing them.

N/A (Not Applicable)

The report doesn't qualify as a valid security vulnerability. Reasons include: out of scope, no real impact, intended behavior, or known/accepted risk. N/A hurts signal more than duplicates because it suggests misunderstanding.

Valid/Resolved

The report was accepted as a legitimate security issue. It may be triaged (confirmed), resolved (fixed), or awarded (paid). Valid reports build your signal and reputation, leading to private program invitations.

🔧 Pre-Submission Checklist

Ask yourself these questions before clicking submit:

# SCOPE CHECK
[ ] Is this asset explicitly listed in scope?
    # If scope says "app.company.com" but you found the bug
    # on "api.company.com", verify api is also in scope
[ ] Is this vulnerability type accepted?
    # Some programs exclude certain vuln types
    # Check the "Exclusions" or "Out of Scope" section

# IMPACT CHECK
[ ] Have I demonstrated actual security impact?
    # "Missing header" alone isn't enough
    # Show what an attacker can DO with this
[ ] Can I reproduce this reliably?
    # If you can't reproduce it twice, don't submit
[ ] Would a reasonable security team care about this?
    # Think from their perspective - is this worth fixing?

# QUALITY CHECK
[ ] Is this more than just a known CVE in a dependency?
    # "Your server runs Apache 2.4.49" isn't helpful
    # unless you can actually exploit it
[ ] Is the required user interaction realistic?
    # "User must paste this 500-char payload" won't fly

# If ANY answer is "no" or "unsure" - reconsider submitting.

Remember: One valid report is worth more than ten rejected ones. Protect your signal.
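
To make the scope check at the top of this checklist concrete, here is a minimal sketch in Python. The scope patterns and hostnames are hypothetical placeholders, not any real program's policy; substitute the actual entries from the program page.

from fnmatch import fnmatch

# Hypothetical scope copied from a program page - replace with the real entries.
IN_SCOPE = ["app.company.com", "*.api.company.com"]
OUT_OF_SCOPE = ["staging.company.com"]

def scope_status(host):
    """Return a rough in/out-of-scope verdict for a hostname."""
    host = host.lower().rstrip(".")
    if any(fnmatch(host, pattern) for pattern in OUT_OF_SCOPE):
        return "out of scope (explicitly excluded)"
    if any(fnmatch(host, pattern) for pattern in IN_SCOPE):
        return "in scope"
    return "not listed - confirm with the program before testing"

for host in ["app.company.com", "api.company.com", "v2.api.company.com"]:
    print(f"{host}: {scope_status(host)}")

Note that with these example patterns, api.company.com itself is not covered by *.api.company.com - exactly the kind of gap the checklist warns about, so confirm before testing rather than after submitting.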

Understanding Rejections

"Every rejection is data. Learn why and adapt your strategy."

Why Reports Get Marked Duplicate

# COMMON REASONS FOR DUPLICATES

1. Popular programs attract hundreds of hunters
   # The most famous programs (Google, Meta) have thousands
   # of people testing. Easy bugs get found immediately.

2. Obvious bugs found quickly after launch
   # When a program launches, experienced hunters jump in
   # and find surface-level bugs within hours.

3. Testing common vulnerabilities on obvious endpoints
   # /login, /reset-password, /api/user - everyone tests these
   # The XSS in the main search box? Already found.

4. Not going deep enough into the application
   # Surface testing finds surface bugs (that others found)
   # Deep testing finds unique bugs that are still there.

# HOW TO REDUCE DUPLICATES

✓ Hunt on newer or less popular programs
  # New programs haven't been picked clean yet
  # Smaller platforms (Intigriti) have less competition

✓ Test less obvious functionality
  # Admin panels, premium features, edge cases
  # Features that require setup to test

✓ Go deeper into authentication and business logic
  # These bugs require understanding the application
  # Automation can't find them, so there's less competition

✓ Don't rely solely on automated scanners
  # Everyone runs the same scanners
  # Scanner findings = high duplicate rate

✓ Specialize in a niche
  # Mobile apps, APIs, specific tech stacks
  # Fewer hunters = fewer duplicates

Why Reports Get Marked N/A

# OUT OF SCOPE
Testing *.company.com when scope says app.company.com only
# Solution: Read scope carefully before testing

Third-party services the company doesn't control
# Zendesk, Intercom, analytics services - not their code
# Solution: Check if the asset is actually owned by the company (see the sketch after this list)

Staging/development environments explicitly excluded
# Often listed as out of scope for a reason

# NO REAL IMPACT
Self-XSS (only affects the person entering it)
# You can't exploit anyone else - it's not a vulnerability

Missing security headers without demonstrated exploit
# "Missing X-Frame-Options" alone isn't a bug
# Need to show exploitable clickjacking on a sensitive page

Theoretical attacks without proof
# "An attacker could potentially..." - show what you CAN do

# INTENDED BEHAVIOR
"Bug" is actually a documented feature
# What looks wrong might be by design
# Check documentation before reporting

Rate limiting that seems weak to you
# If there's ANY rate limiting, they likely considered it
# Need to show actual bypass, not "I think it should be lower"

# KNOWN/ACCEPTED RISK
Company is aware and chose to accept the risk
# Business decision to leave as-is
# Nothing you can do about this one
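
One quick way to sanity-check the third-party question above is to look at where a hostname actually points. Below is a minimal sketch, assuming the dnspython package is installed and using hypothetical hostnames and vendor suffixes, that flags hosts whose CNAME resolves to a common SaaS provider.

import dns.resolver  # assumes: pip install dnspython

# Hypothetical vendor suffixes that usually indicate a third-party service
THIRD_PARTY_SUFFIXES = (".zendesk.com", ".intercom.io", ".herokuapp.com")

def cname_target(host):
    """Return the CNAME target for a host, or None if it has no CNAME."""
    try:
        answers = dns.resolver.resolve(host, "CNAME")
        return str(answers[0].target).rstrip(".")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return None

def looks_third_party(host):
    target = cname_target(host)
    return bool(target) and target.endswith(THIRD_PARTY_SUFFIXES)

for host in ["support.company.com", "app.company.com"]:
    verdict = "likely third-party hosted" if looks_third_party(host) else "check manually"
    print(f"{host}: {verdict}")

A CNAME pointing at a vendor does not automatically make a finding invalid (subdomain takeovers, for example, are often in scope), but it is a strong hint to re-read the policy before reporting the vendor's own bugs to the program.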

Strategies to Increase Acceptance

Strategic Target Selection

# TIMING-BASED STRATEGIES

New programs (first 30 days)
  Why: Less competition, more undiscovered bugs
  How: Filter by launch date, check "New" sections
  Watch: Platform announcements for new programs

Programs with recent scope updates
  Why: New assets = fresh hunting ground
  How: Follow programs, check for announcements
  Watch: "Scope expanded to include..." notifications

Recently updated applications
  Why: New code = new bugs
  How: Check company blogs, changelogs, app store updates
  Watch: Feature announcements, version updates

# PLATFORM-BASED STRATEGIES

Less popular platforms
  Why: Smaller researcher pool = less competition
  Options: Intigriti, YesWeHack, Synack
  Tradeoff: Fewer programs, but less crowded

Industries with fewer hackers
  Why: Healthcare, finance, retail - less "sexy" but lucrative
  Bonus: Often have weaker security practices
  Note: May require compliance understanding (HIPAA, etc.)

Testing Approach

# GO BEYOND THE OBVIOUS

Test mobile apps instead of web only
  Why: Fewer hunters test mobile, more unique findings
  How: Set up proxy, intercept app traffic, test APIs

Focus on new features (check changelogs)
  Why: New code hasn't been tested as thoroughly
  How: Follow company engineering blogs, app updates

Business logic over technical vulnerabilities
  Why: Scanners can't find these, requires human thinking
  Examples: Price manipulation, workflow bypass, permission issues

API testing (often overlooked)
  Why: APIs expose functionality without UI protections
  How: Map all endpoints, test authorization on each (see the sketch after this list)

# TIMING YOUR SUBMISSIONS

Report quickly when you find something solid
  Why: Every hour delay increases duplicate risk
  Tip: Don't wait for "perfect" writeup if bug is clear

Balance speed with quality
  Why: Rushed reports get rejected for lack of detail
  Tip: Include reproduction steps and impact, even if brief
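
To illustrate the "test authorization on each endpoint" idea from the API item above, here is a minimal sketch using Python and the requests library. The base URL, endpoint paths, and tokens are hypothetical placeholders you would replace with your own two test accounts; it replays the same requests as two different users and flags endpoints where the second user can read the first user's data.

import requests

BASE = "https://api.company.example"   # hypothetical API base URL
ENDPOINTS = ["/v1/users/1001", "/v1/invoices/2002", "/v1/orders/3003"]

TOKEN_A = "token-for-user-a"   # owns the objects above
TOKEN_B = "token-for-user-b"   # should NOT be able to see user A's objects

def fetch(path, token):
    return requests.get(BASE + path,
                        headers={"Authorization": f"Bearer {token}"},
                        timeout=10)

for path in ENDPOINTS:
    a = fetch(path, TOKEN_A)
    b = fetch(path, TOKEN_B)
    # If user B gets a 200 with the same body user A sees, authorization is
    # likely missing on this endpoint - a candidate IDOR/BOLA finding.
    if a.status_code == 200 and b.status_code == 200 and a.text == b.text:
        print(f"[!] possible authorization gap: {path}")
    else:
        print(f"[ ] looks enforced: {path} (user B got {b.status_code})")

Identical responses can also mean the data is genuinely public, so confirm real impact manually before submitting.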

Learning From Rejections

Duplicate Pattern Analysis

If you're getting duplicates on obvious bugs (XSS in search, IDOR on main endpoints), you're testing too broadly and competing with everyone. Solution: Go deeper on fewer targets. Learn one application inside and out. Test features that require setup, premium accounts, or specific states. Specialize in an area others avoid.

N/A Pattern Analysis

Read rejection reasons carefully. Was it scope? Impact? Intended behavior? Each N/A teaches you something about what counts as a valid vulnerability. Track your rejections: If you keep getting N/A for "missing headers," stop reporting missing headers without demonstrated exploit. Adjust your understanding based on feedback.

Continuous Improvement

Keep a rejection log. Note the reason for each duplicate/N/A. Look for patterns after 10-20 rejections. Are you consistently having scope issues? Impact problems? Testing the same bug types everyone else tests? Use this data to adjust your strategy systematically.
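
A rejection log needs no special tooling, but keeping it as a simple CSV makes patterns easy to spot. The sketch below uses only the Python standard library; the file name and column layout (date, program, title, status, reason) are assumptions for illustration.

import csv
from collections import Counter

def summarize(path="rejections.csv"):
    # Assumed columns: date, program, title, status, reason
    # where status is "valid", "duplicate", or "n/a".
    statuses, reasons = Counter(), Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            statuses[row["status"]] += 1
            if row["status"] != "valid":
                reasons[row["reason"]] += 1

    total = sum(statuses.values())
    if total:
        signal = 100 * statuses.get("valid", 0) / total
        print(f"signal: {signal:.0f}% valid across {total} reports")
    for reason, count in reasons.most_common(5):
        print(f"{count:>3}  {reason}")

summarize()

Review the top reasons every 10-20 reports and adjust what you test, and what you choose to submit, accordingly.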

Frequently Asked Questions

Should I challenge a duplicate/N/A?

Only if you genuinely believe there's a misunderstanding. Provide additional evidence politely: "I'd like to clarify the impact..." Don't argue aggressively - it damages your reputation and wastes everyone's time. Most rejections are correct. Accept them gracefully and learn from the feedback.

What's a good acceptance rate?

Top hunters aim for 70%+ valid reports. If you're below 50%, you're submitting too many low-quality reports. Quality over quantity - fewer, better-validated reports are more effective for your reputation, your earnings, and your learning. One solid P2 is worth more than five N/A reports.

How do I check if a bug might be duplicate before submitting?

You can't definitively know, but you can estimate: Is this an obvious bug on a main endpoint? Was it found by a common scanner? Has the program been running for years? If yes to all, duplicate risk is high. Consider whether to invest time in a better writeup or move on. The more unique your testing approach, the lower your duplicate rate.

Is it worth hunting on very popular programs?

Yes, but with a different strategy. Popular programs pay well and have broad scope, but surface bugs are gone. Success requires deep testing: understanding their specific tech stack, finding bugs in business logic, testing edge cases others miss. If you're willing to invest the time to learn an application deeply, popular programs can be lucrative.

🎯 You Can Maximize Acceptance!

Strategic target selection, thorough pre-submission checks, and learning from feedback - you now have the tools to minimize wasted effort and maximize valid findings. Remember: signal matters more than volume.
