
NeuroStrike Research

Security Research Team

What a Real Breach Simulation Report Looks Like (With Examples)

Most organizations have never seen a real penetration test report. They've seen vulnerability scan outputs — long PDFs listing CVEs sorted by CVSS score, padded with informational findings that nobody acts on. A breach simulation report is fundamentally different. It tells a story: here's how an attacker gets in, what they can access, and what it costs you.

We're sharing the structure and anonymized examples from actual NeuroStrike breach simulation reports so you know what to expect — and what to demand — from your security testing.

Section 1: Executive Summary

One page. Non-technical. Answers three questions:

  1. Can an external attacker gain unauthorized access? (Yes/No, with the specific path)
  2. What data or systems are at risk? (Customer PII, financial records, admin access)
  3. What's the business impact? (Regulatory exposure, financial loss, reputational damage)

Example from a recent engagement:

An external attacker can achieve full administrative access to the application within 4 steps, starting from an unauthenticated position. This grants access to 12,400 customer records including names, email addresses, and payment history. The attack chain exploits a broken password reset flow to take over any user account, followed by a privilege escalation through a mass assignment vulnerability in the profile update API.

Find what scanners miss

NeuroStrike runs autonomous breach simulations that go beyond checkbox security testing.

Start Free

Section 2: Attack Chain Narratives

This is what separates a breach simulation from a vulnerability scan. Each finding isn't isolated — it's presented as a step in an attack chain with clear cause and effect.

Example Attack Chain

Step 1 — Information Disclosure: The /api/v1/users endpoint returns email addresses for all users without authentication. The attacker enumerates 340 email addresses in 2 minutes.

Step 2 — Account Takeover: The password reset endpoint accepts an email and sends a reset link. The reset token is a predictable 6-character alphanumeric string (roughly 2.2 billion combinations for a case-insensitive alphabet). With no rate limiting, the attacker brute-forces a valid token in approximately 8 hours.
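To put the brute-force math in concrete terms, here is a quick sketch. The case-insensitive 36-symbol alphabet and the ~75,000 guesses per second are illustrative assumptions chosen to be consistent with the figures above:

```python
# Sketch: time to brute-force a 6-character, case-insensitive
# alphanumeric reset token with no rate limiting.
# The 36-symbol alphabet and ~75k guesses/sec are illustrative
# assumptions, not measured values from the engagement.

ALPHABET = 26 + 10            # a-z plus 0-9, case-insensitive
TOKEN_LENGTH = 6
GUESSES_PER_SECOND = 75_000   # assumed attacker throughput

keyspace = ALPHABET ** TOKEN_LENGTH
hours_to_exhaust = keyspace / GUESSES_PER_SECOND / 3600

print(f"{keyspace:,} combinations")                 # 2,176,782,336 combinations
print(f"~{hours_to_exhaust:.1f} hours worst case")  # ~8.1 hours worst case
```

A 128-bit random token from a CSPRNG would make the same attack computationally infeasible, which is why token entropy (not just rate limiting) is the primary fix here.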

Step 3 — Privilege Escalation: After resetting a regular user's password and logging in, the attacker sends a PUT request to /api/v1/users/me with {"role": "admin"} in the body. The API accepts it and the user is now an administrator.
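The standard fix for this class of bug is a server-side field allowlist: the API copies only explicitly permitted fields from the request body, so a smuggled "role" key is silently dropped. A minimal sketch, with field names that are assumptions for illustration rather than the tested application's actual schema:

```python
# Sketch: defending a profile-update endpoint against mass assignment.
# Only fields in the allowlist are copied from the client payload;
# anything else ("role", "is_admin", ...) is ignored.
# Field names are illustrative, not from the tested application.

PROFILE_ALLOWLIST = {"display_name", "email", "timezone"}

def safe_profile_update(current: dict, payload: dict) -> dict:
    """Return an updated profile, applying only allowlisted fields."""
    updated = dict(current)
    for field in PROFILE_ALLOWLIST & payload.keys():
        updated[field] = payload[field]
    return updated

profile = {"display_name": "Alice", "email": "a@example.com", "role": "user"}
attack = {"display_name": "Alice", "role": "admin"}   # smuggled role change
print(safe_profile_update(profile, attack)["role"])   # user
```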

Step 4 — Data Exfiltration: As an admin, the attacker accesses /api/v1/admin/export which returns a CSV of all customer data including payment information.

Section 3: Finding Detail

Each finding includes:

  • Severity: Critical / High / Medium / Low / Informational
  • CVSS Score: Calculated per CVSS 3.1
  • Category: OWASP Top 10 mapping
  • Description: What the vulnerability is
  • Evidence: HTTP requests and responses proving exploitation
  • Impact: What an attacker gains
  • Remediation: Specific code or configuration changes
  • References: CWE, OWASP, relevant advisories

Evidence Format

Every finding includes the actual HTTP request and response that demonstrates exploitation. No theoretical risks — only proven exploits.

# Evidence: BOLA in invoice endpoint
# Request
curl -H "Authorization: Bearer user-a-token" \
     https://app.example.com/api/v1/invoices/INV-2024-0099

# Response (200 OK — this invoice belongs to User B)
{
  "id": "INV-2024-0099",
  "customer": "Other Company Inc",
  "amount": 28750,
  "status": "paid"
}
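The corresponding fix for a BOLA like this is an object-level ownership check on every lookup: resolve the invoice, then verify it belongs to the authenticated user before returning it. A minimal sketch, where the in-memory store and field names are assumptions for illustration:

```python
# Sketch: object-level authorization for an invoice lookup.
# The in-memory store and field names are illustrative assumptions.

INVOICES = {
    "INV-2024-0099": {"owner": "user-b", "customer": "Other Company Inc",
                      "amount": 28750, "status": "paid"},
}

class Forbidden(Exception):
    pass

def get_invoice(invoice_id: str, requesting_user: str) -> dict:
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice["owner"] != requesting_user:
        # Same error for "missing" and "not yours", so the response
        # does not leak which invoice IDs exist.
        raise Forbidden("invoice not found")
    return invoice

get_invoice("INV-2024-0099", "user-b")        # owner: succeeds
try:
    get_invoice("INV-2024-0099", "user-a")    # User A's token: rejected
except Forbidden:
    print("403 for cross-tenant access")
```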


Section 4: Risk Matrix

A visual summary showing findings plotted by severity and likelihood. Critical findings with high likelihood sit in the top-right corner (fix immediately). Informational findings with low likelihood sit in the bottom-left (accept or defer).

We also include a "time to exploit" metric for each finding. If a vulnerability can be exploited in under 5 minutes by an automated tool, that's a different urgency than one requiring days of manual effort.
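One way to combine severity, likelihood, and time-to-exploit into a triage order is a simple composite score. This is a sketch with an assumed weighting, not NeuroStrike's actual scoring model:

```python
# Sketch: ranking findings by severity, likelihood, and time to exploit.
# The weights and the <= 5 minute "automatable" boost are illustrative
# assumptions, not an actual NeuroStrike scoring model.

SEVERITY = {"Critical": 4, "High": 3, "Medium": 2, "Low": 1, "Informational": 0}
LIKELIHOOD = {"high": 3, "medium": 2, "low": 1}

def urgency(finding: dict) -> int:
    score = SEVERITY[finding["severity"]] * LIKELIHOOD[finding["likelihood"]]
    if finding["minutes_to_exploit"] <= 5:   # exploitable by automated tooling
        score *= 2
    return score

findings = [
    {"name": "Mass assignment", "severity": "Critical", "likelihood": "high", "minutes_to_exploit": 3},
    {"name": "Missing header", "severity": "Medium", "likelihood": "low", "minutes_to_exploit": 1},
    {"name": "Token brute force", "severity": "High", "likelihood": "medium", "minutes_to_exploit": 480},
]

for f in sorted(findings, key=urgency, reverse=True):
    print(f["name"], urgency(f))
```

Note how the quickly-exploitable mass assignment outranks the brute-force chain even though both are serious: speed of exploitation changes the ordering.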

Section 5: Remediation Roadmap

Findings grouped into three tiers:

  1. Immediate (0-7 days): Critical and high-severity findings with active exploitation risk
  2. Short-term (1-4 weeks): Medium-severity findings and hardening recommendations
  3. Ongoing: Security architecture improvements, monitoring recommendations, and policy changes

Each remediation includes estimated effort (hours), required skill level, and whether it can be automated.
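Grouping findings into those tiers can be as simple as bucketing on severity. A sketch, where the severity-to-tier mapping is an assumption that mirrors the three tiers described above:

```python
# Sketch: bucketing findings into remediation tiers by severity.
# The severity-to-tier mapping is an illustrative assumption that
# mirrors the three tiers described above.

TIER_BY_SEVERITY = {
    "Critical": "Immediate (0-7 days)",
    "High": "Immediate (0-7 days)",
    "Medium": "Short-term (1-4 weeks)",
    "Low": "Ongoing",
    "Informational": "Ongoing",
}

def build_roadmap(findings):
    roadmap = {}
    for f in findings:
        roadmap.setdefault(TIER_BY_SEVERITY[f["severity"]], []).append(f["name"])
    return roadmap

findings = [
    {"name": "Mass assignment", "severity": "Critical"},
    {"name": "Predictable reset token", "severity": "High"},
    {"name": "Missing X-Frame-Options", "severity": "Medium"},
]
print(build_roadmap(findings))
```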

What a Vulnerability Scan Report Looks Like (For Comparison)

A typical DAST scan report for the same application would list:

  • Missing X-Frame-Options header (Medium)
  • Cookie without Secure flag (Low)
  • Server version disclosure (Informational)
  • 2 reflected XSS findings (High)

It would miss the account takeover chain, the mass assignment, the BOLA, and the data export: the findings that actually matter for business risk. The scan report surfaces five findings. The breach simulation report has 14, including 3 critical attack chains.

A vulnerability scan tells you what's misconfigured. A breach simulation tells you what's exploitable. When you're deciding where to invest limited security engineering time, exploitability is what matters.

