The "Duplicate/Informative" Trap: A Bug Hunter's Guide to Open Strapi Registration & Leaky GCS Buckets


We've all been there. You spend hours mapping out a target's infrastructure, chaining misconfigurations, and crafting a flawless report, only to get hit with the dreaded "Duplicate" or "Informative" status on HackerOne or Bugcrowd.

Recently, I submitted a report detailing a dual-threat misconfiguration on a production application: an exposed Strapi CMS and a leaky Google Cloud Storage (GCS) bucket. Even though the report was closed, the technical takeaways regarding supply chain reconnaissance and bug bounty reporting strategies are too good to keep to myself.

Here is a deep dive into the technical details, the exact Proof of Concepts (PoCs), and what the triage response actually taught me about writing better reports.

The Vulnerabilities: When "Intentional" Becomes a Threat

The underlying issue I discovered involved two separate, yet compounding, misconfigurations in the organization's content and distribution infrastructure.

Vulnerability A: Strapi CMS Open Registration

Developers often forget to disable default authentication routes when pushing headless CMS frameworks to production. In this instance, the target's Strapi Enterprise backend allowed unrestricted public self-registration. There was no email verification, no CAPTCHA, and no rate limiting.

The PoC: A simple POST request yields a confirmed account and a valid 30-day JWT.

Bash

curl -sk -X POST \
  "https://victorious-[redacted].strapiapp.com/api/auth/local/register" \
  -H "Content-Type: application/json" \
  -d '{"username":"attacker1","email":"attacker@evil.com","password":"Attack123!"}'

The Response:

JSON

{
  "jwt": "eyJhbGciOiJIUzI1NiIsInR5cCI6Ik...",
  "user": {
    "id": 24,
    "username": "attacker1",
    "email": "attacker@evil.com",
    "confirmed": true,
    "blocked": false
  }
}
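To sanity-check that 30-day claim, you can decode the JWT's payload locally without verifying the signature. The token below is fabricated for illustration (the real one is redacted); the decoding logic is what matters:

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the (unverified) payload segment of a JWT."""
    payload_b64 = token.split(".")[1]
    # Restore the base64 padding that JWTs strip off
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Fabricated example token whose iat/exp claims are 30 days apart
iat = 1_700_000_000
claims = {"id": 24, "iat": iat, "exp": iat + 30 * 86400}
seg = lambda d: base64.urlsafe_b64encode(json.dumps(d).encode()).rstrip(b"=").decode()
token = f"{seg({'alg': 'HS256', 'typ': 'JWT'})}.{seg(claims)}.sig"

payload = decode_jwt_payload(token)
lifetime_days = (payload["exp"] - payload["iat"]) / 86400
print(f"token valid for {lifetime_days:.0f} days")  # → token valid for 30 days
```

Strapi's issued tokens carry standard `iat`/`exp` claims, so the same one-liner works on the real response.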

Beyond open registration, unauthenticated administrative endpoints such as /admin/init and /admin/project-type were openly accessible. These leaked the instance UUID, the Enterprise Edition license tier, and AI credit limits.
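Probing those endpoints is trivial to script. The sketch below keeps the HTTP fetch injectable so you can wire in requests, curl, or your proxy of choice; the stub responses are invented stand-ins for the shape of data that leaked:

```python
# Paths are the Strapi admin endpoints from this engagement
ADMIN_ENDPOINTS = ["/admin/init", "/admin/project-type"]

def probe_strapi_admin(base_url: str, fetch) -> dict:
    """fetch(url) -> (status_code, parsed_json_or_None).
    Returns the bodies of any endpoint that answers without auth."""
    findings = {}
    for path in ADMIN_ENDPOINTS:
        status, body = fetch(base_url.rstrip("/") + path)
        if status == 200 and body:
            findings[path] = body  # leaked to unauthenticated callers
    return findings

# Stub fetcher with invented fields, standing in for a real HTTP client
def fake_fetch(url):
    if url.endswith("/admin/init"):
        return 200, {"data": {"uuid": "REDACTED-UUID", "hasAdmin": True}}
    if url.endswith("/admin/project-type"):
        return 200, {"data": {"isEE": True}}
    return 404, None

leaks = probe_strapi_admin("https://victorious-example.strapiapp.com", fake_fetch)
print(sorted(leaks))  # → ['/admin/init', '/admin/project-type']
```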

Vulnerability B: GCS Bucket Public Object Listing

The GCS bucket backing the target's CLI distribution endpoint had the allUsers group granted the storage.objects.list IAM permission. Any unauthenticated user could enumerate the entire bucket.

The PoC:

Bash

curl -sk "https://cli.[redacted].ai/?max-keys=1000"

The Impact: This returned a full XML <ListBucketResult> dumping over 80 objects. Crucially, it exposed internal nightly build artifacts and metadata.json files that leaked exact internal git commit hashes and branch names. This is a goldmine for supply chain reconnaissance, allowing an attacker to map internal development flows and analyze unreleased builds for vulnerabilities.
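Triaging 80+ objects by hand is tedious, so it helps to parse the listing programmatically. A minimal sketch using the standard library, with a fabricated sample of the XML (a real GCS listing also carries an xmlns attribute you'd need to account for):

```python
import xml.etree.ElementTree as ET

# Fabricated <ListBucketResult> sample; real output has many more keys
sample = """<ListBucketResult>
  <Name>cli-releases</Name>
  <Contents><Key>nightly/2024-05-01/cli-linux-amd64</Key></Contents>
  <Contents><Key>nightly/2024-05-01/metadata.json</Key></Contents>
  <Contents><Key>stable/1.2.3/cli-darwin-arm64</Key></Contents>
</ListBucketResult>"""

root = ET.fromstring(sample)
keys = [c.findtext("Key") for c in root.iter("Contents")]

# The metadata.json files are the recon goldmine: commit hashes, branch names
metadata_files = [k for k in keys if k.endswith("metadata.json")]
nightly_builds = [k for k in keys if k.startswith("nightly/")]
print(len(keys), metadata_files)
```

From there you'd fetch each metadata.json and diff the commit hashes against public release tags to map the internal development flow.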

The Hunter's Methodology: Finding the Flaws

Finding these types of infrastructure vulnerabilities isn't about throwing massive wordlists at a target; it relies on smart, targeted reconnaissance.

1. Familiarize Yourself with API Documentation

Don't just run a generic directory brute-forcer looking for /admin panels. When you identify a specific technology like Strapi, go straight to its developer documentation. Knowing that Strapi uses /api/auth/local/register by default allows you to test for registration flaws instantly.

2. Passive Recon via CSP Headers is King

I found the internal infrastructure mapping by closely reading the HTTP response headers. Whether you are proxying your traffic through Burp Suite or Caido, always check the Content-Security-Policy (CSP) headers. In this case, the CSP leaked internal AWS S3 staging buckets, DigitalOcean Spaces URLs, and media CDN endpoints.
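Mining a CSP header for hosts is easy to automate. A rough sketch, with an invented header value shaped like the one that leaked the infrastructure mapping:

```python
from urllib.parse import urlparse

def csp_hosts(header: str) -> set:
    """Extract hostnames from every source expression in a CSP header."""
    hosts = set()
    for directive in header.split(";"):
        for token in directive.split()[1:]:  # skip the directive name itself
            if token.startswith(("'", "data:", "blob:")):
                continue  # keywords ('self', nonces) and scheme-only sources
            host = urlparse(token if "//" in token else "//" + token).hostname
            if host:
                hosts.add(host)
    return hosts

# Invented header illustrating the kind of leakage described above
csp = ("default-src 'self'; "
       "img-src 'self' https://staging-assets.s3.amazonaws.com "
       "https://media.fra1.digitaloceanspaces.com; "
       "connect-src 'self' https://cdn.example-internal.net")
print(sorted(csp_hosts(csp)))
```

Run this across every response in your proxy history and diff the host set against the program's published scope: anything new is a recon lead.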

3. Automate Cloud Storage Checks

Bucket listing vulnerabilities are prevalent. You can write a quick Bash script to append /?max-keys=10 to any subdomains you suspect are backed by cloud storage. If the server responds with XML instead of an HTTP 403 Forbidden or 405 Method Not Allowed, you have a listing vulnerability.
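The check in step 3 boils down to classifying the response. Here's a sketch of that logic with the HTTP fetch left to curl or your tool of choice; the sample responses are illustrative:

```python
def is_open_listing(status: int, content_type: str, body: str) -> bool:
    """Heuristic: a bucket listing is open if /?max-keys=N returns XML
    containing <ListBucketResult> instead of a 403/405 error."""
    if status != 200:
        return False
    return "xml" in content_type.lower() and "<ListBucketResult" in body

# Illustrative responses: an open bucket vs. a locked-down one
open_resp = (200, "application/xml",
             "<ListBucketResult><Name>cli-releases</Name></ListBucketResult>")
closed_resp = (403, "application/xml",
              "<Error><Code>AccessDenied</Code></Error>")

print(is_open_listing(*open_resp), is_open_listing(*closed_resp))  # → True False
```

Feed it the status, Content-Type, and body from each probe and you can sweep an entire subdomain list in seconds.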

Decoding the Triage Response

My report was ultimately closed as a Duplicate, and the original report it duplicated was marked as Informative. What does this actually mean, and what is the lesson here?

"The original report was evaluated and closed as Informative, with the team confirming that public LIST and READ access… is intentional behavior required to support legitimate CLI downloads and version rollbacks."

  • The "Informative" Reality: The triage team stated that the GCS public listing was by design. In the bug bounty world, functionality often trumps strict security posture. Even though the bucket leaks internal commit hashes and enables supply chain reconnaissance, if the engineering team considers it "working as intended" for rollbacks, it won't be patched or rewarded.
  • The "Duplicate" Trap: Because I combined the GCS bucket issue and the Strapi CMS issue into one cohesive "Supply Chain Infrastructure" report, the analyst closed the entire submission as a duplicate of the known GCS issue.

However, the analyst gave me a massive hint in their sign-off:

"If you believe the Strapi CMS component represents a genuine security concern distinct from intentional functionality, you may consider submitting it as a separate, standalone report…"

The Ultimate Takeaway

Keep your reports atomic. If you find two separate infrastructure flaws, file two separate reports. Do not chain them unless the exploitation of one strictly relies on the other. Because I bundled these findings, the "Informative" GCS issue dragged a highly valid, potentially critical Strapi misconfiguration down into the duplicate graveyard.

Reporting is just as much of a skill as the hacking itself. Keep it separated, keep it clear, and always read your CSP headers!