I'm Sagar, Identity and Security Engineer at Seagate by day, bug bounty hunter by obsession. I spend my days wrestling with CyberArk, Multi-Cloud IAM & Security, and AI security. I spend my nights breaking web apps, weaponizing CI/CD misconfigurations, and abusing dependency ecosystems.
In this article, I'll show you how a simple SSRF turned into admin access and mass PII disclosure.
The Beginning
Some of the most devastating vulnerabilities I've encountered didn't start with complex exploit chains or zero-day research. They started with something almost embarrassingly simple.
In this case, it started with a profile image upload.
I was deep into a routine bug bounty session, exploring a platform that required users to sign up and complete a standard onboarding flow. If you've tested enough applications, you know this dance by heart: sign up with an email, verify using an OTP that lands in your inbox, fill out some profile details, and finally upload a profile picture to personalize your account.
Nothing about this seemed unusual on the surface. The UI was polished, the flow was intuitive, and everything appeared to be working exactly as intended.
But here's the thing about onboarding flows that many developers overlook: they're integration-heavy by nature. They touch authentication systems, file storage services, notification pipelines, and often external APIs. That complexity makes them interesting to security researchers. Very interesting.

The First Observation
While uploading a profile image, I had Burp Suite running in the background, intercepting traffic as I always do. Most of the requests were standard form submissions, API calls for email verification, the usual suspects.
Then I saw something that made me pause.
The request body for the image upload didn't contain a base64-encoded file or multipart form data as I expected. Instead, it contained a URL pointing to the uploaded image.
Let that sink in for a moment.
Instead of directly accepting a file from the user's device, the application was accepting a URL and presumably fetching the image from that location server-side. This architectural choice immediately opened up possibilities that a traditional file upload wouldn't have.
My mind started racing with questions. What if the backend was blindly fetching whatever URL I provided? What if there were no restrictions on where those requests could go? And, most importantly, what if I could make the server talk to places it shouldn't?
To test this theory, I replaced the image URL with a Burp Collaborator endpoint I had generated specifically for this test. It's a simple technique, but sometimes the simple ones tell you everything you need to know.
I forwarded the request and waited.
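The swap itself is trivial. Here's a minimal sketch of what that test looks like; the field name `imageUrl` and the request shape are hypothetical reconstructions, not the platform's actual schema:

```python
import json

# Hypothetical request body -- field names are illustrative;
# the real API used its own schema.
original_body = {
    "userId": "12345",
    "imageUrl": "https://cdn.target.example/uploads/avatar.png",
}

def build_ssrf_probe(body: dict, collaborator_url: str) -> dict:
    """Swap the server-fetched image URL for an attacker-controlled one."""
    probe = dict(body)
    probe["imageUrl"] = collaborator_url
    return probe

# Burp Collaborator payload domain (placeholder)
probe = build_ssrf_probe(original_body, "http://abc123.oastify.com/probe.png")
print(json.dumps(probe))
# Forward this body via Burp Repeater; any DNS or HTTP interaction
# on the Collaborator domain confirms a server-side fetch.
```

The modified body goes back through the proxy unchanged in every other respect, so a callback isolates the URL field as the trigger.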
Unexpected Callbacks
Within seconds, literally seconds, my Collaborator dashboard started lighting up with activity.
But it wasn't just a single request. That would have been too easy, too clean. Instead, I watched as multiple requests hit my endpoint from different IP addresses, each with distinct user agents and timing patterns.
This was the confirmation I needed, and it was telling a more complex story than I initially anticipated.
The application backend was unquestionably making server-side requests to user-controlled URLs. This behavior is the textbook definition of a potential Server-Side Request Forgery (SSRF) vulnerability, and in the right conditions, SSRF can be devastating.
At this point, I could have stopped. I had found a valid security issue that would likely qualify for a bounty. The responsible disclosure process could have begun right then.
But something about those multiple callbacks bothered me. Different IPs meant different systems were processing my payload. Different systems meant a larger attack surface. And a larger attack surface meant there might be more to uncover.
I decided to go deeper.
Following the Trail
The next phase of testing was methodical and time-consuming.
I started analyzing every incoming connection to my Collaborator endpoint, logging the source IPs, the timing of requests, the headers being sent, and any patterns in how the infrastructure was interacting with my payload.
Some requests came from what appeared to be application servers. Others seemed to originate from caching layers or CDN edge nodes. But one cluster of requests stood out immediately.
These connections were coming from an IP range that didn't match the public-facing infrastructure, and the user agents suggested internal services. Instead of stopping there, I started checking every IP that hit my endpoint for open ports: methodical, tedious, and exactly the kind of work many researchers skip, but the kind that separates surface-level findings from critical ones. On one of those IPs, on an obscure port, I hit an administrative interface that was never supposed to be exposed, sitting there without any apparent protection.
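That sweep doesn't need anything fancy. A plain TCP connect check against a short list of common service ports is enough for a first pass; this is a sketch under my own assumptions (the port list is illustrative, and you should only point it at infrastructure you're authorized to test):

```python
import socket
from concurrent.futures import ThreadPoolExecutor

# Ports worth checking on hosts that phoned home to the
# Collaborator endpoint (illustrative, not exhaustive).
CANDIDATE_PORTS = [80, 443, 3000, 5000, 8000, 8080, 8443, 8888, 9090]

def is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def sweep(host: str) -> list[int]:
    """Check each candidate port concurrently; return the open ones."""
    with ThreadPoolExecutor(max_workers=10) as pool:
        results = pool.map(lambda p: (p, is_open(host, p)), CANDIDATE_PORTS)
    return [port for port, open_ in results if open_]

# Usage (authorized targets only):
#   print(sweep("203.0.113.10"))
```

Any port that answers then gets a manual look in the browser or curl, which is exactly how the admin panel surfaced.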
I could have stopped at a textbook SSRF: file the report, collect the bounty, move on. But that's not how this kind of finding works. You don't stop when the vulnerability confirms itself; you stop when you've mapped the blast radius. So I kept digging, following internal services through user-controlled URLs until I had mapped the real damage.
The Admin Panel
Finding an admin panel isn't automatically a vulnerability. Sometimes they're properly secured with strong authentication, IP restrictions, and monitoring. I've found plenty that were effectively impenetrable despite being technically exposed.
This was not one of those cases.
The interface loaded without asking for credentials. No login prompt, no MFA challenge, no "unauthorized" response. Just a fully functional administrative dashboard with access to internal tools and user management features.
My immediate reaction wasn't excitement; it was concern. Admin panels without authentication are rarely isolated oversights. They're usually symptoms of deeper architectural problems, and this one was no exception.
As I explored the available features, one tool immediately stood out from the rest.
Test Push Notifications
The interface was straightforward: a text field labeled "User ID" and a submit button. The description indicated it was for testing push notification delivery to specific users, a common debugging tool for development teams.
Out of pure curiosity, I entered a user identifier I had observed during my earlier testing. I wasn't expecting much. Maybe a confirmation message, perhaps an error if the ID format was wrong.
What came back was far more than I bargained for.

The Data Exposure
The API response didn't just confirm that the push notification was queued or delivered. Instead, it returned a comprehensive data structure containing detailed information about the user I had queried.
And when I say comprehensive, I mean everything:
- Email addresses (primary and secondary)
- Usernames and display names
- Linked social media accounts with OAuth tokens
- Device connections and push notification tokens
- Wallet-related information and transaction history
- Internal metadata including account creation timestamps, security flags, and activity logs
This wasn't limited data for debugging purposes. This was complete user profiling information, the kind of data that privacy regulations exist to protect.
At this moment, the severity of this finding crystallized. What had started as a curiosity about an image upload had escalated into a critical PII disclosure vulnerability affecting what appeared to be the entire user base of the platform.
I sat back from my screen, realizing I had stumbled into something that could have devastating consequences if discovered by malicious actors.
Understanding the Impact
To properly communicate the severity of this issue in my report, I needed to understand exactly how these vulnerabilities connected and what an attacker could realistically accomplish.
The attack chain wasn't a single vulnerability; it was a cascade of security failures, each one enabling the next:
1. Server-Side Interaction with User Input: The application trusted user-controlled URLs without adequate validation or restriction. This allowed external resource fetching to be weaponized for internal reconnaissance.
2. Internal Infrastructure Exposure: The backend systems responsible for processing these URLs revealed their existence through outbound requests, effectively mapping the internal network architecture for anyone paying attention.
3. Unprotected Administrative Interface: The admin panel was accessible without authentication, network segmentation, or apparent access controls. It was treated as "hidden" rather than "secured": security through obscurity at its most dangerous.
4. Sensitive Data Disclosure: Critical user information was exposed through internal features that were never meant to face the public internet, creating a direct path from anonymous access to comprehensive data exfiltration.
Combined, this created a serious and exploitable security risk that went far beyond any individual component.
An attacker with malicious intent could:
- Harvest sensitive personal data from any user on the platform
- Map internal infrastructure for additional attack vectors
- Abuse admin functionality to modify accounts or disrupt services
- Launch highly targeted attacks using the detailed profiling information exposed
The privacy implications were staggering. The regulatory implications, depending on jurisdiction, could be catastrophic for the organization.
The security team responded within hours and had the issue remediated the same day.

Why This Matters
Findings like this are particularly dangerous because they illustrate how security is only as strong as the weakest link in a complex chain.
Each individual issue might have been dismissed in isolation during development or security reviews:
- "It's just an image fetch, what's the worst that could happen?"
- "Those internal IPs aren't publicly listed, so they're safe"
- "No one will find the admin panel if we don't link to it"
- "The data is for internal debugging, it's not a real exposure"
These rationalizations are understandable when teams are under pressure to ship features. But together, they formed a critical vulnerability that could compromise the entire platform.
This is the reality of modern application security: small oversights compound exponentially.
Lessons Learned
After completing the disclosure process and seeing the issues remediated, I spent time reflecting on what this finding taught me about both offensive and defensive security.
Never Trust User-Controlled URLs
Any feature that fetches external resources based on user input must implement strict validation and sandboxing. Whitelist allowed protocols and domains. Disable or carefully validate redirects. Consider using separate, restricted infrastructure for outbound requests. The cost of getting this wrong is simply too high.
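As a concrete illustration of that validation, here's a minimal sketch of a pre-fetch check. The allowlisted host is hypothetical, and a production implementation would also need to handle redirects and pin the resolved IP at fetch time to defeat DNS rebinding:

```python
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"https"}
ALLOWED_HOSTS = {"cdn.example.com"}  # hypothetical allowlist

def is_safe_fetch_url(url: str) -> bool:
    """Reject URLs outside the scheme/host allowlist, and URLs whose
    host resolves to private, loopback, link-local, or reserved space."""
    parsed = urlparse(url)
    if parsed.scheme not in ALLOWED_SCHEMES:
        return False
    host = parsed.hostname
    if host is None or host not in ALLOWED_HOSTS:
        return False
    # Check every resolved address: even an allowlisted name can be
    # pointed at an internal IP by a hostile or compromised DNS zone.
    try:
        infos = socket.getaddrinfo(host, parsed.port or 443)
    except socket.gaierror:
        return False
    for info in infos:
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True

print(is_safe_fetch_url("https://attacker.example/steal"))  # False: host not allowlisted
```

The key design choice is deny-by-default: everything is rejected unless it matches a known-good scheme, a known-good host, and a public IP.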
Protect Internal Interfaces
Administrative panels should never rely on obscurity for protection. They require defense in depth: strong authentication with MFA, IP allowlisting where feasible, network segmentation from public-facing applications, and comprehensive logging and monitoring. "Hidden" is not a security control.
Think in Chains
Security assessments that evaluate vulnerabilities in isolation miss the bigger picture. The real impact often comes from chaining multiple seemingly minor issues together. When reviewing code or architecture, always ask: "What could an attacker do if they combined this finding with two or three other issues?"
Monitor Backend Behavior
Those unexpected outbound requests hitting my Collaborator endpoint were the canary in the coal mine. Organizations should log and analyze server-side request patterns. Anomalies such as requests to unexpected domains, unusual traffic volumes, and requests originating from unexpected services often indicate active exploitation attempts before they escalate.
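Even a crude first pass helps here. This sketch counts outbound requests to hosts outside an expected set; the host names are hypothetical, and real deployments would feed it destinations extracted from egress proxy or flow logs:

```python
from collections import Counter
from urllib.parse import urlparse

# Destinations the backend is expected to talk to (hypothetical).
EXPECTED_HOSTS = {"cdn.example.com", "api.payments.example.com"}

def flag_anomalies(outbound_urls: list[str]) -> Counter:
    """Count outbound requests to hosts outside the expected set --
    a cheap first-pass signal for SSRF exploitation in flight."""
    unexpected = Counter()
    for url in outbound_urls:
        host = urlparse(url).hostname or "<unparsed>"
        if host not in EXPECTED_HOSTS:
            unexpected[host] += 1
    return unexpected

log = [
    "https://cdn.example.com/avatar.png",
    "http://abc123.oastify.com/probe.png",        # Collaborator-style callback
    "http://169.254.169.254/latest/meta-data/",   # cloud metadata endpoint
]
# Flags the Collaborator-style domain and the metadata IP,
# not the expected CDN fetch.
print(flag_anomalies(log))
```

In this incident, an alert on the very first fetch to a never-before-seen domain would have fired the moment my Collaborator payload was processed.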
Final Thoughts
This finding started with something as routine as uploading a profile picture. It ended with unauthenticated access to sensitive user information through an exposed administrative interface.
That's the nature of bug bounty hunting, and honestly, it's what keeps me engaged in this work year after year.
You follow small clues that others might overlook. You question assumptions about how systems are supposed to work. And sometimes, just sometimes, a simple request leads you much deeper than anyone expected, revealing vulnerabilities that could have remained hidden and exploitable for months or years.
The best security researchers don't just find individual bugs. They find connections between components, patterns in architecture, and chains of exploitation that turn minor issues into major findings.
Stay curious. Stay skeptical. And never underestimate where a single intercepted request might lead.
Found this writeup helpful? I'm always open to discussing bug bounty methodologies, security research techniques, or just trading stories about the weirdest vulnerabilities we've encountered. Feel free to reach out.
Connect with me: linkedin.com/in/sagar-dhoot
#bug-bounty #cybersecurity #infosec #ssrf #ethical-hacking