Introduction
This is a writeup of a vulnerability I discovered on Screenly — a digital signage platform used by businesses to manage content on physical displays. The findings were responsibly disclosed through their bug bounty program.
TL;DR: Screenly's public API (v4.1) internally routes all requests to a legacy backend (x3) that returns sensitive fields — including a plaintext PIN used for hardware screen pairing — that were deliberately omitted from the modern API surface.
Background — What is Screenly?
Screenly lets companies manage digital signage screens remotely. Think TVs in airports, retail stores, or corporate lobbies displaying dynamic content. Each account gets a team subdomain ({team}.screenlyapp.com) and a dashboard to manage screens, playlists, and assets.
The interesting thing from a security perspective: Screenly runs physical hardware. Raspberry Pi devices authenticate to your account using a pairing PIN. That PIN is the bridge between the web dashboard and the physical world.
Recon — It Started With a ULID
I was proxying traffic through Burp Suite while browsing the dashboard and noticed this in the network tab:
GET /api/v4.1/teams HTTP/2
Host: [REDACTED]-team.screenlyapp.com

The response contained an ID field:
"01XXXXXXXXXXXXXXXXXXXXXXXXXXXXX"26 characters, alphanumeric. Not a UUID. I recognized this immediately as a ULID — Universally Unique Lexicographically Sortable Identifier. Unlike UUIDs, ULIDs embed a millisecond-precision timestamp in their first 10 characters encoded in Crockford Base32.
I decoded the timestamp:
# Decode the embedded timestamp from the first 10 characters (Crockford Base32)
ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"
ulid = "[REDACTED_ULID]"
ts_chars = ulid[:10]
ts = sum(ALPHABET.index(c) * (32 ** (9 - i)) for i, c in enumerate(ts_chars))
print(ts / 1000)  # Unix timestamp in seconds

Output: a future timestamp — interesting, but not directly exploitable. What mattered was understanding the ID format for later enumeration attempts.
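The decode logic above can be sanity-checked by round-tripping: encode a known millisecond timestamp into the 10-character Crockford Base32 prefix, then decode it back. A minimal sketch — the helper names are mine, not from any ULID library:

```python
# Round-trip check for the ULID timestamp prefix (Crockford Base32).
# encode_ts/decode_ts are illustrative helpers, not a full ULID implementation.
ALPHABET = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"

def encode_ts(ms: int) -> str:
    """Encode a millisecond timestamp into the 10-char ULID prefix."""
    chars = []
    for _ in range(10):
        chars.append(ALPHABET[ms % 32])
        ms //= 32
    return "".join(reversed(chars))

def decode_ts(prefix: str) -> int:
    """Decode the 10-char prefix back into milliseconds."""
    ms = 0
    for c in prefix:
        ms = ms * 32 + ALPHABET.index(c)
    return ms

ms = 1700000000000  # arbitrary example timestamp
prefix = encode_ts(ms)
assert decode_ts(prefix) == ms  # round-trip holds
print(prefix, decode_ts(prefix) / 1000)
```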
The Discovery — PostgREST Fingerprinting
Looking at another request I'd captured:
GET /api/v4.1/screens?select=id%2Ctype%2Cstatus%3Ascreen_statuses%28status%29&type=in.%28%22hardware%22%2C%22anywhere%22%29

Decoded:

select=id,type,status:screen_statuses(status)
type=in.("hardware","anywhere")

That select syntax with the colon notation, plus the in.(…) filter operator — that's PostgREST. PostgREST is an open-source tool that auto-generates a REST API directly from a PostgreSQL database schema. It exposes foreign-key relationships, supports server-side filtering, and uses its own query-operator syntax (eq., gt., in.(), etc.).
This is significant because PostgREST APIs are notorious for:
- Returning more columns than intended if select=* isn't blocked
- Leaking database schema through verbose error messages
- Lacking field-level access control if not explicitly configured
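The fingerprint itself can be automated. A rough heuristic sketch — the operator prefixes come from PostgREST's documented filter syntax, but the function and its thresholds are my own:

```python
from urllib.parse import parse_qsl

# Filter operators from PostgREST's horizontal-filtering syntax
PGRST_OPS = ("eq.", "neq.", "gt.", "gte.", "lt.", "lte.", "like.", "ilike.",
             "in.(", "is.", "not.")

def looks_like_postgrest(query_string: str) -> bool:
    """Heuristic: True if any param uses a PostgREST-style operator, or a
    `select` param uses column lists / embedded resources."""
    for key, value in parse_qsl(query_string):
        if key == "select" and ("," in value or ":" in value or "(" in value):
            return True
        if any(value.startswith(op) for op in PGRST_OPS):
            return True
    return False

print(looks_like_postgrest('type=in.("hardware","anywhere")'))  # True
print(looks_like_postgrest("page=2&sort=name"))                 # False
```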
I immediately tried the schema-dump endpoint (the PostgREST root, which can serve an OpenAPI description of the whole database schema):

GET /api/v4.1/

No luck — it wasn't exposed. But I kept the PostgREST fingerprint in mind.
The Smoking Gun — Content-Location Header
I sent a broad request to the users endpoint:
GET /api/v4.1/users/ HTTP/2
Host: [REDACTED]-team.screenlyapp.com
Cookie: beaker.session.id=[REDACTED]

The response came back 200 OK with my user data. Normal. But then I looked at the response headers:

Content-Location: /api/x3//users

x3. Not v4.1. The server was revealing that internally, my v4.1 request was being handled by a completely different backend version — x3. And the double slash (x3//users) indicated sloppy path concatenation, suggesting this routing wasn't intentional.
I tried hitting x3 directly:
GET /api/x3/users HTTP/2

Blocked. But the traversal worked in some cases:

GET /api/v4.1/teams//x3/users HTTP/2

The server was normalizing paths inconsistently, allowing the internal version routing to be forced via double-slash injection in the version prefix.
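How does a gateway end up here? The real routing code is unknown, but the doubled slash in the Content-Location header is consistent with naive prefix surgery, and inconsistent normalization means two components can disagree about the "same" raw path. A hypothetical reconstruction:

```python
import posixpath

# Hypothetical reconstruction — not Screenly's actual code. It shows how
# sloppy prefix-strip-and-concat yields the '//' seen in Content-Location.
def naive_route(public_path: str) -> str:
    remainder = public_path[len("/api/v4.1"):]  # keeps the leading slash
    return "/api/x3/" + remainder               # sloppy concat -> '//'

print(naive_route("/api/v4.1/users"))  # /api/x3//users  (matches the header)

# Inconsistent normalization: a component that normalizes first sees one
# path; a component that splits the raw string sees an empty segment.
raw = "/api/v4.1/teams//x3/users"
print(posixpath.normpath(raw))  # /api/v4.1/teams/x3/users
print(raw.split("/"))           # note the empty segment at the '//'
```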
The Finding — Plaintext PIN in API Response
Here's where it gets real. The x3 backend was returning fields that v4.1 deliberately omits. Specifically, when I hit:
GET /api/v4.1/users/[REDACTED_USER_ID] HTTP/2
Host: [REDACTED]-team.screenlyapp.com
Cookie: beaker.session.id=[REDACTED]

Response:
[{
"id": "[REDACTED_USER_ID]",
"auth_server_id": [REDACTED],
"pin": "[REDACTED]",
"email": "[REDACTED]@[REDACTED].com",
"first_name": "[REDACTED]",
"last_name": "[REDACTED]"
}]

The pin field. In plaintext. In a standard authenticated API response.
This PIN is the hardware screen pairing credential. It's what a Raspberry Pi uses to authenticate itself to your Screenly account. It's functionally a password for your physical infrastructure.
The developers knew this field was sensitive — it is completely absent from the intended v4.1 API responses. The x3 routing bypass was silently undermining that design decision.
Why HTTPS Doesn't Save You Here
The obvious question: "But it's over HTTPS, so it's encrypted in transit — why does it matter?"
HTTPS encrypts the channel between client and server. It does nothing about what happens after the data arrives:
1. Application Logging Every response can be captured by server-side logging infrastructure. The plaintext PIN now lives in access logs, potentially indefinitely.
2. Third-Party Monitoring (Sentry) Every single request in my Burp capture included Sentry trace headers:
Sentry-Trace: [REDACTED]
Baggage: sentry-environment=production,sentry-public_key=[REDACTED],...

Sentry is an error-monitoring tool that can capture full request/response payloads. If it's logging API responses, PINs are sitting in a third-party cloud service.
3. Design Intent Violation The developers explicitly built v4.1 to not return this field. That decision exists for a reason. The routing bug bypasses it entirely.
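The logging half of this risk has a standard mitigation: scrub sensitive fields before any payload reaches logs or an error tracker. A minimal sketch — the key list and function are mine; in practice something like this would plug into a logging filter or an SDK hook such as Sentry's before_send:

```python
# Minimal recursive scrubber — the key names below are illustrative.
SENSITIVE_KEYS = {"pin", "password", "token", "secret", "authorization"}

def scrub(payload):
    """Replace sensitive values with a placeholder before logging."""
    if isinstance(payload, dict):
        return {k: "[REDACTED]" if k.lower() in SENSITIVE_KEYS else scrub(v)
                for k, v in payload.items()}
    if isinstance(payload, list):
        return [scrub(item) for item in payload]
    return payload

user = [{"id": "01ABC", "pin": "123456", "email": "a@b.com"}]
print(scrub(user))  # pin becomes "[REDACTED]"; other fields pass through
```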
IDOR Testing — Full Cross-Account Attempt
I created a second test account to test for cross-account data access. Here's the full test matrix I ran:
Test 1 — Filter by victim's auth_server_id:
GET /api/v4.1/users?auth_server_id=eq.[REDACTED] HTTP/2
Cookie: beaker.session.id=[ATTACKER_SESSION_REDACTED]

Result: [] — RLS blocked it. PostgreSQL Row-Level Security scoped the query to the authenticated session.
Test 2 — Direct object access by ULID:
GET /api/v4.1/users/[REDACTED_VICTIM_ID] HTTP/2
Cookie: beaker.session.id=[ATTACKER_SESSION_REDACTED]

Result: [] — same RLS enforcement.
Test 3 — Team ID filter:
GET /api/v4.1/teams?id=eq.[REDACTED_VICTIM_TEAM_ID]
Cookie: beaker.session.id=[ATTACKER_SESSION_REDACTED]

Result: returned the attacker's own team data — the ID filter was ignored; the session scope trumped it.
Conclusion: IDOR was not confirmed. The data layer's RLS is correctly scoped. The vulnerability is at the presentation layer — field exposure within your own authenticated context, not cross-account.
Impact Assessment
Without IDOR, here's what the finding actually enables:
Immediate:
- Any attacker who compromises a user's session (via XSS, session fixation, credential stuffing) automatically obtains the hardware PIN without needing any additional privilege escalation step
- The PIN is handed out in the API response rather than requiring a separate privileged action
Systemic:
- auth_server_id is a sequential integer. If RLS were ever misconfigured on a single endpoint, full PIN enumeration across all users becomes trivial
- The PIN is logged in plaintext by third-party services already integrated in the stack
Architectural:
- The v4.1 → x3 routing misconfiguration affects every endpoint on the platform, not just /users. The PIN was the worst exposed field but the underlying architecture issue is broader
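To make the "trivial enumeration" point concrete: with a sequential integer key, the candidate space is just a range — there is no entropy to guess. A sketch of the request generator an attacker would need (URL shape and parameter taken from the requests above; the hostname is a placeholder, and no network calls are made here):

```python
# Candidate-request generator for a sequential integer key. This is why
# sequential IDs make enumeration trivial *if* RLS were ever misconfigured.
def enumeration_urls(base: str, start: int, count: int):
    """Yield one filtered /users query per candidate auth_server_id."""
    for auth_server_id in range(start, start + count):
        yield f"{base}/api/v4.1/users?auth_server_id=eq.{auth_server_id}"

# example-team.screenlyapp.com is a placeholder hostname
urls = list(enumeration_urls("https://example-team.screenlyapp.com", 1, 3))
for u in urls:
    print(u)
```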
Additional Finding — Admin UUIDs via Public Endpoint
While testing, I found a separate issue on the /edge-apps endpoint:
GET /api/v4.1/edge-apps?is_public=eq.true&select=name,team_id,created_by HTTP/2

No authentication required. Response:
[{
"name": "Clock",
"team_id": "[REDACTED_ADMIN_TEAM_ID]",
"created_by": "[REDACTED]"
}]

The team_id belongs to Screenly's own internal admin team — confirmed by the asset names. The select= parameter was not column-whitelisted, allowing callers to request internal identifier fields on public resources without authentication.
This is a standalone finding — no auth, no traversal, just an unrestricted PostgREST select parameter on a public endpoint returning admin-level identifiers.
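Testing whether a public PostgREST endpoint whitelists columns is mechanical: escalate the select= list from benign to internal fields and watch where the responses stop. A probe generator sketch — the column names are the ones observed above, the base URL is a placeholder, and no requests are sent here:

```python
from urllib.parse import urlencode

# Escalating select= probes, from benign to increasingly internal fields.
PROBES = ["name", "name,team_id", "name,team_id,created_by", "*"]

def build_probes(base: str, resource: str):
    """Yield one probe URL per select= variant for a public resource."""
    for select in PROBES:
        qs = urlencode({"is_public": "eq.true", "select": select})
        yield f"{base}/{resource}?{qs}"

# api.example.com is a placeholder base URL
for url in build_probes("https://api.example.com/api/v4.1", "edge-apps"):
    print(url)
```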
Timeline
- April 22, 2026 — Initial discovery: ULID in network tab
- April 22, 2026 — PIN exposure confirmed; Report 1 sent
- April 23, 2026 — Admin UUID + schema disclosure found; Report 2 sent
- April 23, 2026 — Full IDOR test matrix completed; IDOR not confirmed
- Pending — Awaiting triage response / fix confirmation
Key Takeaways
For hunters:
- PostgREST fingerprinting is underrated. The query syntax is distinctive and the attack surface is well-documented. If you see select=, eq., in.() in API params — you're looking at PostgREST.
- Always check Content-Location and Content-Type response headers. Internal routing destinations leak from there.
- ULIDs embed timestamps. Decode them. Future timestamps, creation time inference, and sequential patterns all have recon value.
- RLS in PostgreSQL is solid when configured correctly. Don't assume IDOR just because IDs are sequential — test it and document the result either way.
For developers:
- Never expose raw database error messages to API consumers
- Don't expose base tables directly through PostgREST — expose views in a dedicated API schema containing only the columns clients are meant to see, so select= can't reach internal fields
- Strip internal routing headers (Content-Location) before responses reach clients
- Treat the PIN (or any hardware credential) like a password — never return it after initial provisioning
Responsible Disclosure
All findings were reported to Screenly's security team at security-advisory@screenly.io with full technical documentation, CVSS vectors, and reproduction steps. Testing was conducted exclusively on my own accounts. No other users' data was accessed. This writeup was published after coordinated disclosure with the vendor.
Written by Ansh Bohra (xenos) — bug bounty hunter and cybersecurity student. Found something interesting? Connect with me.