Stop hardcoding AWS keys. Here's the right way to upload files from your Android app using Pre-Signed URLs, coroutines, and retry logic.
The Problem
You're building an Android app. Users need to upload images, documents, or videos to your S3 bucket. The naive solution? Embed your AWS Access Key and Secret directly in the app.
That's a security disaster waiting to happen.
APKs can be decompiled. Hardcoded credentials get leaked. Once someone has your AWS keys, they have access to your entire bucket, or worse, your entire AWS account.
There's a better, production-grade pattern. It uses your own backend as a gatekeeper to issue pre-signed URLs: temporary, scoped upload links that expire after a short window. The client uploads directly to S3 using that URL, and your backend stays in control throughout.
The Solution: Pre-Signed URL Flow
The architecture is a clean three-step handshake between three APIs.
Android App ──► Backend (Upload API) ──► generates pre-signed URLs
Android App ──► S3 (direct upload) ──► files land in bucket
Android App ──► Backend (Validate API) ──► confirms and records uploads

This pattern gives you:
- Zero AWS credentials on the client
- Backend controls who can upload what
- S3 receives files directly, reducing backend load
- Pre-signed URLs expire (typically 5 to 15 minutes)
The Approach
API 1: Request Pre-Signed URLs from Your Backend
Before touching S3, your app asks your own backend for permission. For each file the user selected, you send the file's name and MIME type. The backend validates the request, generates a unique pre-signed S3 URL per file, and returns it.
Why send name and type? The backend can reject disallowed file types, enforce naming conventions, and ensure S3 receives the correct Content-Type during upload.
Pseudo-code:
POST /api/upload/presigned-urls
Body: [
  { fileName: "invoice.pdf", fileType: "application/pdf" },
  { fileName: "profile.jpg", fileType: "image/jpeg" }
]
Response: [
  { fileKey: "uploads/uuid-invoice.pdf", uploadUrl: "https://s3.amazonaws.com/...&Expires=600" },
  { fileKey: "uploads/uuid-profile.jpg", uploadUrl: "https://s3.amazonaws.com/...&Expires=600" }
]

Save the fileKey values. You will need them in API 3.
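The request and response above map naturally onto Kotlin data classes. A minimal sketch, assuming the field names shown; the client-side type check is optional and the allowed set is an assumption (the backend enforces the real policy):

```kotlin
data class FileUploadRequest(val fileName: String, val fileType: String)
data class PresignedUrlResponse(val fileKey: String, val uploadUrl: String)

// Builds the API 1 request body, rejecting disallowed MIME types before
// any network call. Mirror your backend's actual allow-list here.
fun buildPresignRequests(
    files: List<Pair<String, String>>, // (fileName, mimeType)
    allowedTypes: Set<String>,
): List<FileUploadRequest> = files.map { (name, type) ->
    require(type in allowedTypes) { "Disallowed file type: $type" }
    FileUploadRequest(fileName = name, fileType = type)
}
```

Failing fast on the client saves a round trip, but it is a convenience, not a security boundary.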
API 2: Upload Files Directly to S3 (with Retry)
Using the pre-signed URLs from API 1, the app uploads each file directly to S3 via a PUT request. No AWS SDK needed. No auth headers. The signature embedded in the URL's query string is the authorization.
For multiple files, upload them concurrently using Kotlin coroutines. Sequential uploads are slow. With async/await, all files upload in parallel and you awaitAll before moving on.
Networks fail. S3 occasionally returns 5xx errors. Each upload should include retry logic with exponential backoff.
Pseudo-code:
suspend fun uploadAllFiles(files: List<Pair<Uri, PresignedUrlResponse>>) {
    coroutineScope {
        val jobs = files.map { (uri, presigned) ->
            async {
                uploadWithRetry(uri, presigned.uploadUrl, resolveFileType(uri), maxRetries = 3)
            }
        }
        jobs.awaitAll()
    }
}
suspend fun uploadWithRetry(
    uri: Uri,
    uploadUrl: String,
    fileType: String,
    maxRetries: Int = 3,
) {
    repeat(maxRetries) { attempt ->
        try {
            val bytes = readBytesFromUri(uri) // via ContentResolver
            val request = Request.Builder()
                .url(uploadUrl)
                .put(bytes.toRequestBody(fileType.toMediaType()))
                .build()
            // httpClient is a shared OkHttpClient; execute() is blocking, so call this from Dispatchers.IO
            httpClient.newCall(request).execute().use { response ->
                if (response.isSuccessful) return // S3 returns 200 with an empty body
            }
        } catch (e: IOException) {
            if (attempt == maxRetries - 1) throw e
        }
        if (attempt < maxRetries - 1) delay(1000L shl attempt) // exponential backoff: 1s, 2s, ...
    }
    error("Upload failed after $maxRetries attempts")
}

Things to get right:
- The Content-Type header must match what the backend told S3. A mismatch causes 403 Forbidden
- Read files via ContentResolver using the file Uri, not a raw path
- S3 returns no response body on success, just HTTP 200
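The ContentResolver point can be sketched as a small helper. This is one possible shape for the readBytesFromUri function the retry code refers to, written here as a Context extension (an assumption; your app may hold the ContentResolver differently):

```kotlin
import android.content.Context
import android.net.Uri

// Reads a file's bytes through ContentResolver, which handles the content://
// URIs returned by the system picker. A raw file path would fail for those.
fun Context.readBytesFromUri(uri: Uri): ByteArray =
    contentResolver.openInputStream(uri)?.use { it.readBytes() }
        ?: error("Cannot open input stream for $uri")
```

Note this loads the whole file into memory; for large videos, streaming the InputStream into the request body is the safer choice.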
API 3: Notify Your Backend of Successful Uploads
Once all files are in S3, tell your backend. It does not know the upload happened unless you tell it. This step lets the backend update its database, trigger downstream processing (thumbnails, virus scanning, etc.), and mark the transaction complete.
You send back the fileKey list from API 1.
Pseudo-code:
POST /api/upload/validate
Body: {
  fileKeys: ["uploads/uuid-invoice.pdf", "uploads/uuid-profile.jpg"],
  uploadedBy: "user_1234",
  context: "invoice-submission"
}
Response: {
  status: "success",
  attachedFiles: 2
}

Full Orchestration
suspend fun handleUserUpload(selectedFiles: List<Uri>) {
    // API 1: Get pre-signed URLs
    val metadata = selectedFiles.map { uri ->
        FileUploadRequest(resolveFileName(uri), resolveFileType(uri))
    }
    val presignedUrls = backendApi.getPresignedUrls(metadata)

    // API 2: Upload concurrently to S3 with retry
    uploadAllFiles(selectedFiles.zip(presignedUrls))

    // API 3: Validate with backend
    backendApi.validateUploads(
        fileKeys = presignedUrls.map { it.fileKey },
        uploadedBy = currentUser.id,
        context = currentUploadContext
    )
}

Error Handling Strategy
- API 1 fails: retry the whole flow. Nothing was uploaded yet.
- API 2 partially fails: retry only the failed files. Track per-file status.
- API 2 fully fails: abort. Do not call API 3.
- API 3 fails: retry API 3 only. The files are already safe in S3.
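For the "retry only the failed files" case, per-file status tracking can be sketched like this, building on the uploadWithRetry function from API 2 (the UploadResult type and function names are assumptions):

```kotlin
sealed class UploadResult {
    data class Success(val fileKey: String) : UploadResult()
    data class Failure(val fileKey: String, val error: Throwable) : UploadResult()
}

// Uploads all files concurrently but never throws for a single failure:
// every file ends as Success or Failure, so the caller can retry only
// the failures instead of re-uploading everything.
suspend fun uploadAllTracked(
    files: List<Pair<Uri, PresignedUrlResponse>>
): List<UploadResult> = coroutineScope {
    files.map { (uri, presigned) ->
        async {
            try {
                uploadWithRetry(uri, presigned.uploadUrl, resolveFileType(uri), maxRetries = 3)
                UploadResult.Success(presigned.fileKey)
            } catch (e: Exception) {
                UploadResult.Failure(presigned.fileKey, e)
            }
        }
    }.awaitAll()
}
```

Call API 3 with only the Success keys, and surface the Failure list to the user for a retry.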
Security Checklist
- Authenticate before issuing URLs. Only authorised users should receive them
- Keep expiry short: 5 to 15 minutes
- Restrict allowed MIME types and file sizes at the backend
- In API 3, verify the fileKey actually exists in S3 before writing to your database
- Scope IAM permissions to s3:PutObject on the specific bucket prefix only
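For completeness, here is roughly what the backend side of API 1 looks like with the AWS SDK for Java v2. A sketch only: the bucket name, key prefix, and 10-minute expiry are assumptions to adapt:

```kotlin
import java.time.Duration
import java.util.UUID
import software.amazon.awssdk.services.s3.model.PutObjectRequest
import software.amazon.awssdk.services.s3.presigner.S3Presigner
import software.amazon.awssdk.services.s3.presigner.model.PutObjectPresignRequest

// Generates a short-lived pre-signed PUT URL. The contentType set here is
// signed into the URL: the client's Content-Type header must match it,
// otherwise S3 rejects the upload with 403.
fun presignUpload(presigner: S3Presigner, fileName: String, contentType: String): Pair<String, String> {
    val fileKey = "uploads/${UUID.randomUUID()}-$fileName"   // assumption: prefix convention
    val putRequest = PutObjectRequest.builder()
        .bucket("my-upload-bucket")                          // assumption: your bucket
        .key(fileKey)
        .contentType(contentType)
        .build()
    val presignRequest = PutObjectPresignRequest.builder()
        .signatureDuration(Duration.ofMinutes(10))           // keep expiry short
        .putObjectRequest(putRequest)
        .build()
    return fileKey to presigner.presignPutObject(presignRequest).url().toString()
}
```

Return the fileKey alongside the URL so the client can echo it back in API 3.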
Summary
The pre-signed URL pattern gives you the performance of direct-to-S3 uploads with the security of a backend gatekeeper. AWS credentials never leave your server. Users get fast parallel uploads. Your backend stays informed at every step.
The three-API flow: request, upload, validate. Clean, retryable, and production-ready.
Have a different approach you've used in production? Drop a comment below.