Have you ever tried uploading a large file to AWS S3 and wanted to track the upload progress reliably? Let me share my experience and what I learned.
The Challenge: Showing Progress to Users
When building file upload functionality, a key requirement is a loader that accurately shows the progress of the upload.
For small files, the solution seems straightforward: Axios provides onUploadProgress, which gives a ProgressEvent containing:
- loaded – the number of bytes uploaded so far
- total – the total number of bytes to upload
```typescript
import axios from "axios";

export async function uploadToS3(
  uploadUrl: string,
  file: File,
  onProgress: (p: number) => void,
  signal?: AbortSignal,
) {
  await axios.put(uploadUrl, file, {
    headers: { "Content-Type": file.type },
    signal,
    onUploadProgress: (progressEvent) => {
      // total can be undefined, so fall back to the file size we already know
      const total = progressEvent.total ?? file.size;
      const percent = Math.round((progressEvent.loaded * 100) / total);
      onProgress(percent);
    },
  });
}
```
From these, you can calculate the percentage uploaded and display it to the user.
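For context, here's a minimal sketch of wiring this helper to a file input. The generateUploadUrl call is an assumption standing in for whatever backend endpoint returns the pre-signed PUT URL (the same helper is called later in this post), and the progress callback is a placeholder for your UI state:
```typescript
// Hypothetical wiring: upload the selected file and report progress.
const input = document.querySelector<HTMLInputElement>("#file-input");

input?.addEventListener("change", async () => {
  const file = input.files?.[0];
  if (!file) return;

  // Assumption: the backend returns a pre-signed PUT URL for this file.
  const { uploadUrl } = await generateUploadUrl(file.name, "file", file.type);

  await uploadToS3(uploadUrl, file, (percent) => {
    console.log(`Uploaded ${percent}%`); // replace with your progress UI
  });
});
```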
✅ This works great for small files.
When Small File Solutions Fail
The problem appeared when we started uploading larger files. Suddenly:
- The progress either lags behind
- Or jumps to 100% all at once at the end
Why? Because the browser sends the file in one giant request, and the progress events largely reflect bytes handed off to the network buffers rather than bytes actually received by S3. For large files, onUploadProgress becomes unreliable.
The Solution: Multipart Upload (Chunking)
For large files, the answer is Multipart Upload:
- Split the file into chunks on the frontend
- Upload each chunk separately
- Track progress per chunk
Example:
- Video size: 500MB
- Chunk size: 20MB
- Total requests: 25
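As a rough sketch (not the exact project code), the browser's Blob.slice makes this split cheap, since it creates byte-range views of the file rather than copies:
```typescript
// Sketch: split a File into fixed-size chunks with Blob.slice.
const CHUNK_SIZE = 20 * 1024 * 1024; // the 20MB from the example above

function splitIntoChunks(file: File, chunkSize = CHUNK_SIZE): Blob[] {
  const chunks: Blob[] = [];
  for (let start = 0; start < file.size; start += chunkSize) {
    chunks.push(file.slice(start, start + chunkSize));
  }
  return chunks;
}

// A 500MB file split into 20MB chunks yields 25 entries → 25 upload requests.
```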
Some may worry about too many requests. Don't worry: these requests are short-lived, limited in size, and uploaded with controlled concurrency.
Benefits:
- Progress is predictable
- Individual chunks can retry on failure
- Often faster and more stable than a single large upload
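On the retry point, here's a tiny illustrative wrapper (not the project's code) showing how a single failed chunk upload could be retried with a short backoff instead of restarting the whole file:
```typescript
// Illustrative helper: retry an async operation a few times with linear backoff.
async function withRetry<T>(fn: () => Promise<T>, attempts = 3): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      await new Promise((resolve) => setTimeout(resolve, attempt * 1000));
    }
  }
  throw lastError;
}

// Usage sketch: withRetry(() => axios.put(part.uploadUrl, chunk))
```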
Choosing the Right Chunk Size
AWS S3 requires a minimum chunk size of 5MB for multipart uploads (only the final chunk may be smaller).
We define:
- minChunkSize = 5MB
- desiredChunkSize = 20MB
We also check the file size to decide:
- Threshold = 20MB
- Files ≤ 20MB → Upload normally (no multipart)
- Files > 20MB → Split into chunks and upload with multipart
```typescript
const MIN_CHUNK_SIZE = 5 * 1024 * 1024;      // 5MB – S3's minimum chunk size
const DESIRED_CHUNK_SIZE = 20 * 1024 * 1024; // 20MB – our target chunk size
const fileSize = data.file.size;
```
Why 20MB? There's no fixed number where uploads "break." Other factors affect reliability, like network speed or connection stability.
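To make the decision concrete, here's a simplified sketch of the branching (the helper name is mine; the real code later in the post folds this logic into the upload function):
```typescript
// Simplified sketch: decide between a single PUT and a multipart upload.
function planUpload(fileSize: number) {
  if (fileSize <= DESIRED_CHUNK_SIZE) {
    return { multipart: false, partCount: 1 };
  }
  const partCount = Math.ceil(fileSize / DESIRED_CHUNK_SIZE);
  return { multipart: true, chunkSize: DESIRED_CHUNK_SIZE, partCount };
}

// planUpload(500 * 1024 * 1024) → { multipart: true, chunkSize: 20MB, partCount: 25 }
```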
Backend Changes for Multipart Upload
Previously, the backend returned a single S3 URL:
- You'd send the file there, done.
Now, the backend returns:
- Pre-signed URLs – an array, one per chunk, each tied to a partNumber
- uploadId
- key
```json
{
  "uploadId": "UPLOAD_ID",
  "key": "/FILE_NAME",
  "parts": [
    {
      "partNumber": 1,
      "uploadUrl": "https://my-bucket.s3.amazonaws.com/KEY_PLACEHOLDER"
    }
  ],
  "expiresAt": "EXPIRATION_DATE_PLACEHOLDER"
}
```
Every chunk is uploaded to its corresponding URL. After each chunk is uploaded, S3 returns an ETag, a fingerprint proving that the chunk arrived intact.
ETag + partNumber are required for
CompleteMultipartUpload, which merges all chunks into a single file on S3. Keeping track of the ETags ensures the correct order and integrity of the upload.
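For reference, here's a hedged sketch of how a backend could generate that response (uploadId, key, and one pre-signed URL per part) with AWS SDK v3. The bucket, region, expiry, and function signature are placeholders, not the project's exact API:
```typescript
import {
  S3Client,
  CreateMultipartUploadCommand,
  UploadPartCommand,
} from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

// Sketch: start a multipart upload and pre-sign one URL per part.
export async function createMultipartUploadUrls(
  bucket: string,
  key: string,
  contentType: string,
  partCount: number,
) {
  const { UploadId } = await s3.send(
    new CreateMultipartUploadCommand({
      Bucket: bucket,
      Key: key,
      ContentType: contentType,
    }),
  );

  const parts = await Promise.all(
    Array.from({ length: partCount }, async (_, i) => ({
      partNumber: i + 1,
      uploadUrl: await getSignedUrl(
        s3,
        new UploadPartCommand({
          Bucket: bucket,
          Key: key,
          UploadId,
          PartNumber: i + 1,
        }),
        { expiresIn: 3600 }, // placeholder: 1 hour
      ),
    })),
  );

  return { uploadId: UploadId, key, parts };
}
```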
Progress Tracking: Small vs Large Files
- Small files: browser events (onUploadProgress) handle progress at the byte level
- Large files: progress is based on the number of uploaded chunks relative to the total number of chunks
In code, the two paths look like this:
```typescript
const MIN_CHUNK_SIZE = 5 * 1024 * 1024;      // 5MB – S3's minimum chunk size
const DESIRED_CHUNK_SIZE = 20 * 1024 * 1024; // 20MB – target chunk size and multipart threshold
const fileSize = data.file.size;

if (fileSize <= DESIRED_CHUNK_SIZE) {
  // Small file: one pre-signed URL, one PUT, byte-level progress events.
  const { uploadUrl, fileUrl } = await generateUploadUrl(
    data.file.name,
    "file",
    data.file.type
  );
  await uploadToS3(
    uploadUrl,
    data.file,
    options?.onProgress ?? (() => {}),
    options?.signal
  );
  fUrl = fileUrl;
} else {
  // Large file: split into chunks, aiming for DESIRED_CHUNK_SIZE (20MB)
  // while respecting S3's 5MB minimum chunk size.
  const maxChunks = Math.floor(fileSize / MIN_CHUNK_SIZE) || 1;
  const chunkSize = Math.max(
    DESIRED_CHUNK_SIZE,
    Math.ceil(fileSize / maxChunks)
  );
  const partCount = Math.ceil(fileSize / chunkSize);

  // Ask the backend for one pre-signed URL per part.
  const { uploadId, parts, key } = await generateMultipartUploadUrls(
    data.file.name,
    data.file.type,
    partCount
  );

  // Upload the chunks and collect the ETag for each part.
  const uploadedParts = await uploadChunksToS3(
    parts,
    data.file,
    options?.onProgress ?? (() => {}),
    options?.signal,
    chunkSize
  );

  // Tell the backend to assemble the parts into a single file.
  fUrl = await completeMultipartUpload(uploadId, key, uploadedParts);
}
```
Benefits of chunk-based tracking:
- Progress is stable
- Better UX
- No sudden jumps or delays
- Individual chunk failures can retry without restarting the whole file upload
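For completeness, here's a hedged sketch of what the uploadChunksToS3 helper called above might look like. It uploads parts sequentially for clarity; the real version also handles retries and limited concurrency, and it assumes the bucket's CORS configuration exposes the ETag response header:
```typescript
import axios from "axios";

interface PresignedPart {
  partNumber: number;
  uploadUrl: string;
}

// Sketch: upload each chunk to its pre-signed URL, collect the ETag S3 returns,
// and report progress as completed parts / total parts.
export async function uploadChunksToS3(
  parts: PresignedPart[],
  file: File,
  onProgress: (percent: number) => void,
  signal: AbortSignal | undefined,
  chunkSize: number,
) {
  const uploadedParts: { PartNumber: number; ETag: string }[] = [];

  for (const part of parts) {
    const start = (part.partNumber - 1) * chunkSize;
    const chunk = file.slice(start, start + chunkSize);

    const response = await axios.put(part.uploadUrl, chunk, { signal });

    // S3 returns the part's ETag as a response header.
    uploadedParts.push({
      PartNumber: part.partNumber,
      ETag: response.headers.etag as string,
    });

    onProgress(Math.round((uploadedParts.length * 100) / parts.length));
  }

  return uploadedParts;
}
```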
Final Step: Confirmation
Once all chunks are uploaded:
- Send a final request to the backend with uploadId, key, and the collected ETags
- The backend calls CompleteMultipartUpload → the file is assembled in S3 as a single file.
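And a hedged sketch of that final backend step with AWS SDK v3 (again, placeholder names and region, not the project's exact code):
```typescript
import { S3Client, CompleteMultipartUploadCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

// Sketch: merge the uploaded parts into a single S3 object.
export async function completeUpload(
  bucket: string,
  key: string,
  uploadId: string,
  parts: { PartNumber: number; ETag: string }[],
) {
  const result = await s3.send(
    new CompleteMultipartUploadCommand({
      Bucket: bucket,
      Key: key,
      UploadId: uploadId,
      // Parts must be listed in ascending PartNumber order.
      MultipartUpload: {
        Parts: [...parts].sort((a, b) => a.PartNumber - b.PartNumber),
      },
    }),
  );
  return result.Location; // URL of the assembled object
}
```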
This approach makes uploading large files predictable, stable, and user-friendly.
Conclusion
Switching to chunked uploads with progress tracking dramatically improves the experience for large file uploads. It's a small change in architecture but makes a big difference in reliability and UX.