AWS S3 Storage

Overview

The AWS S3 integration provides secure, scalable cloud storage for digital assets. It supports direct uploads, multipart uploads for large files, presigned URLs for secure access, and automatic thumbnail generation. This document covers the complete S3 setup, configuration, and implementation.

Architecture

Components

  1. S3 Bucket: Primary storage for all digital assets
  2. S3 API Service: Express.js endpoints for S3 operations
  3. Presigned URLs: Temporary secure access to files
  4. Multipart Upload: Chunked uploads for large files
  5. Thumbnail Service: Automatic thumbnail generation

S3 Configuration

Bucket Setup

Bucket Structure:

{workspace_id}/
  digital_assets/
    {asset_id}.{extension}
  thumbnails/
    {asset_id}.jpg
  versions/
    {asset_id}/
      v1.{extension}
      v2.{extension}

CORS Configuration:

json
{
  "CORSRules": [
    {
      "AllowedOrigins": ["*"],
      "AllowedMethods": ["GET", "POST", "PUT", "DELETE", "HEAD"],
      "AllowedHeaders": ["*"],
      "ExposeHeaders": ["ETag", "x-amz-server-side-encryption"],
      "MaxAgeSeconds": 3000
    }
  ]
}

Bucket Policy:

json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowWorkspaceAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::ACCOUNT_ID:user/S3_USER"
      },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::BUCKET_NAME/*"
    }
  ]
}
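
Both settings can also be applied programmatically. The sketch below is a minimal example using the same @aws-sdk/client-s3 package as the service; the script name and environment variables are assumptions, and in production the wildcard AllowedOrigins above is usually narrowed to the application's origin.

javascript
// scripts/configure-bucket.js (hypothetical): applies the CORS rules and bucket policy shown above.
// Assumes AWS credentials, AWS_DEFAULT_REGION and AWS_BUCKET are set in the environment.
const { S3Client, PutBucketCorsCommand, PutBucketPolicyCommand } = require('@aws-sdk/client-s3')

const client = new S3Client({ region: process.env.AWS_DEFAULT_REGION })
const bucket = process.env.AWS_BUCKET

async function configureBucket() {
  // Apply the CORS rules (consider narrowing AllowedOrigins for production)
  await client.send(new PutBucketCorsCommand({
    Bucket: bucket,
    CORSConfiguration: {
      CORSRules: [{
        AllowedOrigins: ['*'],
        AllowedMethods: ['GET', 'POST', 'PUT', 'DELETE', 'HEAD'],
        AllowedHeaders: ['*'],
        ExposeHeaders: ['ETag', 'x-amz-server-side-encryption'],
        MaxAgeSeconds: 3000
      }]
    }
  }))

  // Apply the bucket policy (replace ACCOUNT_ID / S3_USER with real values)
  await client.send(new PutBucketPolicyCommand({
    Bucket: bucket,
    Policy: JSON.stringify({
      Version: '2012-10-17',
      Statement: [{
        Sid: 'AllowWorkspaceAccess',
        Effect: 'Allow',
        Principal: { AWS: 'arn:aws:iam::ACCOUNT_ID:user/S3_USER' },
        Action: ['s3:GetObject', 's3:PutObject', 's3:DeleteObject'],
        Resource: `arn:aws:s3:::${bucket}/*`
      }]
    })
  }))
}

configureBucket().catch(console.error)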

Backend Implementation

S3 Service

javascript
// services/s3.js
const { S3Client, PutObjectCommand, GetObjectCommand, DeleteObjectCommand, CreateMultipartUploadCommand, UploadPartCommand, CompleteMultipartUploadCommand, AbortMultipartUploadCommand, ListPartsCommand } = require('@aws-sdk/client-s3')
const { getSignedUrl } = require('@aws-sdk/s3-request-presigner')

class S3Service {
  constructor() {
    this.client = new S3Client({
      region: process.env.AWS_DEFAULT_REGION,
      credentials: {
        accessKeyId: process.env.AWS_ACCESS_KEY_ID,
        secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY
      }
    })
    this.bucket = process.env.AWS_BUCKET
  }

  getKey(workspaceId, type, assetId, extension = '') {
    return `${workspaceId}/${type}/${assetId}${extension ? '.' + extension : ''}`
  }

  // Creates the multipart upload and returns the uploadId plus the object key; the
  // key (which includes the file extension) must be reused on every later call.
  async startMultipartUpload(workspaceId, assetId, fileType) {
    const key = this.getKey(workspaceId, 'digital_assets', assetId, this.getExtension(fileType))
    
    const command = new CreateMultipartUploadCommand({
      Bucket: this.bucket,
      Key: key,
      ContentType: fileType,
      Metadata: {
        workspace_id: String(workspaceId),
        asset_id: String(assetId)
      }
    })

    const response = await this.client.send(command)
    return {
      uploadId: response.UploadId,
      key: key
    }
  }

  // `key` is the object key returned by startMultipartUpload; reuse it verbatim
  // (including the file extension), otherwise S3 will not find the upload.
  async getUploadPartUrl(key, uploadId, partNumber) {
    const command = new UploadPartCommand({
      Bucket: this.bucket,
      Key: key,
      UploadId: uploadId,
      PartNumber: partNumber
    })

    return getSignedUrl(this.client, command, { expiresIn: 3600 })
  }

  async completeMultipartUpload(key, uploadId, parts) {
    const command = new CompleteMultipartUploadCommand({
      Bucket: this.bucket,
      Key: key,
      UploadId: uploadId,
      MultipartUpload: {
        Parts: parts.map(part => ({
          ETag: part.ETag,
          PartNumber: part.PartNumber
        }))
      }
    })

    const response = await this.client.send(command)
    return {
      location: response.Location,
      key: key,
      etag: response.ETag
    }
  }

  async abortMultipartUpload(key, uploadId) {
    const command = new AbortMultipartUploadCommand({
      Bucket: this.bucket,
      Key: key,
      UploadId: uploadId
    })

    await this.client.send(command)
  }

  // NOTE: stored asset keys include the file extension chosen at upload time; in
  // practice the exact key (or at least the extension) should come from the asset
  // record rather than being re-derived here without one.
  async getPresignedDownloadUrl(workspaceId, assetId, expiresIn = 3600) {
    const key = this.getKey(workspaceId, 'digital_assets', assetId)
    
    const command = new GetObjectCommand({
      Bucket: this.bucket,
      Key: key
    })

    const url = await getSignedUrl(this.client, command, { expiresIn })
    return url
  }

  async uploadThumbnail(workspaceId, assetId, thumbnailBuffer) {
    const key = this.getKey(workspaceId, 'thumbnails', assetId, 'jpg')
    
    const command = new PutObjectCommand({
      Bucket: this.bucket,
      Key: key,
      Body: thumbnailBuffer,
      ContentType: 'image/jpeg',
      CacheControl: 'max-age=31536000'
    })

    await this.client.send(command)
    return key
  }

  // NOTE: same caveat as getPresignedDownloadUrl: the stored asset key includes
  // the file extension.
  async deleteAsset(workspaceId, assetId) {
    const assetKey = this.getKey(workspaceId, 'digital_assets', assetId)
    const thumbnailKey = this.getKey(workspaceId, 'thumbnails', assetId, 'jpg')
    
    await Promise.all([
      this.client.send(new DeleteObjectCommand({
        Bucket: this.bucket,
        Key: assetKey
      })),
      this.client.send(new DeleteObjectCommand({
        Bucket: this.bucket,
        Key: thumbnailKey
      }))
    ])
  }

  getExtension(mimeType) {
    const extensions = {
      'image/jpeg': 'jpg',
      'image/jpg': 'jpg',
      'image/png': 'png',
      'image/gif': 'gif',
      'image/webp': 'webp',
      'video/mp4': 'mp4',
      'video/quicktime': 'mov',
      'application/pdf': 'pdf'
    }
    return extensions[mimeType] || 'bin'
  }
}

module.exports = new S3Service()
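
For reference, the sketch below chains the service's multipart methods together directly, outside the HTTP layer. It is a minimal example under assumptions: Node 18+ (for the global fetch), valid AWS credentials in the environment, and hypothetical workspace/asset IDs.

javascript
// scripts/multipart-smoke-test.js (hypothetical): exercises the multipart flow end to end.
const s3Service = require('../services/s3')

async function main() {
  const workspaceId = 456                          // hypothetical workspace
  const assetId = 789                              // hypothetical asset
  const body = Buffer.alloc(5 * 1024 * 1024, 1)    // a single 5 MB part

  // 1. Create the upload; keep the returned key for every later call
  const { uploadId, key } = await s3Service.startMultipartUpload(workspaceId, assetId, 'image/jpeg')

  try {
    // 2. Presign and upload each part, collecting the ETags S3 returns
    const url = await s3Service.getUploadPartUrl(key, uploadId, 1)
    const response = await fetch(url, { method: 'PUT', body })
    const parts = [{ ETag: response.headers.get('ETag'), PartNumber: 1 }]

    // 3. Complete the upload with the collected parts
    const result = await s3Service.completeMultipartUpload(key, uploadId, parts)
    console.log('Uploaded to', result.key)
  } catch (error) {
    // Clean up the dangling upload if anything fails
    await s3Service.abortMultipartUpload(key, uploadId)
    throw error
  }
}

main().catch(console.error)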

S3 API Endpoints

javascript
// api/s3.js
const express = require('express')
const router = express.Router()
const s3Service = require('../services/s3')
const { authenticate } = require('../middleware/auth')

// Start multipart upload
router.get('/start-upload', authenticate, async (req, res) => {
  try {
    const { workspaceId, fileType } = req.query
    
    // Validate workspace access
    if (!req.user.accessibleWorkspaces.includes(parseInt(workspaceId))) {
      return res.status(403).json({ error: 'Access denied' })
    }

    // Generate asset ID (generateAssetId is assumed to be provided by the data layer)
    const assetId = await generateAssetId()
    
    const { uploadId, key } = await s3Service.startMultipartUpload(
      workspaceId,
      assetId,
      fileType
    )

    res.json({
      uploadId,
      assetId,
      key
    })
  } catch (error) {
    console.error('Start upload error:', error)
    res.status(500).json({ error: 'Failed to start upload' })
  }
})

// Get presigned URL for upload part
router.get('/get-upload-url', authenticate, async (req, res) => {
  try {
    const { PartNumber, UploadId, key } = req.query

    const url = await s3Service.getUploadPartUrl(
      key,
      UploadId,
      parseInt(PartNumber)
    )

    res.json({
      url,
      expiresIn: 3600
    })
  } catch (error) {
    console.error('Get upload URL error:', error)
    res.status(500).json({ error: 'Failed to get upload URL' })
  }
})

// Complete multipart upload
router.post('/complete-upload', authenticate, async (req, res) => {
  try {
    const { uploadId, key, assetId, workspaceId, parts } = req.body

    // assetId / workspaceId would be used to create the asset record (workflow step 8)
    const result = await s3Service.completeMultipartUpload(
      key,
      uploadId,
      parts
    )

    res.json({
      success: true,
      location: result.location,
      key: result.key,
      etag: result.etag
    })
  } catch (error) {
    console.error('Complete upload error:', error)
    res.status(500).json({ error: 'Failed to complete upload' })
  }
})

// Get presigned download URL
router.get('/download-url', authenticate, async (req, res) => {
  try {
    const { workspaceId, assetId, expiresIn } = req.query
    
    const url = await s3Service.getPresignedDownloadUrl(
      workspaceId,
      assetId,
      expiresIn ? parseInt(expiresIn) : 3600
    )

    res.json({
      url,
      expiresIn: expiresIn || 3600
    })
  } catch (error) {
    console.error('Get download URL error:', error)
    res.status(500).json({ error: 'Failed to get download URL' })
  }
})

// Delete asset
router.delete('/delete', authenticate, async (req, res) => {
  try {
    const { workspaceId, assetId } = req.query
    
    await s3Service.deleteAsset(workspaceId, assetId)

    res.json({ success: true })
  } catch (error) {
    console.error('Delete error:', error)
    res.status(500).json({ error: 'Failed to delete asset' })
  }
})

// Abort multipart upload (called by the frontend when a chunk fails)
router.post('/abort-upload', authenticate, async (req, res) => {
  try {
    const { uploadId, key } = req.body

    await s3Service.abortMultipartUpload(key, uploadId)

    res.json({ success: true })
  } catch (error) {
    console.error('Abort upload error:', error)
    res.status(500).json({ error: 'Failed to abort upload' })
  }
})

module.exports = router
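
The router above is mounted by the main Express application. A minimal wiring sketch (the file name and JSON body-size limit are assumptions):

javascript
// app.js (hypothetical): mounts the S3 router under the /s3 prefix used by the frontend.
const express = require('express')
const app = express()

app.use(express.json({ limit: '1mb' }))   // complete-upload / abort-upload send JSON bodies
app.use('/s3', require('./api/s3'))       // exposes /s3/start-upload, /s3/get-upload-url, ...

app.listen(process.env.PORT || 3000)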

Frontend Implementation

Multipart Upload Service

javascript
// utils/multipartUpload.js
export class MultipartUploadService {
  constructor(axios, workspaceId) {
    this.axios = axios
    this.workspaceId = workspaceId
    this.chunkSize = 5 * 1024 * 1024 // 5MB chunks
  }

  async uploadFile(file, onProgress) {
    // Start multipart upload
    const { data: uploadInfo } = await this.axios.get('/s3/start-upload', {
      params: {
        workspaceId: this.workspaceId,
        fileType: file.type
      }
    })

    const { uploadId, assetId, key } = uploadInfo

    // Split file into chunks
    const chunks = this.splitFile(file)
    const parts = []

    try {
      // Upload each chunk
      for (let i = 0; i < chunks.length; i++) {
        const chunk = chunks[i]
        
        // Get presigned URL for this part
        const { data: urlData } = await this.axios.get('/s3/get-upload-url', {
          params: {
            PartNumber: i + 1,
            UploadId: uploadId,
            key
          }
        })

        // Upload chunk directly to S3
        const etag = await this.uploadChunk(urlData.url, chunk)
        
        parts.push({
          ETag: etag,
          PartNumber: i + 1
        })

        // Update progress
        const progress = ((i + 1) / chunks.length) * 100
        if (onProgress) {
          onProgress({
            loaded: Math.min((i + 1) * this.chunkSize, file.size),
            total: file.size,
            percentage: progress
          })
        }
      }

      // Complete multipart upload
      const { data: result } = await this.axios.post('/s3/complete-upload', {
        uploadId,
        assetId,
        key,
        workspaceId: this.workspaceId,
        parts
      })

      return {
        assetId,
        key: result.key,
        location: result.location
      }
    } catch (error) {
      // Abort the upload on error (best-effort; don't mask the original failure)
      await this.axios.post('/s3/abort-upload', {
        uploadId,
        key
      }).catch(() => {})
      throw error
    }
  }

  splitFile(file) {
    const chunks = []
    let start = 0
    
    while (start < file.size) {
      const end = Math.min(start + this.chunkSize, file.size)
      chunks.push(file.slice(start, end))
      start = end
    }
    
    return chunks
  }

  async uploadChunk(url, chunk) {
    const response = await fetch(url, {
      method: 'PUT',
      body: chunk,
      headers: {
        'Content-Type': 'application/octet-stream'
      }
    })

    if (!response.ok) {
      throw new Error(`Upload failed: ${response.statusText}`)
    }

    // Extract ETag from response headers (requires ETag in the bucket's CORS ExposeHeaders)
    const etag = response.headers.get('ETag')
    if (!etag) {
      throw new Error('Missing ETag in upload response; check the bucket CORS configuration')
    }
    return etag.replace(/"/g, '')
  }
}

File Upload Component

vue
<template>
  <div class="file-upload">
    <input
      ref="fileInput"
      type="file"
      multiple
      @change="handleFileSelect"
      style="display: none"
    />
    
    <v-btn @click="$refs.fileInput.click()">Select Files</v-btn>
    
    <v-list v-if="uploadQueue.length > 0">
      <v-list-item
        v-for="item in uploadQueue"
        :key="item.id"
      >
        <v-list-item-content>
          <v-list-item-title>{{ item.file.name }}</v-list-item-title>
          <v-progress-linear
            :value="item.progress"
            :color="item.status === 'error' ? 'error' : 'primary'"
          />
          <v-list-item-subtitle>
            {{ item.status }} - {{ item.progress }}%
          </v-list-item-subtitle>
        </v-list-item-content>
        <v-list-item-action>
          <v-btn
            v-if="item.status === 'error'"
            icon
            @click="retryUpload(item)"
          >
            <RefreshIcon /> <!-- Custom SVG icon component - import from @/components/svg/RefreshIcon.vue -->
          </v-btn>
          <v-btn
            v-if="item.status !== 'uploading'"
            icon
            @click="removeFromQueue(item)"
          >
            <CloseIcon /> <!-- Custom SVG icon component - import from @/components/svg/CloseIcon.vue -->
          </v-btn>
        </v-list-item-action>
      </v-list-item>
    </v-list>
  </div>
</template>

<script>
import { MultipartUploadService } from '~/utils/multipartUpload'
import RefreshIcon from '@/components/svg/RefreshIcon.vue'
import CloseIcon from '@/components/svg/CloseIcon.vue'

export default {
  components: { RefreshIcon, CloseIcon },
  data() {
    return {
      uploadQueue: [],
      uploadService: null
    }
  },
  mounted() {
    this.uploadService = new MultipartUploadService(
      this.$axios,
      this.$route.params.workspace_id
    )
  },
  methods: {
    handleFileSelect(event) {
      const files = Array.from(event.target.files)
      files.forEach(file => {
        this.addToQueue(file)
      })
    },
    addToQueue(file) {
      const item = {
        id: Date.now() + Math.random(),
        file,
        progress: 0,
        status: 'pending'
      }
      this.uploadQueue.push(item)
      this.uploadFile(item)
    },
    async uploadFile(item) {
      item.status = 'uploading'
      
      try {
        const result = await this.uploadService.uploadFile(
          item.file,
          (progress) => {
            item.progress = Math.round(progress.percentage)
          }
        )
        
        item.status = 'completed'
        item.result = result
        
        // Notify parent component
        this.$emit('upload-complete', result)
      } catch (error) {
        item.status = 'error'
        item.error = error.message
        this.$toast.error(`Upload failed: ${error.message}`)
      }
    },
    retryUpload(item) {
      item.status = 'pending'
      item.progress = 0
      item.error = null
      this.uploadFile(item)
    },
    removeFromQueue(item) {
      const index = this.uploadQueue.findIndex(i => i.id === item.id)
      if (index > -1) {
        this.uploadQueue.splice(index, 1)
      }
    }
  }
}
</script>

API Design

Start Multipart Upload

Endpoint: GET /s3/start-upload

Query Parameters:

  • workspaceId (required) - Workspace identifier
  • fileType (required) - MIME type of the file

Response:

json
{
  "uploadId": "2~abc123def456",
  "assetId": "789",
  "key": "456/digital_assets/789.jpg"
}

Get Upload Part URL

Endpoint: GET /s3/get-upload-url

Query Parameters:

  • PartNumber (required) - Part number (1-indexed)
  • UploadId (required) - Multipart upload ID
  • key (required) - Object key returned by /s3/start-upload

Response:

json
{
  "url": "https://bucket.s3.amazonaws.com/456/digital_assets/789.jpg?X-Amz-Algorithm=...",
  "expiresIn": 3600
}

Complete Multipart Upload

Endpoint: POST /s3/complete-upload

Request Body:

json
{
  "uploadId": "2~abc123def456",
  "assetId": "789",
  "workspaceId": "456",
  "key": "456/digital_assets/789.jpg",
  "parts": [
    {
      "ETag": "\"d41d8cd98f00b204e9800998ecf8427e\"",
      "PartNumber": 1
    },
    {
      "ETag": "\"e4d909c290d0fb1ca068ffaddf22cbd0\"",
      "PartNumber": 2
    }
  ]
}

Response:

json
{
  "success": true,
  "location": "https://bucket.s3.amazonaws.com/456/digital_assets/789.jpg",
  "key": "456/digital_assets/789.jpg",
  "etag": "\"abc123def456\""
}

Get Download URL

Endpoint: GET /s3/download-url

Query Parameters:

  • workspaceId (required) - Workspace identifier
  • assetId (required) - Asset identifier
  • expiresIn (optional) - URL expiration in seconds (default: 3600)

Response:

json
{
  "url": "https://bucket.s3.amazonaws.com/456/digital_assets/789.jpg?X-Amz-Algorithm=...",
  "expiresIn": 3600
}
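
A hypothetical client-side usage of this endpoint, using the same axios instance as the upload service (function name and expiry value are illustrative):

javascript
// Fetch a short-lived presigned URL and hand it to the browser for download.
async function downloadAsset(axios, workspaceId, assetId) {
  const { data } = await axios.get('/s3/download-url', {
    params: { workspaceId, assetId, expiresIn: 300 }
  })

  // The presigned URL is served directly by S3; no application credentials are needed
  window.open(data.url, '_blank')
}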

Workflow

Multipart Upload Flow

1. Frontend calls /s3/start-upload

2. Backend creates multipart upload

3. Returns uploadId, assetId, and the object key

4. Frontend splits file into chunks (5MB each)

5. For each chunk:
   - Call /s3/get-upload-url (passing the key from step 3)
   - Get presigned URL
   - Upload chunk directly to S3
   - Store ETag and PartNumber

6. After all chunks uploaded:
   - Call /s3/complete-upload
   - Send the object key and all part ETags

7. Backend completes multipart upload

8. Asset record created in database

9. Thumbnail generated (async)

10. Asset indexed in Typesense (async)
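
Steps 8-10 run server-side after the upload completes and are not shown elsewhere in this document. The sketch below covers the thumbnail step under assumptions: the sharp image library, an original image buffer already in memory (in practice it would typically be streamed back from S3 first), and placeholder helpers for the database and Typesense layers.

javascript
// Hypothetical post-processing after /s3/complete-upload (workflow steps 8-10).
const sharp = require('sharp')
const s3Service = require('../services/s3')

async function postProcessImage(workspaceId, assetId, imageBuffer) {
  // 8. Persist the asset record (placeholder for the application's data layer)
  // await createAssetRecord({ workspaceId, assetId })

  // 9. Generate a bounded JPEG thumbnail and upload it to the thumbnails/ prefix
  const thumbnail = await sharp(imageBuffer)
    .resize(320, 320, { fit: 'inside' })
    .jpeg({ quality: 80 })
    .toBuffer()
  await s3Service.uploadThumbnail(workspaceId, assetId, thumbnail)

  // 10. Index the asset for search (placeholder for the Typesense client call)
  // await indexAsset({ workspaceId, assetId })
}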

Direct Upload Flow (Small Files)

1. Frontend creates FormData with file

2. POST to /api/assets/upload

3. Backend validates file

4. Backend uploads to S3 using PutObject

5. Asset record created

6. Thumbnail generated

7. Asset indexed

8. Response with asset data
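
The direct flow is not implemented in the code above; the sketch below is one hedged way to wire it, assuming the multer middleware for multipart/form-data parsing and reusing the S3 service's client and key helpers (the service itself only exposes multipart and thumbnail uploads).

javascript
// api/assets.js (hypothetical): direct PutObject upload for small files.
const express = require('express')
const multer = require('multer')
const { PutObjectCommand } = require('@aws-sdk/client-s3')
const s3Service = require('../services/s3')
const { authenticate } = require('../middleware/auth')

const router = express.Router()
const upload = multer({ storage: multer.memoryStorage(), limits: { fileSize: 10 * 1024 * 1024 } })

router.post('/assets/upload', authenticate, upload.single('file'), async (req, res) => {
  try {
    const { workspaceId, assetId } = req.body   // assetId assumed to be generated upstream

    const key = s3Service.getKey(
      workspaceId,
      'digital_assets',
      assetId,
      s3Service.getExtension(req.file.mimetype)
    )

    await s3Service.client.send(new PutObjectCommand({
      Bucket: s3Service.bucket,
      Key: key,
      Body: req.file.buffer,
      ContentType: req.file.mimetype
    }))

    // Asset record creation, thumbnail generation, and indexing would follow here
    res.json({ success: true, assetId, key })
  } catch (error) {
    console.error('Direct upload error:', error)
    res.status(500).json({ error: 'Failed to upload asset' })
  }
})

module.exports = router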

Sample Data

Upload Request Example

javascript
{
  workspaceId: "456",
  fileType: "image/jpeg",
  fileName: "summer-campaign-banner.jpg",
  fileSize: 5242880 // 5MB
}

Multipart Upload Parts

javascript
[
  {
    ETag: "\"d41d8cd98f00b204e9800998ecf8427e\"",
    PartNumber: 1,
    Size: 5242880
  },
  {
    ETag: "\"e4d909c290d0fb1ca068ffaddf22cbd0\"",
    PartNumber: 2,
    Size: 5242880
  },
  {
    ETag: "\"f5e9a0d391e11b1db079ffbeef33cce1\"",
    PartNumber: 3,
    Size: 1048576 // Last chunk
  }
]

S3 Object Metadata

javascript
{
  Key: "456/digital_assets/789.jpg",
  Bucket: "my-dam-bucket",
  ETag: "\"abc123def456\"",
  Location: "https://my-dam-bucket.s3.amazonaws.com/456/digital_assets/789.jpg",
  Metadata: {
    workspace_id: "456",
    asset_id: "789"
  }
}