# Integrating S3-Compatible Asset Storage
This guide demonstrates how to integrate S3-compatible asset storage into your Vendure application using multiple cloud storage platforms. You'll learn to configure a single, platform-agnostic storage solution that works seamlessly with AWS S3, DigitalOcean Spaces, MinIO, CloudFlare R2, and Supabase Storage.
## Working Example Repository

This guide is based on the s3-file-storage example. Refer to the complete working code for full implementation details.
## Prerequisites
- Node.js 20+ with npm package manager
- An existing Vendure project created with the Vendure create command
- An account with one of the supported S3-compatible storage providers
## S3-Compatible Storage Provider Setup
Configure your chosen storage provider by following the setup instructions for your preferred platform:
- AWS S3
- Supabase Storage
- DigitalOcean Spaces
- CloudFlare R2
- Hetzner Object Storage
- MinIO
### Setting up AWS S3
1. **Create an S3 Bucket**
   - Navigate to the AWS S3 Console
   - Click "Create bucket"
   - Enter a unique bucket name (e.g., `my-vendure-assets`)
   - Choose your preferred AWS region
   - Configure permissions as needed for public asset access
2. **Create an IAM User with S3 Permissions**
   - Go to the AWS IAM Console
   - Navigate to "Users" and click "Create user"
   - Enter a username and proceed to "Set permissions"
   - Select the "Attach existing policies directly" option
   - Attach the `AmazonS3FullAccess` policy (or create a custom policy with minimal permissions)
3. **Generate Access Keys**
   - After creating the user, click on the user name
   - Go to the "Security credentials" tab
   - Click "Create access key" and select "Application running on an AWS compute service"
   - Copy the Access Key ID and Secret Access Key (download the CSV file if needed)
4. **Environment Variables**

   ```bash
   # AWS S3 Configuration
   S3_BUCKET=my-vendure-assets
   S3_ACCESS_KEY_ID=AKIA...
   S3_SECRET_ACCESS_KEY=wJalrXUtn...
   S3_REGION=us-east-1
   # Leave S3_ENDPOINT empty for AWS S3
   # Leave S3_FORCE_PATH_STYLE empty for AWS S3
   ```
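Step 2 above mentions creating a custom policy with minimal permissions instead of `AmazonS3FullAccess`. A sketch of such a policy, scoped to the example bucket name used above (the exact set of actions your store needs may vary), could look like this:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::my-vendure-assets/*"
        },
        {
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-vendure-assets"
        }
    ]
}
```

Note that object-level actions apply to `my-vendure-assets/*` while `s3:ListBucket` applies to the bucket ARN itself.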
### Setting up Supabase S3 Storage
1. **Create a Supabase Project**
   - Sign up at Supabase
   - Click "New project" and fill in the project details
   - Wait for project initialization to complete
2. **Navigate to Storage**
   - Go to the "Storage" section in your project dashboard
   - Click "Create a new bucket"
   - Enter a bucket name: `assets` (or your preferred name)
   - Configure the bucket to be public if you need direct asset access
   - Click "Create bucket"
3. **Generate a Service Role Key**
   - Navigate to "Settings" → "API"
   - Copy your Project URL and Project Reference ID
   - Copy the `service_role` key and keep it secure; it provides full access to your project
4. **Environment Variables**

   ```bash
   # Supabase Storage Configuration
   S3_BUCKET=assets
   S3_ACCESS_KEY_ID=your-supabase-access-key-id
   S3_SECRET_ACCESS_KEY=your-service-role-key
   S3_REGION=us-east-1
   S3_ENDPOINT=https://your-project-ref.supabase.co/storage/v1/s3
   S3_FORCE_PATH_STYLE=true
   ```

:::info
Replace `your-project-ref` with your actual Supabase project reference ID, found in your project settings.
:::
### Setting up DigitalOcean Spaces
1. **Create a DigitalOcean Account**
   - Sign up at DigitalOcean
   - Navigate to the Spaces section in your dashboard
2. **Create a Space**
   - Click "Create a Space"
   - Choose your datacenter region (e.g., `fra1` for Frankfurt)
   - Enter a unique Space name (e.g., `my-vendure-assets`)
   - Choose File Listing permissions based on your needs
   - Optionally enable the CDN to improve global asset delivery
3. **Generate Spaces Access Keys**
   - Go to the API Tokens page
   - Click "Generate New Key" in the Spaces Keys section
   - Enter a name for your key
   - Copy the generated Key and Secret

4. **Configure a CORS Policy (Optional)**

   For browser-based uploads, configure CORS in your Space settings:

   ```json
   [
     {
       "allowed_origins": ["https://yourdomain.com"],
       "allowed_methods": ["GET", "POST", "PUT"],
       "allowed_headers": ["*"],
       "max_age": 3000
     }
   ]
   ```
5. **Environment Variables**

   ```bash
   # DigitalOcean Spaces Configuration
   S3_BUCKET=my-vendure-assets
   S3_ACCESS_KEY_ID=DO00...
   S3_SECRET_ACCESS_KEY=wJalrXUtn...
   S3_REGION=fra1
   S3_ENDPOINT=https://fra1.digitaloceanspaces.com
   S3_FORCE_PATH_STYLE=false
   ```

:::tip
Use the regional endpoint (e.g., `https://fra1.digitaloceanspaces.com`), not the CDN endpoint. The AWS SDK constructs URLs automatically.
:::
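To illustrate what `S3_FORCE_PATH_STYLE` controls, the helper below mimics how an S3 client derives object URLs from an endpoint. It is not part of Vendure or the AWS SDK; the function name and shape are illustrative only:

```typescript
// Sketch of the two S3 URL addressing styles. With path-style addressing
// the bucket appears in the URL path; with virtual-hosted style it becomes
// a subdomain of the endpoint host.
function objectUrl(endpoint: string, bucket: string, key: string, forcePathStyle: boolean): string {
    const url = new URL(endpoint);
    if (forcePathStyle) {
        // Path-style: used by MinIO, R2, Hetzner, Supabase
        url.pathname = `/${bucket}/${key}`;
    } else {
        // Virtual-hosted style: used by AWS S3 and DigitalOcean Spaces
        url.hostname = `${bucket}.${url.hostname}`;
        url.pathname = `/${key}`;
    }
    return url.toString();
}

console.log(objectUrl('https://fra1.digitaloceanspaces.com', 'my-vendure-assets', 'cat.jpg', false));
// https://my-vendure-assets.fra1.digitaloceanspaces.com/cat.jpg
console.log(objectUrl('http://localhost:9000', 'vendure-assets', 'cat.jpg', true));
// http://localhost:9000/vendure-assets/cat.jpg
```

With `S3_FORCE_PATH_STYLE=false`, the Space name becomes a subdomain of the regional endpoint, which is why the tip above recommends the regional endpoint rather than the CDN one.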
### Setting up CloudFlare R2
1. **Create a CloudFlare Account**
   - Sign up at CloudFlare
   - Complete the account verification process
2. **Enable R2 Object Storage**
   - Navigate to R2 Object Storage in your dashboard
   - You may need to provide payment information (R2 has a generous free tier)
   - Accept the R2 terms of service
3. **Create an R2 Bucket**
   - Click "Create bucket"
   - Enter a globally unique bucket name: `vendure-assets`
   - Select "Automatic" for location optimization
   - Choose the "Standard" storage class for most use cases
   - Click "Create bucket" to finalize
4. **Generate API Tokens**
   - Go to the "Manage R2 API tokens" section
   - Click "Create API token"
   - Configure the token name: "Vendure R2 Token"
   - Under Permissions, select "Object Read & Write"
   - Optionally restrict to specific buckets under "Account resources"
   - Click "Create API token"
5. **Retrieve Credentials**
   - Copy the Access Key ID and Secret Access Key
   - Copy the jurisdiction-specific endpoint for S3 clients
   - Note your account ID from the URL or dashboard
6. **Environment Variables**

   ```bash
   # CloudFlare R2 Configuration
   S3_BUCKET=vendure-assets
   S3_ACCESS_KEY_ID=your-r2-access-key
   S3_SECRET_ACCESS_KEY=your-r2-secret-key
   S3_REGION=auto
   S3_ENDPOINT=https://your-account-id.r2.cloudflarestorage.com
   S3_FORCE_PATH_STYLE=true
   ```

:::warning
Replace `your-account-id` with your actual CloudFlare account ID. If using a custom domain, update `S3_FILE_URL` to point to your custom domain with `https://`.
:::
### Setting up Hetzner Object Storage
1. **Create a Hetzner Cloud Account**
   - Sign up at Hetzner Cloud
   - Complete account verification and billing setup
   - Navigate to the Hetzner Cloud Console
2. **Access the Object Storage Service**
   - In the Hetzner Cloud Console, navigate to "Object Storage" in the left sidebar
   - If Object Storage is not visible, you may need to request access (service availability varies by region)
   - Accept the Object Storage terms of service when prompted
3. **Create a Storage Bucket**
   - Click "Create Bucket" in the Object Storage section
   - Enter a globally unique bucket name (e.g., `vendure-assets-yourname`)
   - Select your preferred location (e.g., `fsn1` for Falkenstein, Germany)
   - Choose the bucket visibility:
     - Private: requires authentication for all access
     - Public: allows public read access for assets
   - Click "Create" to create the bucket
4. **Generate S3 API Credentials**
   - In the Object Storage section, navigate to "API Credentials" or "Access Keys"
   - Click "Generate new credentials" or "Create access key"
   - Provide a name for the credentials (e.g., "Vendure API Key")
   - Copy the generated Access Key and Secret Key
   - ⚠️ Important: Save the Secret Key immediately, as it cannot be viewed again
5. **Environment Variables**

   ```bash
   # Hetzner Object Storage Configuration
   S3_BUCKET=vendure-assets-yourname
   S3_ACCESS_KEY_ID=your-hetzner-access-key
   S3_SECRET_ACCESS_KEY=your-hetzner-secret-key
   S3_REGION=fsn1
   S3_ENDPOINT=https://fsn1.your-objectstorage.com
   S3_FORCE_PATH_STYLE=true
   ```

:::note
Replace `fsn1` with your chosen location (e.g., `nbg1` for Nuremberg). The endpoint URL will match your bucket's location; ensure the region and endpoint location match.
:::
### Setting up MinIO (Self-Hosted)
1. **Install the MinIO Server**

   **Option A: Using Docker (Recommended)**

   ```bash
   # Create a docker-compose.yml file, then start the MinIO service
   docker compose up -d minio
   ```

   **Option B: Direct Installation**
   - Download MinIO from MinIO Downloads
   - Follow the installation instructions for your operating system
   - Start the MinIO server with:

   ```bash
   minio server /data --console-address ":9001"
   ```
2. **Access the MinIO Console**
   - Open http://localhost:9001 in your browser
   - Default credentials: `minioadmin` / `minioadmin`
   - Change these credentials in production environments
3. **Create Access Keys**

   The MinIO web console in development setups typically only shows bucket management. For access key creation, use the MinIO CLI.

   Install the MinIO Client (if not already installed):

   ```bash
   # macOS
   brew install minio/stable/mc

   # Linux
   curl https://dl.min.io/client/mc/release/linux-amd64/mc \
     --create-dirs -o $HOME/minio-binaries/mc
   chmod +x $HOME/minio-binaries/mc
   export PATH=$PATH:$HOME/minio-binaries/

   # Windows
   # Download mc.exe from https://dl.min.io/client/mc/release/windows-amd64/mc.exe
   ```

   Configure the client and create access keys:

   ```bash
   # Set up a MinIO client alias (replace with your MinIO server details)
   mc alias set local http://localhost:9000 minioadmin minioadmin

   # Create a service account (access key pair)
   mc admin user svcacct add local minioadmin

   # This will output something like:
   # Access Key: AKIAIOSFODNN7EXAMPLE
   # Secret Key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
   ```

   ⚠️ Important: Save both keys immediately, as the Secret Key won't be shown again.
4. **Create a Storage Bucket**
   - In the MinIO console, you should see a "Buckets" section showing available buckets
   - Click "Create Bucket" (usually a + icon or button)
   - Enter the bucket name: `vendure-assets`
   - Click "Create" to create the bucket

   Alternative using the CLI:

   ```bash
   # Create the bucket using the MinIO client
   mc mb local/vendure-assets
   ```
5. **Configure a Public Access Policy**

   For public asset access, set the bucket policy using the MinIO CLI (the console UI may not have a policy editor). The simplest method makes the bucket publicly readable:

   ```bash
   # Make the bucket publicly readable
   mc anonymous set download local/vendure-assets
   ```

   Alternatively, define an explicit public-read policy in JSON and register it:

   ```bash
   # Create a policy file for public read access
   cat > public-read-policy.json << EOF
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Principal": "*",
         "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::vendure-assets/*"
       }
     ]
   }
   EOF

   # Register the JSON policy
   mc admin policy create local public-read public-read-policy.json
   ```
6. **Environment Variables**

   ```bash
   # MinIO Configuration
   S3_BUCKET=vendure-assets
   S3_ACCESS_KEY_ID=minio-access-key
   S3_SECRET_ACCESS_KEY=minio-secret-key
   S3_REGION=us-east-1
   S3_ENDPOINT=http://localhost:9000
   S3_FORCE_PATH_STYLE=true
   ```
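Step 1 above references a `docker-compose.yml` without showing its contents. A minimal sketch is given below; the image tag, ports, credentials, and volume name are assumptions to adjust for your setup:

```yaml
# Minimal docker-compose.yml sketch for a local MinIO server
services:
  minio:
    image: minio/minio
    command: server /data --console-address ":9001"
    ports:
      - "9000:9000"   # S3 API endpoint
      - "9001:9001"   # web console
    environment:
      MINIO_ROOT_USER: minioadmin
      MINIO_ROOT_PASSWORD: minioadmin
    volumes:
      - minio-data:/data
volumes:
  minio-data:
```

The `9000` port here matches the `S3_ENDPOINT=http://localhost:9000` value shown above, and `9001` serves the console referenced in step 2.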
## Vendure Configuration

Configure your Vendure application to use S3-compatible asset storage by modifying your `vendure-config.ts`:
```ts
import { VendureConfig } from '@vendure/core';
import {
    AssetServerPlugin,
    configureS3AssetStorage,
} from '@vendure/asset-server-plugin';
import 'dotenv/config';
import path from 'path';

const IS_DEV = process.env.APP_ENV === 'dev';

export const config: VendureConfig = {
    // ... other configuration options
    plugins: [
        AssetServerPlugin.init({
            route: 'assets',
            assetUploadDir: path.join(__dirname, '../static/assets'),
            assetUrlPrefix: IS_DEV ? undefined : 'https://www.my-shop.com/assets/',
            // S3-Compatible Storage Configuration
            // Dynamically switches between local storage and S3 based on environment
            storageStrategyFactory: process.env.S3_BUCKET
                ? configureS3AssetStorage({
                      bucket: process.env.S3_BUCKET,
                      credentials: {
                          accessKeyId: process.env.S3_ACCESS_KEY_ID!,
                          secretAccessKey: process.env.S3_SECRET_ACCESS_KEY!,
                      },
                      nativeS3Configuration: {
                          // Platform-specific endpoint configuration
                          endpoint: process.env.S3_ENDPOINT,
                          region: process.env.S3_REGION,
                          forcePathStyle: process.env.S3_FORCE_PATH_STYLE === 'true',
                          signatureVersion: 'v4',
                      },
                  })
                : undefined, // Fall back to local storage when S3 is not configured
        }),
        // ... other plugins
    ],
};
```
**Important:** The configuration uses a conditional approach: when `S3_BUCKET` is set, it activates S3 storage; otherwise, it falls back to local file storage. This enables seamless development-to-production transitions.
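One caveat of keying the conditional on `S3_BUCKET` alone is that a partially set variable group (bucket present, credentials missing) only fails at upload time. A hypothetical helper, not part of Vendure, that performs the same check but fails fast could look like this:

```typescript
// Illustrative environment validation for the conditional S3 configuration.
// Returns undefined (local storage fallback) when no bucket is configured,
// and throws when the variable group is only partially set.
interface S3EnvConfig {
    bucket: string;
    accessKeyId: string;
    secretAccessKey: string;
    region?: string;
    endpoint?: string;
    forcePathStyle: boolean;
}

function readS3Env(env: Record<string, string | undefined>): S3EnvConfig | undefined {
    if (!env.S3_BUCKET) {
        return undefined; // no bucket configured: fall back to local file storage
    }
    for (const key of ['S3_ACCESS_KEY_ID', 'S3_SECRET_ACCESS_KEY']) {
        if (!env[key]) {
            throw new Error(`S3_BUCKET is set but ${key} is missing`);
        }
    }
    return {
        bucket: env.S3_BUCKET,
        accessKeyId: env.S3_ACCESS_KEY_ID!,
        secretAccessKey: env.S3_SECRET_ACCESS_KEY!,
        region: env.S3_REGION,
        endpoint: env.S3_ENDPOINT || undefined,
        forcePathStyle: env.S3_FORCE_PATH_STYLE === 'true',
    };
}
```

The returned object maps directly onto the `configureS3AssetStorage` options shown above, so a misconfigured deployment is caught at startup rather than on the first asset upload.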
## Environment Configuration

Create a `.env` file in your project root with your chosen storage provider's configuration:

```bash
# Basic Vendure Configuration
APP_ENV=dev
SUPERADMIN_USERNAME=superadmin
SUPERADMIN_PASSWORD=superadmin
COOKIE_SECRET=your-cookie-secret-32-characters-min

# S3-Compatible Storage Configuration
S3_BUCKET=your-bucket-name
S3_ACCESS_KEY_ID=your-access-key-id
S3_SECRET_ACCESS_KEY=your-secret-access-key
S3_REGION=your-region
S3_ENDPOINT=your-endpoint-url
S3_FORCE_PATH_STYLE=true-or-false
```
Preconfigured environment examples for each storage provider are available in the s3-file-storage example repository.
## Testing Your Configuration

Verify that your S3 storage configuration works correctly:

1. **Start your Vendure server:**

   ```bash
   npm run dev:server
   ```

2. **Access the Admin UI:**
   - Open http://localhost:3000/admin
   - Log in with your superadmin credentials

3. **Test asset upload:**
   - Navigate to "Catalog" → "Assets"
   - Click "Upload assets"
   - Select a test image and upload it
   - Verify that the image appears in the asset gallery

4. **Verify the storage backend:**
   - Check your S3 bucket/storage service for the uploaded file
   - Confirm the asset URL is accessible
## Advanced Configuration

### Custom Asset URL Prefix

For production deployments with a CDN or custom domain:

```ts
AssetServerPlugin.init({
    route: 'assets',
    assetUploadDir: path.join(__dirname, '../static/assets'),
    assetUrlPrefix: 'https://cdn.yourdomain.com/assets/',
    storageStrategyFactory: process.env.S3_BUCKET
        ? configureS3AssetStorage({
              // ... S3 configuration
          })
        : undefined,
});
```
### Environment-Specific Configuration

Use different buckets for different environments:

```bash
# Development
S3_BUCKET=vendure-dev-assets

# Staging
S3_BUCKET=vendure-staging-assets

# Production
S3_BUCKET=vendure-prod-assets
```
### Migration Between Platforms

Switching between storage providers requires updating only the environment variables:

```bash
# From AWS S3 to CloudFlare R2, change these variables:
S3_ENDPOINT=https://account-id.r2.cloudflarestorage.com
S3_FORCE_PATH_STYLE=true
S3_REGION=auto

# Keep the same bucket name and credentials structure
```
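The provider differences above boil down to three knobs: endpoint, addressing style, and region. The sketch below summarizes the values used in the provider sections of this guide; the preset names are illustrative, and the angle-bracket placeholders must be replaced with your own values:

```typescript
// Recap of per-provider settings from the setup sections above.
interface ProviderPreset {
    endpoint?: string;        // undefined means the default AWS endpoint
    forcePathStyle: boolean;  // maps to S3_FORCE_PATH_STYLE
    region: string;           // maps to S3_REGION (example values)
}

const presets: Record<string, ProviderPreset> = {
    'aws-s3':        { forcePathStyle: false, region: 'us-east-1' },
    'supabase':      { endpoint: 'https://<project-ref>.supabase.co/storage/v1/s3', forcePathStyle: true, region: 'us-east-1' },
    'do-spaces':     { endpoint: 'https://<region>.digitaloceanspaces.com', forcePathStyle: false, region: 'fra1' },
    'cloudflare-r2': { endpoint: 'https://<account-id>.r2.cloudflarestorage.com', forcePathStyle: true, region: 'auto' },
    'hetzner':       { endpoint: 'https://<location>.your-objectstorage.com', forcePathStyle: true, region: 'fsn1' },
    'minio':         { endpoint: 'http://localhost:9000', forcePathStyle: true, region: 'us-east-1' },
};
```

Migrating between providers then amounts to swapping one preset's values into your `.env` file alongside the new provider's credentials.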
## Troubleshooting

### Common Issues

1. **"Access Denied" errors:**
   - Verify your access key has the proper permissions
   - Check that bucket policies allow the required operations
   - Ensure credentials are correctly set in the environment variables

2. **"Bucket Not Found" errors:**
   - Verify the bucket name matches exactly (case-sensitive)
   - Check that `S3_REGION` matches your bucket's region
   - For MinIO/R2, ensure `S3_FORCE_PATH_STYLE=true`

3. **Assets not loading:**
   - Verify the bucket has public read access (if needed)
   - Check the CORS configuration for browser-based access
   - Ensure `assetUrlPrefix` matches your actual domain

4. **Connection timeout issues:**
   - Verify the `S3_ENDPOINT` URL is correct and accessible
   - Check firewall settings for outbound connections
   - For self-hosted MinIO, ensure the server is running and accessible
## Conclusion
You now have a robust, platform-agnostic S3-compatible asset storage solution integrated with your Vendure application. This configuration provides:
- Seamless switching between storage providers via environment variables
- Development-to-production workflow with local storage fallback
- Built-in compatibility with major S3-compatible services
- Production-ready configuration patterns
The unified approach eliminates the need for custom storage plugins while maintaining flexibility across different cloud storage platforms. Your assets will be reliably stored and served regardless of which S3-compatible provider you choose.
## Next Steps
- Set up CDN integration for improved global asset delivery
- Implement backup strategies for critical assets
- Configure monitoring and alerting for storage operations
- Consider implementing asset optimization and transformation workflows