Storage Configuration
Complete guide to configure file storage with S3, R2, Vultr, Wasabi, GCS, Azure Blob, and DigitalOcean Spaces
Configure file storage for uploads, media, and assets in your Vyral platform. This guide covers setup for all supported cloud storage providers.
Supported Storage Providers
Vyral supports multiple storage backends for different needs and budgets:
Provider | Best For | Pricing | Unique Features |
---|---|---|---|
Local Storage | Development/Testing | Free | Simple, no dependencies |
AWS S3 | Production at scale | Pay-per-use | Industry standard, extensive features |
Cloudflare R2 | Cost-effective production | No egress fees | S3-compatible, free bandwidth |
Vultr Object Storage | Budget-friendly | Fixed pricing | Simple pricing, good performance |
Wasabi | High storage needs | $6.99/TB/month | No egress fees, fast uploads |
Google Cloud Storage | Google ecosystem | Pay-per-use | AI/ML integrations |
Azure Blob | Microsoft ecosystem | Pay-per-use | Enterprise features |
DigitalOcean Spaces | Simple setup | $5/month start | CDN included, easy to use |
Quick Start
Choose your storage provider and follow the setup guide:
AWS S3 Setup
AWS S3 is the industry standard for cloud storage, offering unmatched reliability, scalability, and integration options.
Prerequisites
- AWS account with billing enabled
- Administrator access to create IAM users and S3 buckets
- Basic understanding of AWS services
Step 1: Create S3 Bucket
Sign in to AWS Console
- Navigate to AWS Console
- Sign in with your root account or IAM user with admin privileges
Navigate to S3 Service
- Search for "S3" in the services search bar
- Click on "S3" to open the S3 console
Create a New Bucket
- Click the orange "Create bucket" button
- Enter bucket name: vyral-uploads-production (must be globally unique)
- If the name is taken, try vyral-uploads-yourcompany or vyral-prod-randomstring
Configure Region
- Select AWS Region closest to your users for better performance
- Recommended regions:
  - US East (N. Virginia): us-east-1 - default, cheapest
  - US West (Oregon): us-west-2 - West Coast USA
  - EU (Ireland): eu-west-1 - Europe
  - Asia Pacific (Singapore): ap-southeast-1 - Southeast Asia
Configure Object Ownership
- Select "ACLs enabled"
- Choose "Bucket owner preferred"
- This allows you to manage object permissions via ACLs
Configure Public Access Settings
- Uncheck "Block all public access"
- Check the acknowledgment box
- Individual blocks to uncheck:
- ✗ Block public access to buckets and objects granted through new access control lists (ACLs)
- ✗ Block public access to buckets and objects granted through any access control lists (ACLs)
- ✗ Block public access to buckets and objects granted through new public bucket or access point policies
- ✗ Block public and cross-account access to buckets and objects through any public bucket or access point policies
Configure Bucket Versioning
- Keep "Disable" selected (unless you need version history)
- Versioning increases storage costs
Configure Encryption
- Enable "Server-side encryption"
- Choose "Amazon S3 managed keys (SSE-S3)"
- Keep "Enable Bucket Key" checked for cost optimization
Review and Create
- Review all settings
- Click "Create bucket"
Step 2: Configure Bucket Policy
Open Bucket Settings
- Click on your newly created bucket name
- Navigate to the "Permissions" tab
Edit Bucket Policy
- Scroll to "Bucket policy" section
- Click "Edit"
Add Public Read Policy
- Paste the following policy (replace vyral-uploads-production with your bucket name):
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::vyral-uploads-production/*"
},
{
"Sid": "AllowPublicListBucket",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::vyral-uploads-production",
"Condition": {
"StringLike": {
"s3:prefix": ["public/*", "images/*", "videos/*"]
}
}
}
]
}
Save Changes
- Click "Save changes"
- Confirm the public access warning
Step 3: Configure CORS
Navigate to CORS Configuration
- In bucket Permissions tab
- Scroll to "Cross-origin resource sharing (CORS)"
- Click "Edit"
Add CORS Rules
[
{
"AllowedHeaders": ["*"],
"AllowedMethods": ["GET", "POST", "PUT", "DELETE", "HEAD"],
"AllowedOrigins": ["*"],
"ExposeHeaders": ["ETag", "x-amz-server-side-encryption", "x-amz-request-id", "x-amz-id-2"],
"MaxAgeSeconds": 3000
}
]
Note: For production, replace "*"
in AllowedOrigins with your actual domain
Save CORS Configuration
- Click "Save changes"
Step 4: Create IAM User with Proper Permissions
Navigate to IAM Service
- Go to AWS Console home
- Search for "IAM" and click on it
- Click "Users" in the left sidebar
Create New User
- Click "Create user"
- User name: vyral-s3-user
- Click "Next"
Set Permissions - Create Policy First
- Select "Attach policies directly"
- Click "Create policy"
- This opens a new tab
Create Custom Policy
- In the new tab, select "JSON" editor
- Paste this comprehensive policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowS3BucketOperations",
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation",
"s3:GetBucketVersioning",
"s3:GetBucketAcl",
"s3:GetBucketCORS",
"s3:GetBucketPolicy",
"s3:GetBucketPolicyStatus",
"s3:GetBucketPublicAccessBlock",
"s3:GetBucketWebsite",
"s3:ListBucketVersions",
"s3:ListBucketMultipartUploads"
],
"Resource": "arn:aws:s3:::vyral-uploads-production"
},
{
"Sid": "AllowS3ObjectOperations",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:PutObjectAcl",
"s3:GetObject",
"s3:GetObjectAcl",
"s3:GetObjectVersion",
"s3:DeleteObject",
"s3:DeleteObjectVersion",
"s3:ListMultipartUploadParts",
"s3:AbortMultipartUpload",
"s3:RestoreObject",
"s3:GetObjectMetadata",
"s3:GetObjectVersionMetadata",
"s3:PutObjectRetention",
"s3:GetObjectRetention",
"s3:PutObjectLegalHold",
"s3:GetObjectLegalHold",
"s3:GetObjectTagging",
"s3:PutObjectTagging",
"s3:DeleteObjectTagging"
],
"Resource": "arn:aws:s3:::vyral-uploads-production/*"
},
{
"Sid": "AllowCloudFrontInvalidation",
"Effect": "Allow",
"Action": [
"cloudfront:CreateInvalidation",
"cloudfront:GetInvalidation",
"cloudfront:ListInvalidations"
],
"Resource": "*"
}
]
}
Replace vyral-uploads-production with your actual bucket name.
Name and Create Policy
- Click "Next"
- Policy name: VyralS3FullAccess
- Description: "Full access to Vyral S3 bucket for application use"
- Click "Create policy"
- Close this tab and return to the user creation tab
Attach Policy to User
- Back in the user creation tab, click the refresh button
- Search for VyralS3FullAccess
- Check the box next to your policy
- Click "Next"
Review and Create User
- Review the user details
- Click "Create user"
Create Access Keys
- Click on the newly created user name
- Go to "Security credentials" tab
- Scroll to "Access keys" section
- Click "Create access key"
Select Use Case
- Select "Application running outside AWS"
- Check the confirmation box
- Click "Next"
Set Description Tag
- Add description: "Vyral application S3 access"
- Click "Create access key"
Save Credentials
- IMPORTANT: Save these credentials securely, you won't see them again!
- Access key ID: AKIA... (20 characters)
- Secret access key: ... (40 characters)
- Click "Download .csv file" for backup
- Click "Done"
Step 5: Configure ACLs (Access Control Lists)
Enable ACLs on Bucket
- Go to your S3 bucket
- Click "Permissions" tab
- Find "Object Ownership" section
- Click "Edit"
Configure Object Ownership
- Select "ACLs enabled"
- Choose "Bucket owner preferred"
- Click "Save changes"
Set Default ACLs for Uploads
In your application code, set ACLs when uploading:
const AWS = require('aws-sdk');
const s3 = new AWS.S3({
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
region: process.env.AWS_REGION
});
// Upload with public-read ACL
const uploadParams = {
Bucket: 'vyral-uploads-production',
Key: 'path/to/file.jpg',
Body: fileContent,
ACL: 'public-read', // Makes file publicly accessible
ContentType: 'image/jpeg',
CacheControl: 'max-age=31536000', // 1 year cache
Metadata: {
'uploaded-by': userId,
'upload-date': new Date().toISOString()
}
};
s3.upload(uploadParams, (err, data) => {
if (err) console.error('Upload failed:', err);
else console.log('Upload successful:', data.Location);
});
ACL Options Explained
- private: Owner has full control (default)
- public-read: Everyone can read, owner can write
- public-read-write: Everyone can read and write (not recommended)
- authenticated-read: AWS authenticated users can read
- aws-exec-read: Amazon EC2 gets read access
- bucket-owner-read: Bucket owner gets read access
- bucket-owner-full-control: Bucket owner gets full control
Step 6: Configure CloudFront CDN (Highly Recommended)
CloudFront CDN significantly improves performance and reduces S3 bandwidth costs by caching content at edge locations worldwide.
Navigate to CloudFront
- Go to AWS Console
- Search for "CloudFront"
- Click "Create Distribution"
Configure Origin
- Origin Domain: Select your S3 bucket from the dropdown or enter vyral-uploads-production.s3.amazonaws.com
- Origin Path: Leave empty
- Name: Auto-filled
- S3 bucket access: "Yes, use OAI (legacy)" for a private bucket, or "Don't use OAI" for a public one
- If using OAI:
- Create new OAI
- Update bucket policy: Yes
Default Cache Behavior Settings
- Path Pattern: Default (*)
- Compress objects automatically: Yes
- Viewer Protocol Policy: Redirect HTTP to HTTPS
- Allowed HTTP Methods: GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE
- Restrict viewer access: No
Cache Key and Origin Requests
- Cache policy: CachingOptimized (Recommended)
- Origin request policy: CORS-S3Origin
- Response headers policy: SimpleCORS
Distribution Settings
- Price Class: Use all edge locations (best performance)
- AWS WAF web ACL: None (unless you need DDoS protection)
- Alternate domain name (CNAME): Add cdn.yourdomain.com if you have an SSL certificate
- Custom SSL certificate: Select from ACM or import
- Security policy: TLSv1.2_2021 (Recommended)
- Supported HTTP versions: HTTP/2, HTTP/1.1, HTTP/1.0
- Standard logging: Off (or configure S3 bucket for logs)
- IPv6: Enabled
Create Distribution
- Review settings
- Click "Create distribution"
- Wait 15-20 minutes for deployment
- Note the CloudFront domain: d1234abcd.cloudfront.net
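When you overwrite an object that CloudFront has already cached, you may need to invalidate its path. A minimal sketch using the aws-sdk; the distribution ID shown is a placeholder for the one created above:
const AWS = require('aws-sdk');
const cloudfront = new AWS.CloudFront();

async function invalidatePaths(distributionId, paths) {
  // CallerReference must be unique per invalidation request
  await cloudfront.createInvalidation({
    DistributionId: distributionId,
    InvalidationBatch: {
      CallerReference: `vyral-${Date.now()}`,
      Paths: { Quantity: paths.length, Items: paths }
    }
  }).promise();
}

invalidatePaths('EDFDVBD6EXAMPLE', ['/images/*']).catch(console.error);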
Step 7: Configure in Vyral Admin Dashboard
Navigate to Admin Panel
- Go to your Vyral admin dashboard
- Navigate to Settings → Storage Configuration
Enter S3 Configuration
{
"provider": "s3",
"s3Config": {
"accessKey": "AKIA...",
"secretKey": "your-secret-key-here",
"bucket": "vyral-uploads-production",
"region": "us-east-1",
"endpoint": "https://s3.amazonaws.com",
"cdnUrl": "https://d1234abcd.cloudfront.net",
"forcePathStyle": false,
"signatureVersion": "v4"
}
}
Test Configuration
- Click "Test Connection"
- Try uploading a test file
- Verify file is accessible via CDN URL
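You can also sanity-check the credentials outside the dashboard. A minimal sketch with the aws-sdk, assuming the same environment variables used in the upload examples:
const AWS = require('aws-sdk');

const s3 = new AWS.S3({
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
  region: process.env.AWS_REGION
});

async function testConnection(bucket) {
  // headBucket fails fast if credentials, region, or permissions are wrong
  await s3.headBucket({ Bucket: bucket }).promise();
  console.log(`Bucket "${bucket}" is reachable`);
}

testConnection('vyral-uploads-production').catch(console.error);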
Step 8: Set Up Lifecycle Policies (Cost Optimization)
Navigate to Bucket Management
- Go to S3 bucket
- Click "Management" tab
- Click "Create lifecycle rule"
Configure Rule for Old Files
- Rule name: archive-old-content
- Status: Enabled
- Rule scope: Apply to all objects
Add Transitions
{
"Rules": [{
"Id": "ArchiveOldContent",
"Status": "Enabled",
"Transitions": [
{
"Days": 30,
"StorageClass": "STANDARD_IA"
},
{
"Days": 90,
"StorageClass": "INTELLIGENT_TIERING"
},
{
"Days": 180,
"StorageClass": "GLACIER_IR"
}
],
"NoncurrentVersionTransitions": [
{
"NoncurrentDays": 7,
"StorageClass": "GLACIER_IR"
}
],
"AbortIncompleteMultipartUpload": {
"DaysAfterInitiation": 7
}
}]
}
Configure Expiration
- Delete objects after: 365 days (optional)
- Delete expired object delete markers: Yes
- Delete incomplete multipart uploads: 7 days
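Alternatively, the transition rules shown above can be applied from the CLI by saving the JSON as lifecycle.json:
aws s3api put-bucket-lifecycle-configuration \
--bucket vyral-uploads-production \
--lifecycle-configuration file://lifecycle.json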
Advanced Configuration
Multipart Upload for Large Files
const AWS = require('aws-sdk');
const fs = require('fs');
async function uploadLargeFile(filePath, bucketName, key) {
const s3 = new AWS.S3();
const fileStream = fs.createReadStream(filePath);
const uploadParams = {
Bucket: bucketName,
Key: key,
Body: fileStream,
ACL: 'public-read',
ServerSideEncryption: 'AES256',
StorageClass: 'STANDARD_IA'
};
// s3.upload() switches to multipart automatically once the body exceeds partSize (5MB minimum)
const options = {
partSize: 10 * 1024 * 1024, // 10 MB parts
queueSize: 4 // 4 concurrent uploads
};
try {
const data = await s3.upload(uploadParams, options)
.on('httpUploadProgress', (evt) => {
console.log(`Progress: ${parseInt((evt.loaded * 100) / evt.total)}%`);
})
.promise();
return data.Location;
} catch (err) {
console.error('Upload error:', err);
throw err;
}
}
Presigned URLs for Secure Direct Upload
// Generate presigned URL for client-side upload
async function getPresignedUploadUrl(fileName, contentType) {
const s3 = new AWS.S3();
const params = {
Bucket: 'vyral-uploads-production',
Key: `uploads/${Date.now()}-${fileName}`,
Expires: 300, // URL expires in 5 minutes
ContentType: contentType,
ACL: 'public-read'
};
try {
const url = await s3.getSignedUrlPromise('putObject', params);
return url;
} catch (err) {
console.error('Error generating presigned URL:', err);
throw err;
}
}
// Client-side upload
async function uploadToS3(file, presignedUrl) {
const response = await fetch(presignedUrl, {
method: 'PUT',
body: file,
headers: {
'Content-Type': file.type,
}
});
if (!response.ok) {
throw new Error('Upload failed');
}
return response;
}
Transfer Acceleration
Enable Transfer Acceleration
- Go to S3 bucket Properties tab
- Find "Transfer acceleration"
- Click "Edit"
- Select "Enabled"
- Save changes
Use Accelerated Endpoint
const s3 = new AWS.S3({
accessKeyId: process.env.AWS_ACCESS_KEY_ID,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,
region: 'us-east-1',
endpoint: 'https://vyral-uploads-production.s3-accelerate.amazonaws.com',
useAccelerateEndpoint: true
});
Cost Optimization Strategies
Storage Class Comparison
Storage Class | Use Case | Storage Cost/GB | Retrieval Cost | Minimum Duration |
---|---|---|---|---|
Standard | Frequently accessed (multiple times/day) | $0.023 | Free | None |
Standard-IA | Accessed monthly | $0.0125 | $0.01/GB | 30 days |
Intelligent-Tiering | Variable access patterns | $0.0125-0.023 | Free | None |
Glacier Instant | Quarterly access | $0.004 | $0.03/GB | 90 days |
Glacier Flexible | Yearly archives | $0.0036 | $0.01-0.03/GB | 90 days |
Glacier Deep Archive | Compliance archives | $0.00099 | $0.02/GB | 180 days |
Cost Reduction Tips
-
Use Intelligent-Tiering for Unknown Access Patterns
{ "Rules": [{ "Id": "IntelligentTieringRule", "Status": "Enabled", "Filter": {"Prefix": "uploads/"}, "Transitions": [{ "Days": 0, "StorageClass": "INTELLIGENT_TIERING" }] }] }
-
Delete Incomplete Multipart Uploads
# List incomplete uploads
aws s3api list-multipart-uploads --bucket vyral-uploads-production

# Abort old uploads
aws s3api abort-multipart-upload --bucket vyral-uploads-production \
--key "path/to/file" --upload-id "upload-id"
-
Enable S3 Request Metrics
- Monitor which objects are accessed frequently
- Identify candidates for archival
- Track API usage patterns
-
Use CloudFront for Frequently Accessed Content
- Reduces S3 GET requests
- Lower bandwidth costs
- Better global performance
Monitoring and Alerts
CloudWatch Metrics Setup
Enable S3 Metrics
- Go to S3 bucket → Metrics tab
- Click "View additional charts in CloudWatch"
- Enable request metrics
- Configure daily storage metrics
Create Billing Alert
aws cloudwatch put-metric-alarm \
--alarm-name s3-cost-alert \
--alarm-description "Alert when S3 costs exceed $100" \
--metric-name EstimatedCharges \
--namespace AWS/Billing \
--dimensions Name=Currency,Value=USD \
--statistic Maximum \
--period 86400 \
--threshold 100 \
--comparison-operator GreaterThanThreshold
Note: AWS publishes billing metrics only in us-east-1, so create this alarm in that region.
Monitor Key Metrics
- BucketSizeBytes: Total storage used
- NumberOfObjects: Total object count
- AllRequests: API request count
- 4xxErrors: Client error rate
- 5xxErrors: Server error rate
Security Best Practices
1. Enable Versioning for Critical Data
aws s3api put-bucket-versioning \
--bucket vyral-uploads-production \
--versioning-configuration Status=Enabled
2. Enable MFA Delete Protection
aws s3api put-bucket-versioning \
--bucket vyral-uploads-production \
--versioning-configuration Status=Enabled,MFADelete=Enabled \
--mfa "arn:aws:iam::account-id:mfa/root-account-mfa-device 123456"
3. Enable Access Logging
{
"LoggingEnabled": {
"TargetBucket": "vyral-logs-bucket",
"TargetPrefix": "s3-access-logs/"
}
}
4. Implement Least Privilege Access
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action": [
"s3:GetObject",
"s3:PutObject"
],
"Resource": "arn:aws:s3:::vyral-uploads-production/user-${aws:userid}/*"
}]
}
Troubleshooting Common Issues
403 Forbidden Error
Problem: Files upload but return 403 when accessed
Solutions:
- Check the bucket policy allows public read:
aws s3api get-bucket-policy --bucket vyral-uploads-production
- Verify ACLs are enabled on the bucket
- Ensure the object ACL is set to public-read during upload
- Check the IAM user has the s3:PutObjectAcl permission
- Verify Block Public Access is disabled
CORS Error in Browser
Problem: Browser blocks S3 uploads with CORS error
Solutions:
- Verify CORS configuration on bucket:
aws s3api get-bucket-cors --bucket vyral-uploads-production
- Ensure AllowedOrigins includes your domain
- Add required headers to AllowedHeaders
- Clear browser cache and retry
Slow Upload Speed
Problem: Large file uploads are very slow
Solutions:
- Enable Transfer Acceleration
- Use multipart upload for files > 100MB
- Upload to nearest AWS region
- Increase part size and concurrent uploads:
const options = {
  partSize: 20 * 1024 * 1024, // 20MB parts
  queueSize: 10 // 10 concurrent parts
};
AccessDenied on Policy Update
Problem: Cannot update bucket policy or CORS
Solutions:
- Ensure you're using root account or have admin privileges
- Check that your IAM identity has the s3:PutBucketPolicy permission
- Verify account isn't hitting service limits
- Try using AWS CLI instead of console:
aws s3api put-bucket-policy --bucket vyral-uploads-production \
--policy file://bucket-policy.json
High Unexpected Costs
Problem: S3 bill is higher than expected
Solutions:
- Check for incomplete multipart uploads:
aws s3api list-multipart-uploads --bucket vyral-uploads-production
- Review storage class distribution:
# Count objects per storage class
aws s3api list-objects-v2 --bucket vyral-uploads-production \
--query 'Contents[].StorageClass' --output text | tr '\t' '\n' | sort | uniq -c
- Enable lifecycle policies to move old data to cheaper storage
- Use CloudFront to reduce bandwidth costs
- Review CloudWatch metrics for unusual activity
Performance Optimization
Request Rate Guidelines
- S3 can handle 3,500 PUT/COPY/POST/DELETE requests per second per prefix
- 5,500 GET/HEAD requests per second per prefix
- Use random prefixes for better distribution:
// Good: random prefix distribution
const key = `${uuid.v4().substr(0, 2)}/${userId}/${fileName}`;

// Bad: sequential prefix
const key = `2024/01/15/${fileName}`;
Optimize for Your Use Case
Use Case | Optimization Strategy |
---|---|
User avatars | CloudFront + long cache headers |
Video streaming | CloudFront + byte-range requests |
Large uploads | Multipart + Transfer Acceleration |
Thumbnails | Lambda@Edge for on-the-fly generation |
Backups | Glacier Deep Archive + lifecycle rules |
Cloudflare R2 Setup
R2 offers S3-compatible storage with zero egress fees, making it extremely cost-effective for content-heavy applications.
Why Choose R2?
- Zero egress fees - No charges for bandwidth
- S3-compatible API - Easy migration from S3
- Global CDN included - Fast content delivery
- Simple pricing - $0.015/GB/month storage only
Step 1: Enable R2
Sign in to Cloudflare Dashboard
Navigate to R2 Object Storage
Click "Create bucket"
Configure bucket:
- Name: vyral-uploads
- Location: Automatic (global)
- Storage Class: Standard
Step 2: Configure Public Access
Go to bucket Settings → Public Access
Enable "Public Access"
Configure domain:
- Use the R2.dev subdomain: https://pub-xxx.r2.dev
- Or a custom domain: cdn.yourdomain.com
For a custom domain, add it in the bucket's Public Access settings; the domain must be on a zone in the same Cloudflare account, and Cloudflare creates the required DNS record automatically.
Step 3: Generate API Credentials
Go to R2 → Manage R2 API Tokens
Click "Create API Token"
Configure token:
- Name: vyral-r2-token
- Permissions: Object Read & Write
- Bucket: Select your bucket
- TTL: Unlimited
Save credentials:
- Access Key ID
- Secret Access Key
- Endpoint URL
Step 4: Configure in Admin Dashboard
{
"provider": "r2",
"r2Config": {
"accessKey": "...",
"secretKey": "...",
"bucket": "vyral-uploads",
"endpoint": "https://accountid.r2.cloudflarestorage.com",
"cdnUrl": "https://pub-xxx.r2.dev"
}
}
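Because R2 is S3-compatible, the aws-sdk examples earlier in this guide work against it by pointing the client at the R2 endpoint. A minimal sketch; the account ID and environment variable names are placeholders:
const AWS = require('aws-sdk');

const r2 = new AWS.S3({
  endpoint: 'https://accountid.r2.cloudflarestorage.com',
  accessKeyId: process.env.R2_ACCESS_KEY_ID,
  secretAccessKey: process.env.R2_SECRET_ACCESS_KEY,
  signatureVersion: 'v4',
  region: 'auto' // R2 ignores the region but the SDK requires one
});

// Uploads then behave exactly like the S3 examples above
r2.upload({ Bucket: 'vyral-uploads', Key: 'test.txt', Body: 'hello' })
  .promise()
  .then((data) => console.log('Stored at', data.Location))
  .catch(console.error);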
Advanced Features
Worker Integration
// Cloudflare Worker for image optimization
addEventListener('fetch', event => {
event.respondWith(handleRequest(event.request))
})
async function handleRequest(request) {
const url = new URL(request.url)
const options = {
cf: {
image: {
width: url.searchParams.get('w'),
height: url.searchParams.get('h'),
quality: url.searchParams.get('q') || 85,
format: 'auto'
}
}
}
return fetch(request, options)
}
Cache Rules
- Set up Page Rules for caching
- Configure Cache Rules for different file types
- Use Transform Rules for URL rewriting
Migration from S3
# Using rclone for migration
rclone copy s3:old-bucket r2:new-bucket \
--transfers 32 \
--checkers 16 \
--fast-list
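The command above assumes rclone remotes named s3 and r2 already exist. A sketch of the matching rclone.conf entries; the keys and account ID are placeholders:
[s3]
type = s3
provider = AWS
access_key_id = ...
secret_access_key = ...
region = us-east-1

[r2]
type = s3
provider = Cloudflare
access_key_id = ...
secret_access_key = ...
endpoint = https://accountid.r2.cloudflarestorage.com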
Vultr Object Storage Setup
Vultr Object Storage offers S3-compatible storage with predictable pricing and good global performance.
Pricing Advantages
- Fixed pricing: $5/month for 250GB
- Generous bandwidth: 1TB included
- No surprise bills: Predictable costs
- S3-compatible: Easy integration
Step 1: Create Object Storage
Sign in to Vultr Console
Go to Products → Object Storage
Click "Add Object Storage"
Select configuration:
- Cluster Location: Choose nearest region
- Label: vyral-storage
- Plan: Start with 250GB ($5/month)
Deploy the storage cluster
Step 2: Create Bucket
Access Object Storage dashboard
Click "Create Bucket"
Configure bucket:
- Name: vyral-uploads
- Access: Public read
- Versioning: Disabled
Step 3: Get Access Keys
Go to Object Storage → Access Keys
Click "Create Access Key"
Save credentials:
- Access Key
- Secret Key
- Cluster ID (e.g., ewr1)
Step 4: Configure Bucket Policy
{
"Version": "2012-10-17",
"Statement": [{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::vyral-uploads/*"
}]
}
Step 5: Configure in Admin Dashboard
{
"provider": "vultr",
"vultrConfig": {
"accessKey": "...",
"secretKey": "...",
"bucket": "vyral-uploads",
"clusterId": "ewr1",
"cdnUrl": "https://ewr1.vultrobjects.com/vyral-uploads"
}
}
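As with R2, Vultr's API is S3-compatible, so the aws-sdk can be pointed at the cluster endpoint. A minimal sketch; the environment variable names are assumptions:
const AWS = require('aws-sdk');

const vultr = new AWS.S3({
  endpoint: 'https://ewr1.vultrobjects.com', // match your cluster ID
  accessKeyId: process.env.VULTR_ACCESS_KEY,
  secretAccessKey: process.env.VULTR_SECRET_KEY
});

vultr.putObject({
  Bucket: 'vyral-uploads',
  Key: 'test.txt',
  Body: 'hello',
  ACL: 'public-read'
}).promise().catch(console.error);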
Performance Optimization
Enable Caching
# Nginx caching for Vultr Object Storage
location ~* \.(jpg|jpeg|png|gif|webp|svg|mp4|webm)$ {
proxy_pass https://ewr1.vultrobjects.com;
proxy_cache_valid 200 30d;
proxy_cache_bypass $http_cache_control;
add_header X-Cache-Status $upstream_cache_status;
expires 30d;
}
Multi-Region Setup
Deploy storage in multiple regions for better performance:
Region | Endpoint | Use Case |
---|---|---|
New Jersey | ewr1.vultrobjects.com | US East Coast |
Silicon Valley | sjc1.vultrobjects.com | US West Coast |
Amsterdam | ams1.vultrobjects.com | Europe |
Singapore | sgp1.vultrobjects.com | Asia Pacific |
Wasabi Storage Setup
Wasabi offers hot cloud storage at 1/5th the price of AWS S3 with no egress fees and faster performance.
Why Wasabi?
- 80% cheaper than S3: $6.99/TB/month
- No egress fees: Free data transfer
- No API charges: Unlimited requests
- Faster than S3: Parallel processing
- 11 nines durability: Enterprise-grade reliability
Step 1: Create Wasabi Account
Sign up at Wasabi Console
Complete account verification
Choose primary region:
- US East 1 (N. Virginia)
- US East 2 (N. Virginia)
- US Central 1 (Texas)
- US West 1 (Oregon)
- EU Central 1 (Amsterdam)
- AP Northeast 1 (Tokyo)
Step 2: Create Bucket
Go to Buckets → Create Bucket
Configure bucket:
- Bucket Name: vyral-uploads
- Region: Your chosen region
- Versioning: Disabled
- Logging: Optional
Set bucket policy for public access:
{
"Version": "2012-10-17",
"Statement": [{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::vyral-uploads/*"
}]
}
Step 3: Create Access Keys
Go to Access Keys → Create New Access Key
Configure key:
- Name: vyral-storage-key
- Type: Full Access (or custom policy)
Download or copy:
- Access Key
- Secret Key
Step 4: Configure in Admin Dashboard
{
"provider": "wasabi",
"wasabiConfig": {
"accessKey": "...",
"secretKey": "...",
"bucket": "vyral-uploads",
"region": "us-east-1",
"endpoint": "https://s3.wasabisys.com",
"cdnUrl": "https://s3.wasabisys.com/vyral-uploads"
}
}
Regional Endpoints
Region | Endpoint | Location |
---|---|---|
us-east-1 | s3.wasabisys.com | N. Virginia |
us-east-2 | s3.us-east-2.wasabisys.com | N. Virginia |
us-central-1 | s3.us-central-1.wasabisys.com | Texas |
us-west-1 | s3.us-west-1.wasabisys.com | Oregon |
eu-central-1 | s3.eu-central-1.wasabisys.com | Amsterdam |
ap-northeast-1 | s3.ap-northeast-1.wasabisys.com | Tokyo |
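If your bucket lives outside us-east-1, point both region and endpoint at the matching row above. For example, for Amsterdam:
{
  "provider": "wasabi",
  "wasabiConfig": {
    "accessKey": "...",
    "secretKey": "...",
    "bucket": "vyral-uploads",
    "region": "eu-central-1",
    "endpoint": "https://s3.eu-central-1.wasabisys.com",
    "cdnUrl": "https://s3.eu-central-1.wasabisys.com/vyral-uploads"
  }
}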
Performance Features
Parallel Upload
// Multipart upload for large files
// Assumes an s3 client already configured with the Wasabi endpoint and credentials
const uploadLargeFile = async (file) => {
  const partSize = 5 * 1024 * 1024; // 5MB chunks
  const parts = Math.ceil(file.size / partSize);

  // Initiate multipart upload
  const { UploadId } = await s3.createMultipartUpload({
    Bucket: 'vyral-uploads',
    Key: file.name
  }).promise();

  // Upload parts in parallel, keeping each part's ETag
  const uploadPromises = [];
  for (let i = 0; i < parts; i++) {
    const start = i * partSize;
    const end = Math.min(start + partSize, file.size);
    const blob = file.slice(start, end);
    uploadPromises.push(
      s3.uploadPart({
        Bucket: 'vyral-uploads',
        Key: file.name,
        PartNumber: i + 1,
        UploadId,
        Body: blob
      }).promise()
    );
  }
  const results = await Promise.all(uploadPromises);

  // Complete the upload so the parts are assembled into a single object
  await s3.completeMultipartUpload({
    Bucket: 'vyral-uploads',
    Key: file.name,
    UploadId,
    MultipartUpload: {
      Parts: results.map((part, i) => ({ ETag: part.ETag, PartNumber: i + 1 }))
    }
  }).promise();
};
Cost Comparison
Provider | Storage | Egress | API Calls | 1TB/month Total |
---|---|---|---|---|
Wasabi | $6.99 | $0 | $0 | $6.99 |
AWS S3 | $23.00 | $90.00 | $5.00 | $118.00 |
Azure | $20.00 | $87.00 | $5.00 | $112.00 |
Google Cloud Storage Setup
Google Cloud Storage offers excellent integration with Google's AI/ML services and competitive pricing with intelligent lifecycle management.
Storage Classes
Class | Use Case | Price/GB | Retrieval |
---|---|---|---|
Standard | Frequently accessed | $0.020 | Instant |
Nearline | Accessed once/month | $0.010 | Instant |
Coldline | Accessed once/quarter | $0.004 | Instant |
Archive | Yearly access | $0.0012 | Instant |
Step 1: Create GCS Project
Go to Google Cloud Console
Create new project or select existing
Enable Cloud Storage API
Enable billing for the project
Step 2: Create Storage Bucket
Navigate to Cloud Storage → Buckets
Click "Create Bucket"
Configure bucket:
- Name: vyral-uploads-prod
- Location Type: Multi-region (for redundancy)
- Location: US (or your preferred region)
- Storage Class: Standard
- Access Control: Uniform
- Encryption: Google-managed
Step 3: Configure Public Access
Go to bucket Permissions tab
Click "Grant Access"
Add member:
- New principals: allUsers
- Role: Storage Object Viewer
Confirm public access warning
Step 4: Create Service Account
Go to IAM & Admin → Service Accounts
Click "Create Service Account"
Configure account:
- Name: vyral-storage-service
- ID: vyral-storage-service
- Description: Vyral platform storage access
Grant roles:
- Storage Object Admin
- Storage Object Creator
Create JSON key:
- Click on service account
- Keys tab → Add Key → Create new key
- Choose JSON format
- Download and secure the key file
Step 5: Configure in Admin Dashboard
{
"provider": "gcs",
"gcsConfig": {
"projectId": "your-project-id",
"bucket": "vyral-uploads-prod",
"keyFile": "/path/to/service-account.json",
"credentialsJson": "{...}", // Or paste JSON contents
"cdnUrl": "https://storage.googleapis.com/vyral-uploads-prod"
}
}
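To verify the service account outside the dashboard, a minimal sketch with the official @google-cloud/storage client; the file paths are placeholders:
const { Storage } = require('@google-cloud/storage');

const storage = new Storage({
  keyFilename: '/path/to/service-account.json' // the JSON key downloaded above
});

async function testUpload() {
  // Upload a local file into the bucket created earlier
  const [file] = await storage.bucket('vyral-uploads-prod')
    .upload('./test.jpg', { destination: 'images/test.jpg' });
  console.log(`Uploaded ${file.name}`);
}

testUpload().catch(console.error);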
Advanced Features
Lifecycle Management
{
"lifecycle": {
"rule": [{
"action": {
"type": "SetStorageClass",
"storageClass": "NEARLINE"
},
"condition": {
"age": 30
}
}, {
"action": {
"type": "SetStorageClass",
"storageClass": "COLDLINE"
},
"condition": {
"age": 90
}
}, {
"action": {
"type": "Delete"
},
"condition": {
"age": 365
}
}]
}
}
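Saved as lifecycle.json, this configuration can be applied with gsutil:
gsutil lifecycle set lifecycle.json gs://vyral-uploads-prod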
Cloud CDN Integration
Create Load Balancer with Cloud CDN
Configure backend bucket
Enable Cloud CDN
Set cache modes and TTL
Image Processing with Cloud Functions
// Cloud Function for automatic image optimization
const { Storage } = require('@google-cloud/storage');
const sharp = require('sharp');

const storage = new Storage();

exports.optimizeImage = async (file, context) => {
const bucket = storage.bucket(file.bucket);
const fileName = file.name;
// Download original
const tempFilePath = `/tmp/${fileName}`;
await bucket.file(fileName).download({destination: tempFilePath});
// Optimize with Sharp
await sharp(tempFilePath)
.resize(1920, 1080, {fit: 'inside', withoutEnlargement: true})
.jpeg({quality: 85, progressive: true})
.toFile(`${tempFilePath}_optimized`);
// Upload optimized version
await bucket.upload(`${tempFilePath}_optimized`, {
destination: `optimized/${fileName}`
});
};
Azure Blob Storage Setup
Azure Blob Storage provides enterprise-grade storage with excellent integration into Microsoft's ecosystem and advanced security features.
Storage Tiers
Tier | Use Case | Price/GB | Access |
---|---|---|---|
Hot | Frequently accessed | $0.0184 | Instant |
Cool | Infrequent (30+ days) | $0.01 | Instant |
Archive | Rare access (180+ days) | $0.00099 | Hours |
Step 1: Create Storage Account
Sign in to Azure Portal
Create a resource → Storage → Storage account
Configure basics:
- Subscription: Select your subscription
- Resource group: Create new or select existing
- Storage account name: vyralstorageaccount
- Region: Choose closest to users
- Performance: Standard
- Redundancy: LRS (or higher for production)
Configure advanced:
- Security: Enable secure transfer
- Data Lake Storage Gen2: Disabled
- Blob access tier: Hot
- Azure Files: Not needed
Review and create
Step 2: Create Container
Go to Storage Account → Containers
Click "+ Container"
Configure container:
- Name: uploads
- Public access level: Blob (anonymous read)
Step 3: Get Access Keys
Go to Storage Account → Access keys
Copy:
- Storage account name
- Key1 or Key2
- Connection string
Step 4: Configure CORS (for browser uploads)
Go to Resource sharing (CORS)
Add CORS rule:
{
"allowedOrigins": ["*"],
"allowedMethods": ["GET", "POST", "PUT", "DELETE", "OPTIONS"],
"allowedHeaders": ["*"],
"exposedHeaders": ["*"],
"maxAgeInSeconds": 3600
}
Step 5: Configure in Admin Dashboard
{
"provider": "azure_blob",
"azureConfig": {
"accountName": "vyralstorageaccount",
"accountKey": "...",
"containerName": "uploads",
"cdnUrl": "https://vyralstorageaccount.blob.core.windows.net/uploads"
}
}
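To test the account key from Node, a minimal sketch with the official @azure/storage-blob package; the environment variable name is an assumption:
const { BlobServiceClient, StorageSharedKeyCredential } = require('@azure/storage-blob');

const accountName = 'vyralstorageaccount';
const credential = new StorageSharedKeyCredential(accountName, process.env.AZURE_ACCOUNT_KEY);
const service = new BlobServiceClient(`https://${accountName}.blob.core.windows.net`, credential);

async function testUpload() {
  const container = service.getContainerClient('uploads');
  // Upload a small blob to confirm the key and container are correct
  await container.getBlockBlobClient('test.txt').upload('hello', Buffer.byteLength('hello'));
  console.log('Upload succeeded');
}

testUpload().catch(console.error);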
Azure CDN Integration
Create CDN Profile:
- Go to CDN profiles → Create
- Choose pricing tier (Microsoft Standard recommended)
Create CDN Endpoint:
- Name: vyral-cdn
- Origin type: Storage
- Origin hostname: Select your storage account
Configure caching rules:
- Images: Cache for 7 days
- Videos: Cache for 30 days
- Documents: Cache for 1 day
Update CDN URL in config:
"cdnUrl": "https://vyral-cdn.azureedge.net/uploads"
Advanced Features
Lifecycle Management
{
"rules": [{
"name": "ArchiveOldFiles",
"enabled": true,
"type": "Lifecycle",
"definition": {
"filters": {
"blobTypes": ["blockBlob"],
"prefixMatch": ["uploads/"]
},
"actions": {
"baseBlob": {
"tierToCool": {
"daysAfterModificationGreaterThan": 30
},
"tierToArchive": {
"daysAfterModificationGreaterThan": 90
},
"delete": {
"daysAfterModificationGreaterThan": 365
}
}
}
}
}]
}
SAS Token Generation
// Generate SAS token for secure uploads
string GenerateSasToken(string containerName)
{
var sasBuilder = new BlobSasBuilder
{
BlobContainerName = containerName,
Resource = "c",
StartsOn = DateTimeOffset.UtcNow,
ExpiresOn = DateTimeOffset.UtcNow.AddHours(1)
};
sasBuilder.SetPermissions(BlobSasPermissions.Write | BlobSasPermissions.Create);
return sasBuilder.ToSasQueryParameters(
new StorageSharedKeyCredential(accountName, accountKey)
).ToString();
}
DigitalOcean Spaces Setup
DigitalOcean Spaces offers simple, affordable object storage with built-in CDN and S3 compatibility at a fixed price.
Pricing Structure
- Base Plan: $5/month
- 250GB storage
- 1TB bandwidth
- Unlimited uploads
- Additional Storage: $0.02/GB
- Additional Bandwidth: $0.01/GB
- CDN Included: No extra charge
Step 1: Create a Space
Sign in to DigitalOcean Control Panel
Navigate to Spaces
Click "Create a Space"
Configure Space:
- Region: Choose closest to users
- NYC3 (New York)
- SFO3 (San Francisco)
- AMS3 (Amsterdam)
- SGP1 (Singapore)
- FRA1 (Frankfurt)
- Enable CDN: Yes (recommended)
- Space name: vyral-uploads
- Project: Default or create new
Configure file listing:
- File Listing: Enable (for public access)
- CORS: Will configure later
Step 2: Generate Access Keys
Go to API → Spaces Keys
Click "Generate New Key"
Configure key:
- Name: vyral-spaces-key
- Expiry: Never
Save credentials:
- Access Key
- Secret Key
Step 3: Configure CORS
Go to your Space → Settings → CORS
Add CORS configuration:
<?xml version="1.0" encoding="UTF-8"?>
<CORSConfiguration>
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>POST</AllowedMethod>
<AllowedMethod>PUT</AllowedMethod>
<AllowedMethod>DELETE</AllowedMethod>
<AllowedHeader>*</AllowedHeader>
<MaxAgeSeconds>3000</MaxAgeSeconds>
</CORSRule>
</CORSConfiguration>
Step 4: Configure in Admin Dashboard
{
"provider": "digitalocean",
"doConfig": {
"accessKey": "...",
"secretKey": "...",
"spaceName": "vyral-uploads",
"region": "nyc3",
"cdnUrl": "https://vyral-uploads.nyc3.cdn.digitaloceanspaces.com"
}
}
CDN Endpoints
Region | Space URL | CDN URL |
---|---|---|
NYC3 | vyral-uploads.nyc3.digitaloceanspaces.com | vyral-uploads.nyc3.cdn.digitaloceanspaces.com |
SFO3 | vyral-uploads.sfo3.digitaloceanspaces.com | vyral-uploads.sfo3.cdn.digitaloceanspaces.com |
AMS3 | vyral-uploads.ams3.digitaloceanspaces.com | vyral-uploads.ams3.cdn.digitaloceanspaces.com |
SGP1 | vyral-uploads.sgp1.digitaloceanspaces.com | vyral-uploads.sgp1.cdn.digitaloceanspaces.com |
FRA1 | vyral-uploads.fra1.digitaloceanspaces.com | vyral-uploads.fra1.cdn.digitaloceanspaces.com |
Custom Domain Setup
Go to Space → Settings → CDN
Add custom subdomain: cdn.yourdomain.com
Add CNAME record to your DNS:
CNAME cdn.yourdomain.com vyral-uploads.nyc3.cdn.digitaloceanspaces.com
Enable SSL certificate (automatic via Let's Encrypt)
Optimization Tips
Lifecycle Rules
{
"Rules": [{
"ID": "DeleteOldFiles",
"Status": "Enabled",
"Expiration": {
"Days": 365
}
}, {
"ID": "DeleteIncompleteMultipartUploads",
"Status": "Enabled",
"AbortIncompleteMultipartUpload": {
"DaysAfterInitiation": 7
}
}]
}
Direct Upload from Browser
// Generate presigned URL for browser upload
const aws = require('aws-sdk');
const spacesEndpoint = new aws.Endpoint('nyc3.digitaloceanspaces.com');
const s3 = new aws.S3({
endpoint: spacesEndpoint,
accessKeyId: 'DO_SPACES_KEY',
secretAccessKey: 'DO_SPACES_SECRET'
});
const presignedUrl = s3.getSignedUrl('putObject', {
Bucket: 'vyral-uploads',
Key: 'path/to/file.jpg',
Expires: 300, // 5 minutes
ContentType: 'image/jpeg'
});
Local Storage Setup
Local storage is only recommended for development and testing. For production, use a cloud storage provider.
Configuration
{
"provider": "local",
"localConfig": {
"uploadDir": "./uploads",
"publicUrl": "/uploads"
}
}
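With this configuration, the application has to serve uploadDir itself. A minimal sketch of how that might be wired with Express and multer, both assumed to be installed:
const express = require('express');
const multer = require('multer');
const path = require('path');

const app = express();
const upload = multer({ dest: path.join(__dirname, 'uploads') });

// Serve uploaded files at the publicUrl configured above
app.use('/uploads', express.static(path.join(__dirname, 'uploads')));

app.post('/api/upload', upload.single('file'), (req, res) => {
  res.json({ url: `/uploads/${req.file.filename}` });
});

app.listen(3000);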
Directory Structure
uploads/
├── images/
│ ├── profiles/
│ ├── posts/
│ └── thumbnails/
├── videos/
│ ├── original/
│ └── processed/
├── audio/
└── documents/
Nginx Configuration
server {
location /uploads {
alias /var/www/vyral/uploads;
expires 30d;
add_header Cache-Control "public, immutable";
# Security headers
add_header X-Content-Type-Options nosniff;
add_header X-Frame-Options DENY;
# CORS headers
add_header Access-Control-Allow-Origin *;
# Gzip compression
gzip on;
gzip_types image/svg+xml application/javascript text/css;
}
}
Backup Strategy
#!/bin/bash
# Daily backup script
BACKUP_DIR="/backups/vyral"
UPLOAD_DIR="/var/www/vyral/uploads"
DATE=$(date +%Y%m%d)
# Create backup
tar -czf "$BACKUP_DIR/uploads_$DATE.tar.gz" "$UPLOAD_DIR"
# Upload to S3 (optional)
aws s3 cp "$BACKUP_DIR/uploads_$DATE.tar.gz" s3://vyral-backups/
# Remove old backups (keep 30 days)
find "$BACKUP_DIR" -name "uploads_*.tar.gz" -mtime +30 -delete
File Upload Configuration
Size Limits
Configure maximum file sizes for different types:
// config/upload.js
module.exports = {
limits: {
image: 10 * 1024 * 1024, // 10MB
video: 500 * 1024 * 1024, // 500MB
audio: 50 * 1024 * 1024, // 50MB
document: 25 * 1024 * 1024, // 25MB
avatar: 5 * 1024 * 1024, // 5MB
thumbnail: 2 * 1024 * 1024, // 2MB
},
};
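These limits can be enforced at the HTTP layer before a file ever reaches storage. A sketch using multer, assuming the config file above:
const multer = require('multer');
const { limits } = require('./config/upload');

// Rejects any upload over the configured image limit with a
// MulterError('LIMIT_FILE_SIZE') before it reaches storage
const imageUpload = multer({
  limits: { fileSize: limits.image }
});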
Allowed File Types
// config/mimetypes.js
module.exports = {
image: [
"image/jpeg",
"image/png",
"image/gif",
"image/webp",
"image/svg+xml",
],
video: ["video/mp4", "video/webm", "video/quicktime", "video/x-msvideo"],
audio: ["audio/mpeg", "audio/wav", "audio/webm", "audio/aac", "audio/ogg"],
document: [
"application/pdf",
"application/msword",
"application/vnd.openxmlformats-officedocument.wordprocessingml.document",
"text/plain",
],
};
Security Best Practices
1. File Validation
// Validate file before upload (async because the optional virus scan awaits)
const validateFile = async (file) => {
  // Check file size
  if (file.size > config.limits[file.type]) {
    throw new Error('File too large');
  }
  // Check MIME type
  if (!config.mimetypes[file.type].includes(file.mimetype)) {
    throw new Error('Invalid file type');
  }
  // Check file extension
  const ext = path.extname(file.name).toLowerCase();
  if (!config.allowedExtensions.includes(ext)) {
    throw new Error('Invalid file extension');
  }
  // Scan for malware (optional)
  if (config.enableVirusScan) {
    await scanFile(file);
  }
  return true;
};
2. Access Control
// Generate signed URLs for private files
const generateSignedUrl = (key, expiresIn = 3600) => {
const params = {
Bucket: config.bucket,
Key: key,
Expires: expiresIn
};
return s3.getSignedUrlPromise('getObject', params);
};
// Restrict access by user
const checkFileAccess = async (userId, fileId) => {
  const file = await File.findById(fileId);
  if (file.isPublic) return true;
  if (file.ownerId === userId) return true;
  if (file.sharedWith.includes(userId)) return true;
  throw new Error('Access denied');
};
3. Content Security
# Nginx security headers for uploads
location /uploads {
# Prevent XSS
add_header X-Content-Type-Options nosniff;
add_header Content-Security-Policy "default-src 'none'; img-src 'self'; media-src 'self'";
# Prevent clickjacking
add_header X-Frame-Options DENY;
# Force download for certain types
location ~* \.(html|htm|js|json|xml)$ {
add_header Content-Disposition "attachment";
}
}
Performance Optimization
Image Optimization
// Automatic image optimization on upload
const sharp = require("sharp");
const optimizeImage = async (inputPath, outputPath) => {
await sharp(inputPath)
.resize(2048, 2048, {
fit: "inside",
withoutEnlargement: true,
})
.jpeg({
quality: 85,
progressive: true,
})
.toFile(outputPath);
};
// Generate responsive thumbnails
const generateThumbnails = async (imagePath) => {
const sizes = [
{ width: 150, height: 150, suffix: "thumb" },
{ width: 320, height: 320, suffix: "small" },
{ width: 640, height: 640, suffix: "medium" },
{ width: 1280, height: 1280, suffix: "large" },
];
for (const size of sizes) {
await sharp(imagePath)
.resize(size.width, size.height, {
fit: "cover",
position: "center",
})
.toFile(`${imagePath}_${size.suffix}.jpg`);
}
};
Video Processing
// Video thumbnail generation
const ffmpeg = require("fluent-ffmpeg");
const generateVideoThumbnail = (videoPath, outputPath) => {
return new Promise((resolve, reject) => {
ffmpeg(videoPath)
.screenshots({
timestamps: ["10%"],
filename: "thumbnail.jpg",
folder: outputPath,
size: "640x360",
})
.on("end", resolve)
.on("error", reject);
});
};
// Video compression
const compressVideo = (inputPath, outputPath) => {
return new Promise((resolve, reject) => {
ffmpeg(inputPath)
.outputOptions([
"-c:v libx264",
"-crf 23",
"-preset medium",
"-c:a aac",
"-b:a 128k",
])
.output(outputPath)
.on("end", resolve)
.on("error", reject)
.run();
});
};
CDN Integration
// CDN URL generation
const getCdnUrl = (path) => {
if (!config.cdnUrl) {
return `${config.baseUrl}${path}`;
}
// Add image transformation parameters
const transforms = new URLSearchParams({
w: 800,
h: 600,
fit: "cover",
q: 85,
});
return `${config.cdnUrl}/${path}?${transforms}`;
};
Monitoring and Analytics
Storage Metrics
Monitor these key metrics:
Metric | Description | Alert Threshold |
---|---|---|
Storage Used | Total storage consumption | > 80% of limit |
Bandwidth Used | Monthly bandwidth usage | > 90% of limit |
Request Rate | API requests per second | > 1000/sec |
Error Rate | Failed upload percentage | > 5% |
Response Time | Average upload duration | > 5 seconds |
Cost Monitoring
// Track storage costs
const calculateMonthlyCost = async () => {
const metrics = await getStorageMetrics();
const costs = {
storage: metrics.totalSize * config.pricePerGB,
bandwidth: metrics.bandwidth * config.pricePerGB,
requests: metrics.requests * config.pricePerRequest,
total: 0,
};
costs.total = costs.storage + costs.bandwidth + costs.requests;
return costs;
};
// Alert on cost overruns
const costs = await calculateMonthlyCost();
if (costs.total > config.budgetLimit) {
  sendAlert("Storage costs exceeding budget", costs);
}
Migration Guide
Migrating Between Providers
Prepare New Provider
- Set up account and bucket
- Configure access permissions
- Test upload functionality
Export File List
SELECT file_path, file_size, created_at
FROM files
WHERE storage_provider = 'old_provider'
ORDER BY created_at DESC;
Transfer Files
# Using rclone for bulk transfer
rclone copy old-provider:bucket new-provider:bucket \
--transfers 32 \
--checkers 16 \
--progress \
--fast-list
Update Database
UPDATE files
SET storage_provider = 'new_provider',
file_url = REPLACE(file_url, 'old-domain', 'new-domain')
WHERE storage_provider = 'old_provider';
Verify Migration
- Test random sample of files
- Check file accessibility
- Verify CDN propagation
- Monitor error logs
Rollback Plan
// Dual-write during migration
const uploadFile = async (file) => {
try {
// Upload to new provider
const newUrl = await uploadToNewProvider(file);
// Also upload to old provider (backup)
const oldUrl = await uploadToOldProvider(file);
// Store both URLs
await saveFileRecord({
primaryUrl: newUrl,
backupUrl: oldUrl,
migrationStatus: "dual-write",
});
} catch (error) {
// Fallback to old provider
return uploadToOldProvider(file);
}
};
Troubleshooting
Common Issues
Upload Failed
Symptoms: Files fail to upload with 403 or 500 errors
Solutions:
- Check storage provider credentials
- Verify bucket permissions
- Ensure CORS is configured
- Check file size limits
- Review error logs for details
Files Not Accessible
Symptoms: Uploaded files return 404 or 403
Solutions:
- Verify public access settings
- Check CDN configuration
- Ensure correct URL format
- Clear CDN cache
- Check bucket policy
Slow Upload Speed
Symptoms: Uploads take too long
Solutions:
- Use multipart upload for large files
- Choose closer region
- Enable transfer acceleration
- Optimize file size before upload
- Use parallel uploads
High Storage Costs
Symptoms: Unexpected billing charges
Solutions:
- Implement lifecycle policies
- Delete unused files
- Compress files before storage
- Use appropriate storage class
- Monitor bandwidth usage
Next Steps
Storage Configuration Complete!
Your file storage is now configured. Continue with:
Test Your Configuration
- Upload test files of various types
- Verify CDN delivery
- Check access permissions
Set Up Monitoring
- Configure usage alerts
- Set up cost monitoring
- Enable access logs
Configure Other Services
- Email Configuration - Transactional emails
- Branding - Platform customization
- Localization - Multi-language support
- CoTURN Server - Video calling