Anuzoo Server Documentation
Complete guide to deploying and managing Anuzoo production infrastructure on Vultr
🏗️ Recommended Architecture
📱 Application Server
🗄️ Database Server
📚 Documentation Structure
🎯 Getting Started
- Prerequisites & API keys
- Vultr account setup
- Server provisioning
- Quick deploy guide
🏗️ Infrastructure
- Deployment options comparison
- 2-server setup guide
- Multi-app hosting
- Cost analysis
🚀 Deployment
- Step-by-step deployment
- Migration strategies
- Rollback procedures
- Troubleshooting
📊 Operations
- Monitoring & alerts
- Backup strategies
- Performance tuning
- Security audits
🚦 Quick Navigation
Start Here: Getting Started
New to deployment? Begin with the Getting Started guide to gather prerequisites and provision servers.
Choose Your Architecture
Review Deployment Options to select the right infrastructure for your needs (single server vs multi-server).
Deploy Your Application
Follow the Quick Deploy or 2-Server Setup guide for step-by-step deployment instructions.
Configure Operations
Set up monitoring, backups, and streaming features for production-ready infrastructure.
Getting Started - Anuzoo Production Deployment
Goal: Deploy Anuzoo to production on Vultr's 2-server infrastructure
Time: 4-5 hours
Cost: $158.40/month
---
🎯 Quick Start Checklist
Before you begin, complete these tasks:
✅ Pre-Deployment Checklist
- [ ] 1. Review deployment options - Confirm 2-server is right choice
- [ ] 2. Gather API keys - Stripe, Pinecone, OpenAI, SendGrid
- [ ] 3. Create Vultr account - Sign up and add payment method
- [ ] 4. Prepare domain - Have domain ready or purchase one
- [ ] 5. Generate SSH keys - For secure server access
- [ ] 6. Local repository ready - Code tested and committed
Estimated time: 1-2 hours
---
📖 Step-by-Step Guide
Step 1: Gather Required API Keys (30 minutes)
You'll need these API keys before deployment. Get them now:
1.1 Stripe (Payment Processing) - REQUIRED
1. Go to https://dashboard.stripe.com
2. Create account (or sign in)
3. Get Test Keys (for initial deployment):
   - Click "Developers" → "API keys"
   - Copy Publishable key: `pk_test_...`
   - Copy Secret key: `sk_test_...`
   - Click "Developers" → "Webhooks"
   - Click "Add endpoint"
   - URL: `https://yourdomain.com/api/webhooks/stripe` (you'll update this later)
   - Select events: `checkout.session.completed`, `payment_intent.succeeded`
   - Copy Webhook signing secret: `whsec_...`
Save these in a secure location (password manager).
1.2 Pinecone (AI Vector Database) - REQUIRED
1. Go to https://app.pinecone.io
2. Sign up for free account
3. Click "API Keys" → Copy your API key
4. Note your environment (e.g., us-west1-gcp)
5. Create Index:
- Click "Indexes" → "Create Index"
- Name: `anuzoo-embeddings`
- Dimensions: `512`
- Metric: `cosine`
- Click "Create Index"
Save API key and environment.
1.3 OpenAI (AI Embeddings) - REQUIRED
1. Go to https://platform.openai.com
2. Sign up and add payment method (pay-as-you-go)
3. Click "API keys" → "Create new secret key"
4. Name it "Anuzoo Production"
5. Copy the key (shown only once): sk-...
Save immediately - cannot retrieve later.
1.4 SendGrid (Email Service) - REQUIRED
1. Go to https://app.sendgrid.com
2. Sign up for free account (100 emails/day free)
3. Click "Settings" → "API Keys" → "Create API Key"
4. Name: "Anuzoo Production"
5. Permissions: "Full Access"
6. Copy API key: SG....
7. Sender Authentication:
- Click "Settings" → "Sender Authentication"
- Choose "Domain Authentication" (recommended) or "Single Sender"
- Follow verification steps
Save API key.
1.5 Generate Security Secrets
Run these commands on your local machine:
```bash
# SECRET_KEY
openssl rand -hex 32

# JWT_SECRET_KEY
openssl rand -hex 32

# SESSION_SECRET_KEY
openssl rand -hex 32
```
Save all three outputs.
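If you prefer to capture all three in one pass, a small loop like the following works; the `secrets.env` filename is just an example (a password manager is equally fine):

```shell
# Generate all three secrets and store them in a local file.
# File name/location are examples only - keep this out of version control.
for name in SECRET_KEY JWT_SECRET_KEY SESSION_SECRET_KEY; do
  echo "${name}=$(openssl rand -hex 32)"
done > secrets.env

# Restrict access to the current user only
chmod 600 secrets.env
cat secrets.env
```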
---
Step 2: Create Vultr Account (15 minutes)
2.1 Sign Up
1. Go to https://vultr.com
2. Click "Create Account"
3. Enter email and password
4. Verify email address
2.2 Add Payment Method
1. Click "Billing" → "Payment Methods"
2. Add credit card or PayPal
3. Optional: Add promotional code if you have one
2.3 Generate SSH Key (if you don't have one)
On Windows (PowerShell):

```powershell
# Generate SSH key
ssh-keygen -t ed25519 -C "your-email@example.com"
# Save to default location: C:\Users\YourName\.ssh\id_ed25519
# Set a passphrase (recommended)

# Copy public key to clipboard
Get-Content $env:USERPROFILE\.ssh\id_ed25519.pub | Set-Clipboard
```
On Mac/Linux:

```bash
# Generate SSH key
ssh-keygen -t ed25519 -C "your-email@example.com"

# Copy public key to clipboard
cat ~/.ssh/id_ed25519.pub | pbcopy                        # Mac
cat ~/.ssh/id_ed25519.pub | xclip -selection clipboard    # Linux
```
2.4 Add SSH Key to Vultr
1. In Vultr dashboard, click "Account" → "SSH Keys"
2. Click "Add SSH Key"
3. Paste your public key (from clipboard)
4. Name: "My Laptop" or "Work Computer"
5. Click "Add SSH Key"
---
Step 3: Provision Database Server (15 minutes)
3.1 Deploy Database Server
1. In Vultr Dashboard, click "Deploy +" → "Deploy New Server"
2. Choose Server Type:
- Select: Cloud Compute
3. Choose Location:
- Recommended: Ashburn, VA (IAD) or closest to your users
- Note: Both servers MUST be in same location for private networking
4. Choose OS:
- Select: Ubuntu 22.04 LTS x64
5. Choose Plan:
- Click "High Performance AMD" tab
- Select: 4 vCPU, 8GB RAM, 160GB SSD - $48/month
6. Additional Features:
- ✅ Enable Auto Backups (+$4.80/month)
- ✅ Enable DDOS Protection (free)
- ✅ Enable Private Networking (free) ⭐ IMPORTANT
- ✅ Enable IPv6 (free)
7. Server Settings:
- Hostname: `anuzoo-db-01`
- Label: `Anuzoo Database Server`
- SSH Key: Select the key you added earlier
8. Click "Deploy Now"
9. Wait 60-90 seconds for provisioning
10. Note the IP addresses:
- Public IP: ___________________ (for SSH access)
- Private IP: ___________________ (for database connections)
- Save these - you'll need them!
3.2 Verify Server Access
```bash
# SSH into database server (replace with YOUR public IP)
ssh root@YOUR_DB_SERVER_PUBLIC_IP

# You should see the Ubuntu welcome message
# Type 'exit' to disconnect for now
```
If SSH fails: Check firewall settings, wait a few more minutes, or verify SSH key.
---
Step 4: Provision Application Server (15 minutes)
4.1 Deploy Application Server
Repeat same process as database server with different specs:
1. In Vultr Dashboard, click "Deploy +" → "Deploy New Server" 2. Server Type: Cloud Compute 3. Location: SAME as database server (e.g., Ashburn, VA) ⭐ CRITICAL 4. OS: Ubuntu 22.04 LTS x64 5. Plan: High Performance AMD → 6 vCPU, 16GB RAM, 320GB SSD - $96/month 6. Additional Features:
- ✅ Enable Auto Backups (+$9.60/month)
- ✅ Enable DDOS Protection (free)
- ✅ Enable Private Networking (free) ⭐ IMPORTANT
- ✅ Enable IPv6 (free)
7. Server Settings:
- Hostname: `anuzoo-app-01`
- Label: `Anuzoo Application Server`
- SSH Key: Same key as before
8. Note the IP addresses:
- Public IP: ___________________ (for SSH and web traffic)
- Private IP: ___________________ (for connecting to database)
4.2 Verify Server Access
```bash
# SSH into application server
ssh root@YOUR_APP_SERVER_PUBLIC_IP

# Type 'exit' to disconnect
```
---
Step 5: Verify Private Networking (5 minutes)
Test that servers can communicate via private network:
5.1 From Application Server, Ping Database Server
```bash
# SSH into application server
ssh root@YOUR_APP_SERVER_PUBLIC_IP

# Ping database server's PRIVATE IP (10.x.x.x)
ping -c 3 DATABASE_SERVER_PRIVATE_IP

# Should see replies like:
# 64 bytes from 10.x.x.x: icmp_seq=1 ttl=64 time=0.5 ms

# Exit
exit
```
✅ Success: If you get replies, private networking is working!
❌ Failed: If "Destination Host Unreachable", check:
- Both servers in same datacenter?
- Private networking enabled on both?
- Wait 5 minutes and try again (takes time to provision)
---
Step 6: Configure DNS (Optional - Can do later)
If you have a domain, configure DNS now:
1. Log into your domain registrar (Namecheap, GoDaddy, etc.)
2. Go to DNS settings
3. Add these A records:

| Type | Name | Value | TTL |
|------|------|-------|-----|
| A | @ | YOUR_APP_SERVER_PUBLIC_IP | 600 |
| A | www | YOUR_APP_SERVER_PUBLIC_IP | 600 |
| A | api | YOUR_APP_SERVER_PUBLIC_IP | 600 |
Wait 5-30 minutes for DNS propagation.
Test DNS:

```bash
nslookup yourdomain.com
# Should show your server's public IP
```
Don't have a domain yet? No problem! You can:
- Deploy using IP address initially
- Add domain later
- Use Vultr's temporary URL for testing
---
✅ Preparation Complete!
You now have:
- ✅ All API keys gathered
- ✅ Vultr account created
- ✅ Database server provisioned
- ✅ Application server provisioned
- ✅ Private networking verified
- ✅ DNS configured (optional)
Server Summary:
| Server | Type | IP | Private IP | Cost |
|--------|------|-----|-----------|------|
| Database | 4 vCPU, 8GB RAM | ______ | ______ | $52.80/mo |
| Application | 6 vCPU, 16GB RAM | ______ | ______ | $105.60/mo |
| Total | | | | $158.40/mo |
---
🚀 Next Steps: Deploy the Application
Now you're ready to deploy! Choose your path:
Option A: Automated Deployment (Recommended for beginners)
Use the automated deployment script:
```bash
# SSH into application server
ssh root@YOUR_APP_SERVER_PUBLIC_IP

# Download deployment script
curl -o deploy.sh https://raw.githubusercontent.com/yourusername/anuzoo/main/scripts/setup-production.sh

# Make executable
chmod +x deploy.sh

# Run automated setup
./deploy.sh
```

The script will:
1. Install Docker and dependencies
2. Clone repository
3. Configure environment variables (will prompt for API keys)
4. Deploy database server
5. Deploy application server
6. Configure NGINX
7. Setup SSL certificates
8. Configure monitoring
Time: 30 minutes (mostly automated)
Option B: Manual Deployment (Recommended for learning)
Follow the detailed step-by-step guide:
See MANUAL_DEPLOYMENT.md for complete walkthrough of every command.
Time: 2-3 hours (you control every step)
Option C: Guided Interactive Deployment
Use the interactive deployment wizard:
```bash
# SSH into application server
ssh root@YOUR_APP_SERVER_PUBLIC_IP

# Download and run wizard
curl -o wizard.sh https://raw.githubusercontent.com/yourusername/anuzoo/main/scripts/deployment-wizard.sh
chmod +x wizard.sh
./wizard.sh
```
The wizard will:
- Ask questions about your setup
- Validate inputs
- Show you commands before running
- Provide clear progress indicators
Time: 1-2 hours (guided but manual)
---
📚 Reference Documentation
| Document | Purpose |
|----------|---------|
| MULTI_SERVER_SETUP.md | Complete 2-server setup guide |
| DEPLOYMENT_OPTIONS.md | Compare all deployment architectures |
| VULTR_ACCOUNT_SETUP.md | Detailed Vultr account setup |
| DEPLOYMENT_CHECKLIST.md | Step-by-step deployment checklist |
| MULTI_APP_HOSTING.md | Host multiple apps on same infrastructure |
| MIGRATION_SINGLE_TO_MULTI.md | Migrate from single to 2-server |
---
💡 Quick Tips
First time deploying to production?
- Choose Option A: Automated Deployment - fastest and easiest
- Keep this guide open in browser while deploying
- Have API keys ready in text file for copy/paste
- Deploy during low-traffic hours (just in case)
Experienced with servers?
- Choose Option B: Manual Deployment - full control
- Customize as needed for your setup
- Review each command before running
Want to learn and understand?
- Choose Option C: Interactive Wizard - best of both worlds
- See what's happening at each step
- Easy to troubleshoot if issues arise
---
🆘 Need Help?
Common Issues:
1. Can't SSH into server
- Wait 5 minutes (server still provisioning)
- Check SSH key was added correctly
- Try password from Vultr dashboard
2. Private networking not working
- Both servers in same datacenter?
- Private networking enabled on both?
- Wait 10 minutes and try again
3. Don't have all API keys yet
- You can deploy infrastructure first
- Add API keys later via environment variables
- Some features won't work until keys added
4. Domain not resolving
- DNS takes 5-30 minutes to propagate
- Can deploy using IP address temporarily
- Add domain/SSL later
Documentation:
- Full setup guide: MULTI_SERVER_SETUP.md
- Troubleshooting: See "Troubleshooting" sections in guides
- Rollback: MIGRATION_SINGLE_TO_MULTI.md has rollback procedures
---
📊 What You'll Have After Deployment
Infrastructure:
- ✅ 2 Vultr servers (database + application)
- ✅ Private networking between servers
- ✅ Automated backups (daily)
- ✅ SSL/HTTPS encryption
- ✅ NGINX reverse proxy
- ✅ Docker containerization
- ✅ Monitoring and health checks
Services Running:
- ✅ PostgreSQL database (dedicated server)
- ✅ Redis cache
- ✅ FastAPI backend
- ✅ AI detection models
- ✅ React frontend
Capacity:
- 👥 30,000-80,000 users
- 📊 Up to 120GB database
- 🚀 80 requests/minute per user
- 💾 Up to 250GB uploads
Cost:
- 💰 $158.40/month
- 💾 Includes automated backups
- 🔄 Can scale up or down anytime
---
🎉 Ready to Deploy?
You have everything you need! Choose your deployment method:
- Quick & Easy: Follow Option A above
- Full Control: See MULTI_SERVER_SETUP.md
- Complete Checklist: See DEPLOYMENT_CHECKLIST.md
Good luck with your deployment! 🚀
Quick Deploy Guide - 2-Server Production Setup
Prerequisites: Servers provisioned, API keys gathered
Time: 1-2 hours
Difficulty: Beginner-friendly
---
🚀 Deployment Steps
Part 1: Database Server Setup (30 minutes)
1. SSH into Database Server
ssh root@YOUR_DB_SERVER_PUBLIC_IP
2. Update System
```bash
# Update package list
apt-get update && apt-get upgrade -y

# Install essential tools
apt-get install -y curl wget git vim htop net-tools
```
3. Install Docker
```bash
# Install Docker (official installation script)
curl -fsSL https://get.docker.com | sh

# Verify Docker installed
docker --version
# Should show: Docker version 24.x.x

# Start Docker
systemctl enable docker
systemctl start docker
```
4. Clone Repository
```bash
# Go to root directory
cd /root

# Clone repository (replace with your repo URL)
git clone https://github.com/YOUR_USERNAME/anuzoo.git

# Or if private repo, use token:
git clone https://YOUR_GITHUB_TOKEN@github.com/YOUR_USERNAME/anuzoo.git

# Enter directory
cd anuzoo
```
5. Configure Environment
```bash
# Create production environment file
nano .env.production
```
Paste this content (update with YOUR values):
```bash
# Database Configuration
POSTGRES_USER=anuzoo_user
POSTGRES_PASSWORD=CHANGE_THIS_TO_STRONG_PASSWORD
POSTGRES_DB=anuzoo_prod

# Performance Settings (optimized for 8GB RAM server)
POSTGRES_SHARED_BUFFERS=2GB
POSTGRES_EFFECTIVE_CACHE_SIZE=6GB
POSTGRES_MAINTENANCE_WORK_MEM=512MB
POSTGRES_WORK_MEM=64MB
POSTGRES_MAX_WORKER_PROCESSES=4
```
Save: Press Ctrl+X, then Y, then Enter
Important: Change POSTGRES_PASSWORD to a strong password!
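One quick way to produce a strong value for POSTGRES_PASSWORD (assuming `openssl` is available, which the earlier secret-generation step already relies on):

```shell
# 24 random bytes, base64-encoded: a 32-character password
openssl rand -base64 24
```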
6. Create Data Directory
```bash
# Create directory for PostgreSQL data
mkdir -p /var/lib/anuzoo/postgres

# Set permissions
chmod 700 /var/lib/anuzoo/postgres
```
7. Mark Server Type
```bash
# Mark this as database server
echo "db" > /etc/anuzoo-server-type
```
8. Start Database
```bash
# Make deployment script executable
chmod +x scripts/deploy-multi-server.sh

# Start database server
./scripts/deploy-multi-server.sh db start

# Wait for startup (30 seconds)
sleep 30
```
9. Verify Database Health
```bash
# Check health
./scripts/deploy-multi-server.sh db health

# Should see:
# ✓ PostgreSQL is healthy
# Active database connections: X
```
✅ Database Server Complete!
---
Part 2: Application Server Setup (45 minutes)
1. SSH into Application Server (NEW TERMINAL)
ssh root@YOUR_APP_SERVER_PUBLIC_IP
2. Update System
```bash
# Update packages
apt-get update && apt-get upgrade -y

# Install tools
apt-get install -y curl wget git vim htop net-tools nginx certbot python3-certbot-nginx
```
3. Install Docker
```bash
# Install Docker
curl -fsSL https://get.docker.com | sh

# Verify
docker --version

# Start Docker
systemctl enable docker
systemctl start docker
```
4. Clone Repository
```bash
cd /root
git clone https://github.com/YOUR_USERNAME/anuzoo.git
cd anuzoo
```
5. Configure Environment
```bash
nano .env.production
```
Paste this (update ALL values with YOUR credentials):
```bash
# Database Connection (connects to database server via private network)
DATABASE_URL=postgresql+asyncpg://anuzoo_user:YOUR_DB_PASSWORD@DB_SERVER_PRIVATE_IP:5432/anuzoo_prod
DB_SERVER_PRIVATE_IP=YOUR_DB_SERVER_PRIVATE_IP

# Redis
REDIS_URL=redis://redis:6379/0

# API
API_HOST=0.0.0.0
API_PORT=8001
API_WORKERS=4
ENVIRONMENT=production

# Security (use the secrets you generated earlier)
SECRET_KEY=YOUR_SECRET_KEY_HERE
JWT_SECRET_KEY=YOUR_JWT_SECRET_KEY_HERE
SESSION_SECRET_KEY=YOUR_SESSION_SECRET_KEY_HERE
ALLOWED_ORIGINS=https://yourdomain.com,https://www.yourdomain.com

# Stripe
STRIPE_SECRET_KEY=sk_test_YOUR_STRIPE_SECRET_KEY
STRIPE_PUBLISHABLE_KEY=pk_test_YOUR_STRIPE_PUBLISHABLE_KEY
STRIPE_WEBHOOK_SECRET=whsec_YOUR_WEBHOOK_SECRET

# Pinecone
PINECONE_API_KEY=YOUR_PINECONE_API_KEY
PINECONE_ENVIRONMENT=YOUR_PINECONE_ENVIRONMENT
PINECONE_INDEX_NAME=anuzoo-embeddings

# OpenAI
OPENAI_API_KEY=sk-YOUR_OPENAI_API_KEY

# SendGrid
SMTP_HOST=smtp.sendgrid.net
SMTP_PORT=587
SMTP_USER=apikey
SMTP_PASSWORD=YOUR_SENDGRID_API_KEY
SMTP_FROM_EMAIL=noreply@yourdomain.com

# Frontend
FRONTEND_URL=https://yourdomain.com
```
Save: Ctrl+X, Y, Enter
IMPORTANT: Replace ALL placeholder values!
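Placeholders are easy to miss in a file this long. A grep for the placeholder patterns used above catches leftovers; the demo file below is only for illustration - run the grep against your real `.env.production`:

```shell
# Demo file standing in for .env.production (illustration only)
cat > /tmp/env.demo <<'EOF'
SECRET_KEY=YOUR_SECRET_KEY_HERE
OPENAI_API_KEY=sk-abc123
EOF

# List any lines still carrying placeholder values
# (no output and exit status 1 means you replaced everything)
grep -nE 'YOUR_|CHANGE_THIS' /tmp/env.demo
```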
6. Test Database Connection
```bash
# Test connection to database server
ping -c 3 $DB_SERVER_PRIVATE_IP
# Should get replies

# Test PostgreSQL port
nc -zv $DB_SERVER_PRIVATE_IP 5432
# Should see: Connection succeeded
```
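If `nc` isn't installed, bash can perform the same TCP probe on its own via its `/dev/tcp` pseudo-device (a bash-only feature; the host and port in the example are deliberately a closed port, just to show the failure path):

```shell
# Return 0 if host:port accepts a TCP connection within 3 seconds
check_port() {
  timeout 3 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

# Example usage - on the server you'd run: check_port $DB_SERVER_PRIVATE_IP 5432
check_port 127.0.0.1 1 && echo "open" || echo "closed"
```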
7. Mark Server Type
```bash
echo "app" > /etc/anuzoo-server-type
```
8. Start Application
```bash
# Make script executable
chmod +x scripts/deploy-multi-server.sh

# Start application services
./scripts/deploy-multi-server.sh app start

# Wait for startup (60 seconds)
sleep 60
```
9. Verify Application Health
```bash
# Check health
./scripts/deploy-multi-server.sh app health

# Should see:
# ✓ Redis is healthy
# ✓ API is healthy

# Test API endpoint
curl http://localhost:8001/health
# Should return: {"status":"healthy"}
```
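Right after `app start`, the API can take a variable amount of time to come up, so a fixed `sleep 60` is sometimes too short. A small retry wrapper avoids guessing (function name and attempt count are illustrative):

```shell
# Poll a URL until it responds successfully or the attempt limit is hit
wait_healthy() {
  url=$1; attempts=$2
  i=0
  while [ "$i" -lt "$attempts" ]; do
    curl -fsS "$url" >/dev/null 2>&1 && return 0
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Usage on the application server:
# wait_healthy http://localhost:8001/health 60 && echo "API is up"
```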
✅ Application Server Running!
---
Part 3: Configure NGINX & SSL (30 minutes)
1. Configure NGINX
```bash
# On application server
cd /root/anuzoo

# Copy NGINX configuration
cp nginx.conf /etc/nginx/sites-available/anuzoo

# Update domain in config
nano /etc/nginx/sites-available/anuzoo
# Find and replace yourdomain.com with YOUR actual domain (2 places)
# Save and exit
```
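For orientation while editing, the repository's nginx.conf likely follows this general shape - every path, port, and name below is an assumption, and your actual file is authoritative:

```nginx
# Hypothetical sketch of the site config - compare against the repo's nginx.conf
server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;

    # Static frontend build
    root /var/www/anuzoo;
    index index.html;

    # Proxy API traffic to the FastAPI service on port 8001 (URI kept intact)
    location /api/ {
        proxy_pass http://127.0.0.1:8001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # SPA fallback for client-side routing
    location / {
        try_files $uri $uri/ /index.html;
    }
}
```

Certbot later rewrites this block to add the `listen 443 ssl` and certificate directives.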
2. Enable Site
```bash
# Create symlink
ln -s /etc/nginx/sites-available/anuzoo /etc/nginx/sites-enabled/

# Remove default site
rm /etc/nginx/sites-enabled/default

# Test configuration
nginx -t
# Should see: syntax is ok, test is successful
```
3. Get SSL Certificate
```bash
# Get Let's Encrypt SSL certificate
certbot --nginx -d yourdomain.com -d www.yourdomain.com

# Follow prompts:
# 1. Enter email for renewal notifications
# 2. Agree to terms (Y)
# 3. Share email? (N)
# 4. Redirect HTTP to HTTPS? (2 - Yes, redirect)

# Certificates will be auto-renewed
```
4. Reload NGINX
```bash
systemctl reload nginx
```
✅ NGINX & SSL Configured!
---
Part 4: Build & Deploy Frontend (20 minutes)
1. Install Node.js
```bash
# On application server
curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
apt-get install -y nodejs

# Verify
node --version   # Should be v20.x.x
npm --version    # Should be 10.x.x
```
2. Build Frontend
```bash
cd /root/anuzoo

# Install dependencies
npm install

# Build production bundle
npm run build
# This creates /root/anuzoo/dist folder
```
3. Deploy Frontend to Web Directory
```bash
# Create web directory
mkdir -p /var/www/anuzoo

# Copy built files
cp -r dist/* /var/www/anuzoo/

# Set permissions
chown -R www-data:www-data /var/www/anuzoo
```
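On later deploys, re-running the `cp` can leave stale files behind and briefly serve a half-copied build. One common pattern is to stage the new build in a sibling directory and swap it into place; the `/tmp/demo` paths below are placeholders so the sketch can run anywhere:

```shell
# Stage the new build, then swap directories.
# Paths are demo placeholders - on the server you'd use
# /root/anuzoo/dist and /var/www/anuzoo instead.
mkdir -p /tmp/demo/dist /tmp/demo/www
echo "v2" > /tmp/demo/dist/index.html

mkdir -p /tmp/demo/www.new
cp -r /tmp/demo/dist/. /tmp/demo/www.new/

rm -rf /tmp/demo/www.old
mv /tmp/demo/www /tmp/demo/www.old    # keep the previous build for instant rollback
mv /tmp/demo/www.new /tmp/demo/www

cat /tmp/demo/www/index.html   # → v2
```

Rolling back is then just swapping `www` and `www.old` again.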
✅ Frontend Deployed!
---
Part 5: Verification & Testing (15 minutes)
1. Test All Services
```bash
# On database server
./scripts/deploy-multi-server.sh db status

# On application server
./scripts/deploy-multi-server.sh app status
```
2. Test Website
Open browser and visit:
- https://yourdomain.com (should load frontend)
- https://yourdomain.com/api/health (should show `{"status":"healthy"}`)
- https://yourdomain.com/docs (should show API documentation)
3. Test Database Connection
```bash
# On application server
curl http://localhost:8001/api/health
# Should return healthy status
```
4. Check Logs
```bash
# On application server
./scripts/deploy-multi-server.sh app logs api
# Should see normal startup logs, no errors
```
---
✅ Deployment Complete!
Your Production Infrastructure:
✅ Database Server ($52.80/mo)
- PostgreSQL running and healthy
- Private network configured
- Backups enabled
✅ Application Server ($105.60/mo)
- API running and healthy
- Redis caching working
- NGINX reverse proxy configured
- SSL/HTTPS enabled
- Frontend deployed
✅ Total: $158.40/month
✅ Capacity: 30,000-80,000 users
---
🎉 Next Steps
1. Test Core Features
- ✅ Register a user account
- ✅ Login
- ✅ Create a post
- ✅ Upload an image
- ✅ Test AI detection
2. Setup Monitoring
```bash
# On application server
chmod +x scripts/install-monitoring.sh
./scripts/install-monitoring.sh your@email.com
```
3. Setup Automated Backups
```bash
# On database server
chmod +x scripts/setup-backups.sh
./scripts/setup-backups.sh
```
4. Run Security Audit
- ✅ Verify HTTPS working
- ✅ Test rate limiting
- ✅ Check firewall rules
- ✅ Review security headers
---
🔧 Useful Commands
Check Status
```bash
# Database server
./scripts/deploy-multi-server.sh db health
./scripts/deploy-multi-server.sh db status

# Application server
./scripts/deploy-multi-server.sh app health
./scripts/deploy-multi-server.sh app status
```
View Logs
```bash
# API logs
./scripts/deploy-multi-server.sh app logs api

# Database logs
./scripts/deploy-multi-server.sh db logs

# NGINX logs
tail -f /var/log/nginx/access.log
tail -f /var/log/nginx/error.log
```
Restart Services
```bash
# Restart application
./scripts/deploy-multi-server.sh app restart

# Restart database
./scripts/deploy-multi-server.sh db restart

# Restart NGINX
systemctl restart nginx
```
Backup Database
```bash
# On database server
./scripts/deploy-multi-server.sh db backup

# Backups stored in: /var/backups/anuzoo/
```
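The backup script's retention behavior isn't documented here, so if `/var/backups/anuzoo/` ever fills the disk, a simple prune of old dumps helps. This sketch uses a throwaway directory and keeps the 7 newest files; the directory, filename pattern, and count are all assumptions:

```shell
# Create a demo backup directory with 10 dated files
BACKUP_DIR=/tmp/demo-backups
mkdir -p "$BACKUP_DIR"
for i in $(seq -w 1 10); do
  touch "$BACKUP_DIR/anuzoo-2025-01-$i.sql.gz"
done

# Keep the 7 newest files, delete the rest
# (on the server, point BACKUP_DIR at /var/backups/anuzoo/)
ls -1t "$BACKUP_DIR" | tail -n +8 | while read -r f; do
  rm -- "$BACKUP_DIR/$f"
done

ls "$BACKUP_DIR" | wc -l   # → 7
```

In production you would run this from cron right after the nightly backup.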
---
🆘 Troubleshooting
API Not Responding
```bash
# Check API container
docker ps

# View API logs
./scripts/deploy-multi-server.sh app logs api

# Restart API
./scripts/deploy-multi-server.sh app restart
```
Database Connection Failed
```bash
# Test private network
ping -c 3 $DB_SERVER_PRIVATE_IP

# Test PostgreSQL port
nc -zv $DB_SERVER_PRIVATE_IP 5432

# Check database running
./scripts/deploy-multi-server.sh db health
```
SSL Certificate Issues
```bash
# Renew certificate
certbot renew

# Test NGINX config
nginx -t

# Reload NGINX
systemctl reload nginx
```
---
📚 Reference Guides
- MULTI_SERVER_SETUP.md - Complete setup reference
- DEPLOYMENT_CHECKLIST.md - Detailed checklist
- MONITORING_GUIDE.md - Monitoring and alerts
- MULTI_APP_HOSTING.md - Add more apps later
---
Congratulations! Your production deployment is live! 🎉
Anuzoo Deployment Options - Cost & Architecture Comparison
Last Updated: 2025-12-14
Purpose: Choose the right deployment architecture for your stage
---
🎯 Quick Recommendation
For MVP Production: Choose Option 2 (2-Server Architecture) at $158.40/month
This gives you:
- ✅ Production-ready for 30K-80K users
- ✅ Independent database scaling
- ✅ Better performance and reliability
- ✅ Easy migration from single-server if you start there
- ✅ Room to grow without architecture changes
---
📊 Deployment Options Comparison
| Option | Architecture | Cost/Month | Users | Complexity | Use Case |
|--------|-------------|------------|-------|------------|----------|
| Option 1 | Single Server | $105.60 | 20K-50K | Low | Testing, small scale |
| Option 2 ⭐ | 2-Server (App + DB) | $158.40 | 30K-80K | Medium | MVP Production |
| Option 3 | 3-Server (Frontend + API + DB) | $343.20 | 80K-150K | Medium | Growing business |
| Option 4 | 3-Server (App + Streaming + DB) | $422.40 | 50K-100K | Medium | Streaming focus |
| Option 5 | 4-Server (Full separation) | $564.40 | 100K-250K | High | High traffic |
| Option 6 | 5-Server HA (Load balanced) | $829.40 | 250K-500K | High | Enterprise scale |
---
Option 1: Smart Single Server
Architecture
┌─────────────────────────────────────┐
│ Single Vultr Server ($96/mo) │
│ 6 vCPU, 16GB RAM, 320GB SSD │
├─────────────────────────────────────┤
│ ┌────────────┐ ┌────────────┐ │
│ │ PostgreSQL │ │ Redis │ │
│ │ (6GB RAM) │ │ (2GB RAM) │ │
│ └────────────┘ └────────────┘ │
│ │
│ ┌────────────────────────────────┐ │
│ │ FastAPI + AI Models (8GB RAM) │ │
│ └────────────────────────────────┘ │
└─────────────────────────────────────┘
Specifications
- Server: High Performance AMD
- CPU: 6 vCPU AMD EPYC
- RAM: 16GB
- Storage: 320GB NVMe SSD
- Bandwidth: 5TB/month
- Cost: $96/mo + $9.60 backups = $105.60/month
Resource Allocation
- PostgreSQL: 2 vCPU, 6GB RAM
- Redis: 1 vCPU, 2GB RAM (1.5GB maxmemory)
- FastAPI + AI: 3 vCPU, 8GB RAM
Capacity
- Users: 20,000-50,000 active users
- API Requests: Up to 50/minute per user
- Database: Up to 50GB data
- Uploads: Up to 200GB files
Pros
- ✅ Lowest cost option ($105.60/month)
- ✅ Simplest deployment (one server to manage)
- ✅ Good for testing and initial launch
- ✅ Fast deployment (60 minutes)
- ✅ Easy to upgrade to Option 2 later
Cons
- ❌ Limited scalability
- ❌ Database and API compete for resources
- ❌ Single point of failure
- ❌ Cannot scale components independently
- ❌ CPU-intensive AI models may slow database
When to Use
- Initial testing and development
- Small user base (< 20K users)
- Budget-constrained launch
- Proof of concept
- Planning to migrate to 2-server soon
Setup Guide
See: VULTR_MIGRATION_PLAN.md → Single Server Setup
---
Option 2: 2-Server Architecture (App + Database) ⭐ RECOMMENDED FOR MVP
Architecture
┌───────────────────────────────┐
│ Server 1: Application │
│ 6 vCPU, 16GB RAM ($96/mo) │
├───────────────────────────────┤
│ ┌─────────┐ ┌────────────┐ │
│ │ Redis │ │ FastAPI │ │
│ │ (2GB) │ │ + AI │ │
│ │ │ │ (12GB) │ │
│ └─────────┘ └────────────┘ │
└───────────────────────────────┘
│
│ Private Network (10 Gbps, Free)
│
┌───────────────────────────────┐
│ Server 2: Database │
│ 4 vCPU, 8GB RAM ($48/mo) │
├───────────────────────────────┤
│ ┌─────────────────────────┐ │
│ │ PostgreSQL │ │
│ │ Dedicated 6GB RAM │ │
│ │ Performance Tuned │ │
│ └─────────────────────────┘ │
└───────────────────────────────┘
Specifications
Server 1 (Application):
- CPU: 6 vCPU AMD EPYC
- RAM: 16GB
- Storage: 320GB NVMe SSD
- Cost: $96/mo + $9.60 backups
Server 2 (Database):
- CPU: 4 vCPU AMD EPYC
- RAM: 8GB
- Storage: 160GB NVMe SSD
- Cost: $48/mo + $4.80 backups
Total: $96 + $9.60 + $48 + $4.80 = $158.40/month
Resource Allocation
Application Server:
- Redis: 1 vCPU, 2GB RAM
- FastAPI + AI: 4 vCPU, 12GB RAM (more RAM for AI models)
Database Server:
- PostgreSQL: 3 vCPU, 6GB RAM (dedicated performance)
Capacity
- Users: 30,000-80,000 active users
- API Requests: Up to 80/minute per user
- Database: Up to 120GB data
- Concurrent Connections: Up to 500
- Uploads: Up to 250GB files
Pros
- ✅ Best value for MVP production
- ✅ Database has dedicated resources (no CPU/RAM sharing)
- ✅ Better query performance (isolated database)
- ✅ Independent scaling (upgrade app or DB separately)
- ✅ Private network = fast, secure, free
- ✅ Less resource contention
- ✅ Easier to troubleshoot issues
- ✅ Ready for growth (can scale to 80K users)
Cons
- ❌ Slightly more complex than single server
- ❌ Two servers to manage
- ❌ Requires private networking setup
- ❌ $52.80/month more than single server
When to Use
- MVP production deployment ⭐
- Expecting 10K+ users in first 6 months
- Need reliable database performance
- Want room to grow without architecture changes
- Budget allows $150-200/month
Cost Savings vs High Performance Single Server
Previous plan was 8 vCPU, 32GB RAM single server at $211.20/month.
This 2-server option:
- Saves: $52.80/month ($633.60/year)
- Better performance: Dedicated database resources
- Comparable capacity: 30K-80K users vs the single server's 50K-100K, with better sustained performance from the dedicated database
Setup Guide
See: MULTI_SERVER_SETUP.md → Complete 2-Server Setup
Migration from Option 1
See: MIGRATION_SINGLE_TO_MULTI.md → Blue-Green Migration Guide
---
Option 3: 3-Server Architecture (Frontend + API + Database)
Architecture
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Server 1: │ │ Server 2: │ │ Server 3: │
│ Frontend │ │ API │ │ Database │
│ 4 vCPU, 8GB │ │ 8 vCPU, 32GB │ │ 6 vCPU, 16GB │
│ $48/mo │ │ $192/mo │ │ $96/mo │
├─────────────────┤ ├─────────────────┤ ├─────────────────┤
│ • NGINX │ │ • FastAPI │ │ • PostgreSQL │
│ • Static files │───>│ • AI Models │───>│ • Optimized │
│ • React build │ │ • Redis │ │ • Replicas │
└─────────────────┘ └─────────────────┘ └─────────────────┘
Cost
$48 + $4.80 + $192 + $19.20 + $96 + $9.60 = $343.20/month
Capacity
- Users: 80,000-150,000 active users
- API Requests: 100/minute per user
- Database: Up to 250GB data
When to Use
- High traffic application
- Need to scale frontend independently
- CDN integration planned
- Multiple API versions
---
Option 4: 3-Server Architecture (App + Streaming + Database)
Architecture
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Server 1: │ │ Server 2: │ │ Server 3: │
│ Application │ │ Streaming │ │ Database │
│ 6 vCPU, 16GB │ │ 8 vCPU, 32GB │ │ 6 vCPU, 16GB │
│ $96/mo │ │ $192/mo │ │ $96/mo │
├─────────────────┤ ├─────────────────┤ ├─────────────────┤
│ • API │ │ • LiveKit │ │ • PostgreSQL │
│ • Redis │───>│ • WebRTC │───>│ • Performance │
│ • AI Models │ │ • Media Server │ │ • Tuned │
└─────────────────┘ └─────────────────┘ └─────────────────┘
Cost
$96 + $9.60 + $192 + $19.20 + $96 + $9.60 = $422.40/month
Capacity
- Users: 50,000-100,000 active users
- Concurrent Streams: 500-1,000
- Stream Quality: Up to 1080p
When to Use
- Live streaming is core feature
- High concurrent stream count
- Professional streaming quality needed
---
Option 5: 4-Server Full Separation
Architecture
Server 1: Frontend (4 vCPU, 8GB) - $48/mo
Server 2: API (8 vCPU, 32GB) - $192/mo
Server 3: Streaming (8 vCPU, 32GB) - $192/mo
Server 4: Database (6 vCPU, 16GB) - $96/mo
Cost
$564.40/month
Capacity
- Users: 100,000-250,000 active users
- Concurrent Streams: 1,000-2,000
---
Option 6: 5-Server High Availability
Architecture
Server 1: Load Balancer (2 vCPU, 4GB) - $24/mo
Server 2: App Instance 1 (8 vCPU, 32GB) - $192/mo
Server 3: App Instance 2 (8 vCPU, 32GB) - $192/mo
Server 4: Database Primary (8 vCPU, 32GB) - $192/mo
Server 5: Database Replica (8 vCPU, 32GB) - $192/mo
Cost
$829.40/month
Capacity
- Users: 250,000-500,000 active users
- 99.9% uptime guaranteed
- Zero-downtime deployments
---
🚀 Recommended Deployment Path
For New Projects (MVP)
```
Start → Option 2 (2-Server) → Option 3/4 (Growth) → Option 5/6 (Scale)
        $158.40/mo            $343-422/mo           $564-829/mo
        0-80K users           80K-150K users        150K-500K users
```
Why start with Option 2?
- Production-ready from day one
- Room to grow to 80K users
- Easy to scale up when needed
- Only $52.80/month more than single server
- Avoid the pain of migrating under load
For Budget-Constrained Launch
```
Start → Option 1 (Single) → Option 2 (2-Server) → Option 3+ (Growth)
        $105.60/mo           $158.40/mo            $343+/mo
        0-20K users          20K-80K users         80K+ users
```
Migration from Option 1 to Option 2: See MIGRATION_SINGLE_TO_MULTI.md
- Downtime: 15-30 minutes
- Difficulty: Medium
- Data loss risk: None
- Time to complete: ~3 hours
---
💰 Cost Analysis
Monthly Costs Over Time
| Users | Recommended Option | Cost/Month | Cost per User |
|-------|-------------------|------------|---------------|
| 0-20K | Option 1 (Single) | $105.60 | $0.0053 |
| 20K-80K | Option 2 (2-Server) | $158.40 | $0.0020 |
| 80K-150K | Option 3 (3-Server) | $343.20 | $0.0023 |
| 150K-250K | Option 5 (4-Server) | $564.40 | $0.0023 |
| 250K+ | Option 6 (5-Server HA) | $829.40 | $0.0017 |
Cost per user decreases as you scale.
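The Cost per User column divides the monthly cost by the top of each user range; for Option 2, for example:

```shell
# $158.40/month spread across 80,000 users (Option 2's upper bound)
awk 'BEGIN { printf "%.4f\n", 158.40 / 80000 }'   # → 0.0020
```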
Annual Cost Comparison
| Option | Monthly | Annual | Savings vs High Perf Single |
|--------|---------|--------|---------------------------|
| Single Server | $105.60 | $1,267.20 | $1,267.20/year |
| 2-Server ⭐ | $158.40 | $1,900.80 | $633.60/year |
| 3-Server | $343.20 | $4,118.40 | -$1,584.00/year |
| High Perf Single (8 vCPU, 32GB) | $211.20 | $2,534.40 | Baseline |
---
🔧 Technical Comparison
Database Performance
| Option | DB CPU | DB RAM | Max Connections | Query Speed |
|--------|--------|--------|----------------|-------------|
| Option 1 | 2 vCPU (shared) | 6GB (shared) | 100 | Good |
| Option 2 ⭐ | 3 vCPU (dedicated) | 6GB (dedicated) | 500 | Excellent |
| Option 3+ | 4+ vCPU (dedicated) | 12GB+ (dedicated) | 1000+ | Excellent |
API Performance
| Option | API CPU | API RAM | Max Workers | Requests/sec |
|--------|---------|---------|-------------|--------------|
| Option 1 | 3 vCPU | 8GB | 4 | 200 |
| Option 2 ⭐ | 4 vCPU | 12GB | 6 | 400 |
| Option 3+ | 6+ vCPU | 20GB+ | 10+ | 800+ |
---
📋 Decision Checklist
Choose Option 1 (Single Server) if:
- [ ] Budget < $120/month
- [ ] Expected users < 20K in first year
- [ ] Testing/proof of concept
- [ ] Planning to migrate to 2-server within 6 months
Choose Option 2 (2-Server) if: ⭐ RECOMMENDED
- [ ] Budget $150-200/month
- [ ] Expected users 10K-80K in first year
- [ ] Want production-ready from day one
- [ ] Need reliable database performance
- [ ] Want room to grow without architecture changes
- [ ] This is your MVP production deployment
Choose Option 3+ (3-5 Servers) if:
- [ ] Budget > $300/month
- [ ] Expected users > 80K in first year
- [ ] High traffic/concurrent users
- [ ] Need high availability
- [ ] Streaming is core feature
---
🎯 Next Steps
If you chose Option 1 (Single Server):
1. Review VULTR_MIGRATION_PLAN.md → Single Server Setup
2. Follow DEPLOYMENT_CHECKLIST.md
3. Plan migration to Option 2 when hitting 15K-20K users

If you chose Option 2 (2-Server): ⭐ RECOMMENDED
1. Review MULTI_SERVER_SETUP.md → Complete setup guide
2. Follow DEPLOYMENT_CHECKLIST.md for app server
3. Follow database server setup in MULTI_SERVER_SETUP.md
4. Configure private networking

If you chose Option 3+:
1. Contact for custom architecture planning
2. Review enterprise deployment guides
3. Plan load balancing and HA setup

---
📞 Support
For questions about which option to choose:
- Review capacity estimates carefully
- Consider growth projections (6-12 months)
- Start smaller and migrate up if uncertain
- When in doubt, choose Option 2 - it's the sweet spot for MVP
---
Last Updated: 2025-12-14

See Also:
- MULTI_SERVER_SETUP.md - 2-server setup guide
- MIGRATION_SINGLE_TO_MULTI.md - Migration guide
- VULTR_MIGRATION_PLAN.md - Single server deployment
- DEPLOYMENT_CHECKLIST.md - Complete deployment checklist
Anuzoo 2-Server Setup Guide (Option 2)
Architecture: Application Server + Database Server
Monthly Cost: $158.40
User Capacity: 30,000 - 80,000 monthly active users
---
📋 Architecture Overview
┌─────────────────────────────────────────────────────┐
│ Server 1: Application Server │
│ 6 vCPU, 16GB RAM, 320GB SSD │
│ IP: PUBLIC_IP_1 │
│ Private IP: 10.x.x.1 (example) │
│ ┌───────────────────────────────────────────────┐ │
│ │ - FastAPI API (4 workers) │ │
│ │ - NGINX (reverse proxy + frontend) │ │
│ │ - Redis (cache + pub/sub) │ │
│ │ - AI Models (PyTorch + ViT) │ │
│ └───────────────────────────────────────────────┘ │
│ Public: 0.0.0.0:80, 0.0.0.0:443 │
│ Internal: 127.0.0.1:8001 (API), 127.0.0.1:6379 │
└─────────────────────────────────────────────────────┘
│
│ Private Network (10 Gbps, Free)
│ PostgreSQL Connection
▼
┌─────────────────────────────────────────────────────┐
│ Server 2: Database Server │
│ 4 vCPU, 8GB RAM, 160GB SSD │
│ IP: PUBLIC_IP_2 (for SSH only) │
│ Private IP: 10.x.x.2 (example) │
│ ┌───────────────────────────────────────────────┐ │
│ │ - PostgreSQL 15 │ │
│ │ - Optimized for 8GB RAM │ │
│ │ - Extensions: uuid-ossp, pgcrypto, pg_trgm │ │
│ └───────────────────────────────────────────────┘ │
│ Private: 10.x.x.2:5432 (PostgreSQL) │
│ Public: SSH only (port 22) │
└─────────────────────────────────────────────────────┘
---
🎯 Pre-Deployment Checklist
Vultr Account Setup
- [ ] Vultr account created and payment added
- [ ] SSH key generated and added to Vultr
- [ ] Domain DNS configured (A records)
- [ ] API keys collected (Stripe, Pinecone, OpenAI, SendGrid)
Server Provisioning
- [ ] Server 1 (App) provisioned: 6 vCPU, 16GB RAM, $96/mo
- [ ] Server 2 (DB) provisioned: 4 vCPU, 8GB RAM, $48/mo
- [ ] Both servers in same datacenter (required for private network)
- [ ] Both servers have auto backups enabled
- [ ] Both servers have private networking enabled
Network Configuration
- [ ] Private network IPs noted:
- App Server Private IP: ___________________
- DB Server Private IP: ___________________
- [ ] Firewall rules configured (see below)
---
🚀 Step-by-Step Deployment
Step 1: Provision Both Servers on Vultr
Server 1: Application Server
In Vultr Dashboard:
1. Click Deploy + → Deploy New Server
2. Select Cloud Compute
3. Choose Ashburn, VA (or your preferred location)
4. Select Ubuntu 22.04 LTS
5. Choose High Performance AMD
6. Select 6 vCPU, 16GB RAM, 320GB SSD ($96/mo)
7. Enable Private Networking ✓
8. Enable Auto Backups ✓ (+$9.60/mo)
9. Enable DDOS Protection ✓ (Free)
10. Hostname: anuzoo-app-01
11. Label: Anuzoo Application
12. Click Deploy Now
Server 2: Database Server
In Vultr Dashboard:
1. Click Deploy + → Deploy New Server
2. Select Cloud Compute
3. Choose SAME LOCATION as Server 1 (critical!)
4. Select Ubuntu 22.04 LTS
5. Choose High Performance AMD
6. Select 4 vCPU, 8GB RAM, 160GB SSD ($48/mo)
7. Enable Private Networking ✓ (must match Server 1's network)
8. Enable Auto Backups ✓ (+$4.80/mo)
9. Enable DDOS Protection ✓ (Free)
10. Hostname: anuzoo-db-01
11. Label: Anuzoo Database
12. Click Deploy Now
Wait 2-3 minutes for both servers to provision.
---
Step 2: Note Server IPs
Once both servers are running, note their IPs:
Server 1 (Application):
- Public IPv4: ___________________ (for HTTPS traffic)
- Private IPv4: ___________________ (for DB connection)

Server 2 (Database):
- Public IPv4: ___________________ (SSH only)
- Private IPv4: ___________________ (PostgreSQL listening)
Verify Private Network:

On Server 1, you should see something like:
ip addr show | grep "10\."
Output: inet 10.1.96.2/20 ...

On Server 2, you should see:
ip addr show | grep "10\."
Output: inet 10.1.96.3/20 ...
---
Step 3: Configure Firewall Rules
Server 1 (Application) - Vultr Firewall
Create firewall group: Anuzoo-App-Server
| Direction | Protocol | Port | Source | Description |
|-----------|----------|------|--------|-------------|
| Inbound | TCP | 22 | 0.0.0.0/0 | SSH |
| Inbound | TCP | 80 | 0.0.0.0/0 | HTTP |
| Inbound | TCP | 443 | 0.0.0.0/0 | HTTPS |
| Inbound | ICMP | - | 0.0.0.0/0 | Ping |
| Outbound | All | All | 0.0.0.0/0 | All outbound |
Attach to Server 1.
Server 2 (Database) - Vultr Firewall
Create firewall group: Anuzoo-DB-Server
| Direction | Protocol | Port | Source | Description |
|-----------|----------|------|--------|-------------|
| Inbound | TCP | 22 | 0.0.0.0/0 | SSH (admin access) |
| Inbound | TCP | 5432 | 10.0.0.0/8 | PostgreSQL (private network only) |
| Inbound | ICMP | - | 10.0.0.0/8 | Ping (private network) |
| Outbound | All | All | 0.0.0.0/0 | All outbound |
Attach to Server 2.
Security Note: Database is ONLY accessible via private network!
---
Step 4: Initial Server Setup
On Both Servers
SSH into each server and run:
Update system
apt-get update && apt-get upgrade -y

Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

Install Docker Compose
apt-get install -y docker-compose-plugin

Install useful tools
apt-get install -y git vim htop curl wget net-tools
---
Step 5: Clone Repository on Both Servers
On Server 1 (Application)
cd /root
git clone https://github.com/yourusername/anuzoo.git
cd anuzoo

On Server 2 (Database)
cd /root
git clone https://github.com/yourusername/anuzoo.git
cd anuzoo
---
Step 6: Configure Environment Variables
On Server 1 (Application)
Create .env.production:
cd /root/anuzoo
cp .env.production.example .env.production
nano .env.production
Key configuration for Server 1:
Database - CONNECTS TO SERVER 2 VIA PRIVATE NETWORK
DATABASE_URL=postgresql+asyncpg://anuzoo_user:YOUR_PASSWORD@10.x.x.x:5432/anuzoo_prod
DB_SERVER_PRIVATE_IP=10.x.x.x # Server 2's private IP
POSTGRES_USER=anuzoo_user
POSTGRES_PASSWORD=YOUR_SECURE_PASSWORD
POSTGRES_DB=anuzoo_prod

Redis - LOCAL ON THIS SERVER
REDIS_URL=redis://redis:6379/0

API Configuration
API_HOST=0.0.0.0
API_PORT=8001
API_WORKERS=4
ENVIRONMENT=production

Security Keys (generate with: openssl rand -hex 32)
SECRET_KEY=YOUR_SECRET_KEY_HERE
JWT_SECRET_KEY=YOUR_JWT_SECRET_HERE
SESSION_SECRET_KEY=YOUR_SESSION_SECRET_HERE

External Services
STRIPE_SECRET_KEY=sk_live_YOUR_KEY
STRIPE_PUBLISHABLE_KEY=pk_live_YOUR_KEY
STRIPE_WEBHOOK_SECRET=whsec_YOUR_SECRET
PINECONE_API_KEY=YOUR_KEY
PINECONE_ENVIRONMENT=YOUR_ENV
OPENAI_API_KEY=sk-YOUR_KEY
SMTP_PASSWORD=SG.YOUR_SENDGRID_KEY
SMTP_FROM_EMAIL=noreply@yourdomain.com

Frontend URL
FRONTEND_URL=https://yourdomain.com
ALLOWED_ORIGINS=https://yourdomain.com,https://www.yourdomain.com
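The three security keys can be generated with `openssl rand -hex 32` as noted above; an equivalent Python snippet, if openssl isn't handy:

```python
import secrets

# Each key is 32 random bytes, hex-encoded to 64 characters --
# the same output shape as `openssl rand -hex 32`.
for name in ("SECRET_KEY", "JWT_SECRET_KEY", "SESSION_SECRET_KEY"):
    print(f"{name}={secrets.token_hex(32)}")
```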
On Server 2 (Database)
Create .env.production:
cd /root/anuzoo
cp .env.production.example .env.production
nano .env.production
Key configuration for Server 2:
Database Configuration - SAME CREDENTIALS AS SERVER 1
POSTGRES_USER=anuzoo_user
POSTGRES_PASSWORD=YOUR_SECURE_PASSWORD # Must match Server 1
POSTGRES_DB=anuzoo_prod

PostgreSQL Performance Tuning
POSTGRES_SHARED_BUFFERS=2GB
POSTGRES_EFFECTIVE_CACHE_SIZE=6GB
POSTGRES_MAINTENANCE_WORK_MEM=512MB
POSTGRES_WORK_MEM=64MB
POSTGRES_MAX_CONNECTIONS=200
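These values follow common PostgreSQL sizing rules of thumb for a dedicated 8GB host — roughly 25% of RAM for shared_buffers and 75% for effective_cache_size. A sketch of the arithmetic:

```python
ram_gb = 8  # Server 2 RAM

# Common rules of thumb for a dedicated PostgreSQL host
shared_buffers = ram_gb * 0.25        # -> 2 GB
effective_cache_size = ram_gb * 0.75  # -> 6 GB

print(f"POSTGRES_SHARED_BUFFERS={shared_buffers:.0f}GB")
print(f"POSTGRES_EFFECTIVE_CACHE_SIZE={effective_cache_size:.0f}GB")
```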
---
Step 7: Mark Server Types
This helps the deployment script know which server it's on:
On Server 1 (Application)
echo "app" > /etc/anuzoo-server-type

On Server 2 (Database)
echo "db" > /etc/anuzoo-server-type
---
Step 8: Deploy Services
On Server 2 (Database) - Deploy FIRST
cd /root/anuzoo

Make deployment script executable
chmod +x scripts/deploy-multi-server.sh

Create data directories
mkdir -p /var/lib/anuzoo/postgres

Start database
./scripts/deploy-multi-server.sh db start

Wait 30 seconds for PostgreSQL to initialize
sleep 30

Check status
./scripts/deploy-multi-server.sh db health
Expected Output:
✓ PostgreSQL is healthy
ℹ Active database connections: 1
On Server 1 (Application) - Deploy SECOND
cd /root/anuzoo

Make deployment script executable
chmod +x scripts/deploy-multi-server.sh

Test database connection first
./scripts/deploy-multi-server.sh app test-network

If network test passes, start services
./scripts/deploy-multi-server.sh app start

Wait for services to start
sleep 60

Check status
./scripts/deploy-multi-server.sh app health
Expected Output:
✓ Redis is healthy
✓ API is healthy
---
Step 9: Run Database Migrations
On Server 1 (Application):
Run Alembic migrations
docker compose -f docker-compose.app-server.yml exec api alembic upgrade head

Verify tables were created
docker compose -f docker-compose.app-server.yml exec -e PGPASSWORD=$POSTGRES_PASSWORD api \
psql -h $DB_SERVER_PRIVATE_IP -U $POSTGRES_USER -d $POSTGRES_DB -c "\dt"
You should see all your database tables listed.
---
Step 10: Install and Configure NGINX
On Server 1 (Application):
Update nginx.conf with your domain
cd /root/anuzoo
nano nginx.conf
Replace all instances of "yourdomain.com" with your actual domain
Run NGINX installation script
chmod +x scripts/install-nginx.sh
./scripts/install-nginx.sh yourdomain.com your@email.com

Wait for SSL certificate installation
Follow the prompts
---
Step 11: Deploy Frontend
On your local machine:
Build frontend
cd /path/to/anuzoo
npm install
npm run build

Upload to Server 1
scp -r dist root@YOUR_SERVER_1_IP:/var/www/anuzoo/
OR build on Server 1:
On Server 1
cd /root/anuzoo

Install Node.js
curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
apt-get install -y nodejs

Build frontend
npm install
npm run build

Copy to web directory
cp -r dist /var/www/anuzoo/
chown -R www-data:www-data /var/www/anuzoo
---
Step 12: Install Monitoring
On Server 1 (Application):
chmod +x scripts/install-monitoring.sh
./scripts/install-monitoring.sh your@email.com
On Server 2 (Database):
chmod +x scripts/install-monitoring.sh
./scripts/install-monitoring.sh your@email.com
---
✅ Verification Checklist
Test Database Connectivity
From Server 1:
Test PostgreSQL connection
docker compose -f docker-compose.app-server.yml exec api \
python -c "
import os
from sqlalchemy import create_engine, text
# DATABASE_URL uses the async asyncpg driver; swap in a sync driver for this
# one-off check (assumes psycopg2 is available in the image)
url = os.getenv('DATABASE_URL').replace('+asyncpg', '+psycopg2')
engine = create_engine(url)
with engine.connect() as conn:
    result = conn.execute(text('SELECT version()'))
    print(result.fetchone()[0])
"
Should print PostgreSQL version.
Test API Endpoints
Health check
curl http://localhost:8001/health

Should return: {"status":"healthy"}
Test Web Access
1. Visit: https://yourdomain.com
2. Should load React frontend
3. Try logging in
4. Check browser console for errors
Test Private Network Speed
From Server 1:
Install iperf3
apt-get install -y iperf3

On Server 2 (run in separate terminal):
iperf3 -s

On Server 1:
iperf3 -c $DB_SERVER_PRIVATE_IP

Should show ~10 Gbps throughput
---
📊 Resource Monitoring
Check Resource Usage
On Server 1 (Application):
./scripts/deploy-multi-server.sh app status
On Server 2 (Database):
./scripts/deploy-multi-server.sh db status
Monitor Database Performance
On Server 2:
Active connections
docker compose -f docker-compose.db-server.yml exec db \
psql -U anuzoo_user -d anuzoo_prod -c \
"SELECT count(*) FROM pg_stat_activity;"

Slow queries (requires the pg_stat_statements extension; column names below are for PostgreSQL 13+)
docker compose -f docker-compose.db-server.yml exec db \
psql -U anuzoo_user -d anuzoo_prod -c \
"SELECT query, calls, total_exec_time, mean_exec_time FROM pg_stat_statements ORDER BY mean_exec_time DESC LIMIT 10;"

Database size
docker compose -f docker-compose.db-server.yml exec db \
psql -U anuzoo_user -d anuzoo_prod -c \
"SELECT pg_size_pretty(pg_database_size('anuzoo_prod'));"
---
🔄 Common Operations
Restart Application (Zero DB Downtime)
On Server 1
./scripts/deploy-multi-server.sh app restart
Database stays running, users may see brief API errors during restart.
Restart Database (Brief Downtime)
On Server 2
./scripts/deploy-multi-server.sh db restart
Application will automatically reconnect when database comes back up.
View Logs
On Server 1 - API logs
./scripts/deploy-multi-server.sh app logs api

On Server 1 - Redis logs
./scripts/deploy-multi-server.sh app logs redis

On Server 2 - Database logs
./scripts/deploy-multi-server.sh db logs
Backup Database
On Server 2
./scripts/deploy-multi-server.sh db backup

Download backup to local machine
scp root@SERVER_2_IP:/var/backups/anuzoo/db_*.sql.gz ./
---
🔧 Troubleshooting
Issue: Cannot Connect to Database from App Server
Check private network connectivity:
On Server 1
ping -c 3 $DB_SERVER_PRIVATE_IP

Test PostgreSQL port
nc -zv $DB_SERVER_PRIVATE_IP 5432
If ping fails:
- Verify both servers in same datacenter
- Check private networking enabled on both
- Verify firewall allows traffic on port 5432
Issue: "Connection Refused" on Port 5432
On Server 2, check PostgreSQL is listening:
docker compose -f docker-compose.db-server.yml exec db \
psql -U anuzoo_user -d anuzoo_prod -c \
"SHOW listen_addresses;"

Should show: 0.0.0.0 or *
Check PostgreSQL logs:
./scripts/deploy-multi-server.sh db logs
Issue: Slow Database Queries
Check connection pool:
On Server 2
docker compose -f docker-compose.db-server.yml exec db \
psql -U anuzoo_user -d anuzoo_prod -c \
"SELECT count(*), state FROM pg_stat_activity GROUP BY state;"
Optimize if needed:
- Increase max_connections in .env.production
- Add indexes to slow queries
- Consider read replicas for scale
---
💰 Cost Summary
| Item | Quantity | Unit Cost | Total |
|------|----------|-----------|-------|
| Application Server (6 vCPU, 16GB RAM, 320GB SSD) | 1 | $96.00 | $96.00 |
| Database Server (4 vCPU, 8GB RAM, 160GB SSD) | 1 | $48.00 | $48.00 |
| Auto Backups (10% of server costs) | 2 | - | $14.40 |
| Monthly Total | | | $158.40 |
| Annual Total | | | $1,900.80 |
Bandwidth: 5TB included (App) + 4TB (DB) = 9TB total Private network traffic: Unlimited and free (10 Gbps)
---
🎯 Performance Expectations
- User Capacity: 30,000 - 80,000 monthly active users
- Concurrent Users: 300 - 800 simultaneous
- API Response Time: 50-150ms (database queries much faster via private network)
- Database Queries: ~1,000-2,000 queries/second
- Private Network Latency: <1ms
---
🚀 Upgrade Path
When you hit 60K-80K users consistently:
Option A: Upgrade Application Server
- Resize to 8 vCPU, 32GB RAM ($192/mo)
- Total: $254.40/month
- Capacity: 80K-150K users
Option B: Add Application Server (Load Balanced)
- Add second app server ($96/mo)
- Add load balancer ($10/mo)
- Total: $264.40/month
- Capacity: 100K-200K users
- High availability (99.9% uptime)
Option C: Upgrade Database Server
- Resize DB to 8 vCPU, 16GB RAM ($96/mo)
- Total: $206.40/month
- Better for database-heavy workloads
---
📝 Quick Reference Commands
Start services
./scripts/deploy-multi-server.sh app start   # On Server 1
./scripts/deploy-multi-server.sh db start    # On Server 2

Check status
./scripts/deploy-multi-server.sh app status
./scripts/deploy-multi-server.sh db status

View logs
./scripts/deploy-multi-server.sh app logs api
./scripts/deploy-multi-server.sh db logs

Health checks
./scripts/deploy-multi-server.sh app health
./scripts/deploy-multi-server.sh db health

Backup database
./scripts/deploy-multi-server.sh db backup

Test network
./scripts/deploy-multi-server.sh app test-network

Quick status dashboard
anuzoo-status   # Run on either server
---
✅ Success Criteria
Deployment is successful when:
- ✅ Both servers accessible via SSH
- ✅ Private network connectivity working (ping and port 5432)
- ✅ PostgreSQL healthy on Server 2
- ✅ API healthy on Server 1
- ✅ Redis healthy on Server 1
- ✅ Frontend loads at https://yourdomain.com
- ✅ Can login and use application
- ✅ Database migrations applied
- ✅ Monitoring scripts running
- ✅ Backups configured
Congratulations! Your 2-server architecture is live! 🎉
Multi-App Hosting Guide - Shared 2-Server Infrastructure
Use Case: Host multiple applications/websites on the same 2-server infrastructure
Goal: Maximize cost-efficiency while maintaining performance
---
🎯 Can You Host Multiple Apps? YES!
Your 2-server infrastructure can host:
- Unlimited PostgreSQL databases (practical limit based on resources)
- Multiple APIs/applications (different containers or ports)
- Multiple websites (different domains routed via NGINX)
Capacity Estimates
| Server Resources | Small Apps | Medium Apps | Large Apps |
|------------------|------------|-------------|------------|
| DB: 4 vCPU, 8GB RAM | 5-10 apps | 2-3 apps | 1 app |
| App: 6 vCPU, 16GB RAM | 5-10 apps | 2-3 apps | 1 app |
App Size Definitions:
- Small: < 5K users, < 10GB database, < 50 req/min
- Medium: 10K-50K users, 10-50GB database, 50-200 req/min (like Anuzoo)
- Large: 50K+ users, 50GB+ database, 200+ req/min
---
📊 Resource Allocation Examples
Example 1: 3 Medium Apps (Recommended Max)
Database Server (4 vCPU, 8GB RAM):
PostgreSQL (6GB usable for databases):
├─ anuzoo_prod (2.5GB RAM, 40GB disk)
├─ marketplace_prod (2GB RAM, 30GB disk)
└─ blog_prod (1.5GB RAM, 20GB disk)

Total: 6GB RAM allocated, 90GB disk used
Headroom: 2GB RAM, 70GB disk available
Application Server (6 vCPU, 16GB RAM):
Services:
├─ Redis (shared, 2GB RAM)
├─ Anuzoo API (5GB RAM, 2 vCPU)
├─ Marketplace API (4GB RAM, 2 vCPU)
└─ Blog API (3GB RAM, 1 vCPU)

Total: 14GB RAM allocated, 5 vCPU
Headroom: 2GB RAM, 1 vCPU
Cost: $158.40/month for all 3 apps = $52.80 per app
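A quick sanity check that this allocation actually fits the 16GB / 6 vCPU application server (figures taken from the diagram above):

```python
# (RAM GB, vCPU) per service on the 16GB / 6 vCPU app server,
# from the "3 Medium Apps" example above.
services = {
    "redis":           (2, 0),
    "anuzoo_api":      (5, 2),
    "marketplace_api": (4, 2),
    "blog_api":        (3, 1),
}

ram = sum(r for r, _ in services.values())
cpu = sum(c for _, c in services.values())
assert ram <= 16 and cpu <= 6  # must fit the host
print(f"Allocated: {ram}GB RAM, {cpu} vCPU; headroom: {16 - ram}GB RAM, {6 - cpu} vCPU")
```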
---
Example 2: 2 Medium + 3 Small Apps
Database Server:
PostgreSQL (6GB usable):
├─ anuzoo_prod (2.5GB RAM, 40GB disk) - Medium
├─ saas_app_prod (2GB RAM, 30GB disk) - Medium
├─ blog1_prod (500MB RAM, 10GB disk) - Small
├─ blog2_prod (500MB RAM, 10GB disk) - Small
└─ portfolio_prod (500MB RAM, 5GB disk) - Small

Total: 6GB RAM, 95GB disk
Headroom: 2GB RAM, 65GB disk
Application Server:
Services:
├─ Redis (shared, 2GB RAM)
├─ Anuzoo API (5GB RAM, 2 vCPU)
├─ SaaS API (4GB RAM, 2 vCPU)
├─ Blog 1 API (1.5GB RAM, 0.5 vCPU)
├─ Blog 2 API (1.5GB RAM, 0.5 vCPU)
└─ Portfolio API (1GB RAM, 0.5 vCPU)

Total: 15GB RAM, 5.5 vCPU
Headroom: 1GB RAM, 0.5 vCPU
Cost: $158.40/month for 5 apps = $31.68 per app
---
Example 3: 1 Large App Only (Current Setup)
Database Server:
PostgreSQL:
└─ anuzoo_prod (6GB RAM, 120GB disk)

Total: 6GB RAM, 120GB disk
Headroom: 2GB RAM, 40GB disk
Application Server:
Services:
├─ Redis (2GB RAM)
└─ Anuzoo API (12GB RAM, 4 vCPU)

Total: 14GB RAM, 4 vCPU
Headroom: 2GB RAM, 2 vCPU
Cost: $158.40/month = $158.40 per app
---
🔧 Implementation: Multi-Database Setup
Step 1: Create Additional Databases
On Database Server:
SSH into database server
ssh root@DB_SERVER_IP

Access PostgreSQL
docker compose -f docker-compose.db-server.yml exec db psql -U anuzoo_user -d postgres

Create new databases
CREATE DATABASE app2_prod;
CREATE DATABASE app3_prod;

Create separate users (recommended for isolation)
CREATE USER app2_user WITH ENCRYPTED PASSWORD 'STRONG_PASSWORD_2';
CREATE USER app3_user WITH ENCRYPTED PASSWORD 'STRONG_PASSWORD_3';

Grant permissions
GRANT ALL PRIVILEGES ON DATABASE app2_prod TO app2_user;
GRANT ALL PRIVILEGES ON DATABASE app3_prod TO app3_user;

On PostgreSQL 15+, new users can no longer create objects in the public schema by default, so also grant schema access while connected to each new database:
\c app2_prod
GRANT ALL ON SCHEMA public TO app2_user;
\c app3_prod
GRANT ALL ON SCHEMA public TO app3_user;

Exit
\q
Verify databases:
docker compose -f docker-compose.db-server.yml exec db psql -U anuzoo_user -d postgres -c "\l"
Expected output:
List of databases
Name | Owner | Encoding | Collate | Ctype
--------------+--------------+----------+-------------+-------------
anuzoo_prod | anuzoo_user | UTF8 | en_US.UTF-8 | en_US.UTF-8
app2_prod | app2_user | UTF8 | en_US.UTF-8 | en_US.UTF-8
app3_prod | app3_user | UTF8 | en_US.UTF-8 | en_US.UTF-8
---
Step 2: Configure Additional API Containers
On Application Server:
Create docker-compose.multi-app.yml:
version: '3.8'

services:
# Shared Redis
redis:
image: redis:7-alpine
container_name: shared_redis
restart: always
ports:
- "127.0.0.1:6379:6379"
volumes:
- redis_data:/data
command: >
redis-server
--appendonly yes
--maxmemory 1536mb
--maxmemory-policy allkeys-lru
networks:
- multi_app_network
# App 1: Anuzoo
anuzoo_api:
build:
context: ./anuzoo/apps/api
dockerfile: Dockerfile.production
container_name: anuzoo_api
restart: always
env_file:
- ./anuzoo/.env.production
environment:
- DATABASE_URL=postgresql+asyncpg://anuzoo_user:${ANUZOO_DB_PASSWORD}@${DB_SERVER_PRIVATE_IP}:5432/anuzoo_prod
- REDIS_URL=redis://redis:6379/0
ports:
- "127.0.0.1:8001:8001"
depends_on:
- redis
networks:
- multi_app_network
deploy:
resources:
limits:
cpus: '2'
memory: 5G

# App 2: Marketplace
marketplace_api:
build:
context: ./marketplace/api
dockerfile: Dockerfile
container_name: marketplace_api
restart: always
env_file:
- ./marketplace/.env.production
environment:
- DATABASE_URL=postgresql+asyncpg://app2_user:${APP2_DB_PASSWORD}@${DB_SERVER_PRIVATE_IP}:5432/app2_prod
- REDIS_URL=redis://redis:6379/1
ports:
- "127.0.0.1:8002:8002"
depends_on:
- redis
networks:
- multi_app_network
deploy:
resources:
limits:
cpus: '2'
memory: 4G

# App 3: Blog
blog_api:
build:
context: ./blog/api
dockerfile: Dockerfile
container_name: blog_api
restart: always
env_file:
- ./blog/.env.production
environment:
- DATABASE_URL=postgresql+asyncpg://app3_user:${APP3_DB_PASSWORD}@${DB_SERVER_PRIVATE_IP}:5432/app3_prod
- REDIS_URL=redis://redis:6379/2
ports:
- "127.0.0.1:8003:8003"
depends_on:
- redis
networks:
- multi_app_network
deploy:
resources:
limits:
cpus: '1'
memory: 3G

volumes:
redis_data:
driver: local
networks:
multi_app_network:
driver: bridge
Key Points:
- Each API runs on different port (8001, 8002, 8003)
- Redis uses different database numbers (0, 1, 2) for isolation
- Resource limits ensure fair sharing
- All connect to same database server via private IP
---
Step 3: Configure NGINX for Multiple Domains
Create /etc/nginx/sites-available/multi-app:
Upstream backends
upstream anuzoo_backend {
server 127.0.0.1:8001;
keepalive 32;
}

upstream marketplace_backend {
server 127.0.0.1:8002;
keepalive 32;
}
upstream blog_backend {
server 127.0.0.1:8003;
keepalive 32;
}
Rate limiting zones
limit_req_zone $binary_remote_addr zone=anuzoo_limit:10m rate=50r/m;
limit_req_zone $binary_remote_addr zone=marketplace_limit:10m rate=50r/m;
limit_req_zone $binary_remote_addr zone=blog_limit:10m rate=100r/m;

App 1: Anuzoo
server {
listen 443 ssl http2;
server_name anuzoo.com www.anuzoo.com;

ssl_certificate /etc/letsencrypt/live/anuzoo.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/anuzoo.com/privkey.pem;
root /var/www/anuzoo/dist;
index index.html;
location /api/ {
limit_req zone=anuzoo_limit burst=20 nodelay;
proxy_pass http://anuzoo_backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
location / {
try_files $uri $uri/ /index.html;
}
}
App 2: Marketplace
server {
listen 443 ssl http2;
server_name marketplace.com www.marketplace.com;

ssl_certificate /etc/letsencrypt/live/marketplace.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/marketplace.com/privkey.pem;
root /var/www/marketplace/dist;
index index.html;
location /api/ {
limit_req zone=marketplace_limit burst=20 nodelay;
proxy_pass http://marketplace_backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
location / {
try_files $uri $uri/ /index.html;
}
}
App 3: Blog
server {
listen 443 ssl http2;
server_name blog.com www.blog.com;

ssl_certificate /etc/letsencrypt/live/blog.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/blog.com/privkey.pem;
root /var/www/blog/dist;
index index.html;
location /api/ {
limit_req zone=blog_limit burst=50 nodelay;
proxy_pass http://blog_backend;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
location / {
try_files $uri $uri/ /index.html;
}
}
Enable configuration:
ln -s /etc/nginx/sites-available/multi-app /etc/nginx/sites-enabled/
nginx -t
systemctl reload nginx
Get SSL certificates:
certbot --nginx -d anuzoo.com -d www.anuzoo.com
certbot --nginx -d marketplace.com -d www.marketplace.com
certbot --nginx -d blog.com -d www.blog.com
---
📊 Resource Monitoring for Multi-App
Monitor All Databases
On database server
docker compose -f docker-compose.db-server.yml exec db psql -U anuzoo_user -d postgres

Check database sizes
SELECT
datname AS database,
pg_size_pretty(pg_database_size(datname)) AS size,
(SELECT count(*) FROM pg_stat_activity WHERE datname = d.datname) AS connections
FROM pg_database d
WHERE datname NOT IN ('template0', 'template1', 'postgres')
ORDER BY pg_database_size(datname) DESC;
Expected output:
database | size | connections
---------------+--------+-------------
anuzoo_prod | 25 GB | 45
app2_prod | 15 GB | 20
app3_prod | 5 GB | 10
Monitor Memory Usage per Database
Check PostgreSQL memory usage by database
docker compose -f docker-compose.db-server.yml exec db psql -U anuzoo_user -d postgres -c "
SELECT
datname,
count(*) as connections,
sum(CASE WHEN state = 'active' THEN 1 ELSE 0 END) as active,
sum(CASE WHEN state = 'idle' THEN 1 ELSE 0 END) as idle
FROM pg_stat_activity
WHERE datname IS NOT NULL
GROUP BY datname
ORDER BY connections DESC;
"
Monitor API Container Resources
On application server
docker stats --no-stream --format "table {{.Container}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.MemPerc}}"
Expected output:
CONTAINER CPU % MEM USAGE / LIMIT MEM %
anuzoo_api 25.5% 4.2GB / 5GB 84%
marketplace_api 15.2% 2.8GB / 4GB 70%
blog_api 8.1% 1.5GB / 3GB 50%
shared_redis 2.3% 1.1GB / 2GB 55%
---
⚠️ Resource Allocation Best Practices
1. Database Connection Limits
Configure max connections per database:
-- On database server
ALTER DATABASE anuzoo_prod CONNECTION LIMIT 100;
ALTER DATABASE app2_prod CONNECTION LIMIT 50;
ALTER DATABASE app3_prod CONNECTION LIMIT 25;
In your app code (SQLAlchemy example):
anuzoo - uses 100 max connections
engine = create_async_engine(
DATABASE_URL,
pool_size=20, # Base connections
max_overflow=10, # Additional connections
pool_timeout=30,
pool_recycle=3600
)

app2 - uses 50 max connections
engine = create_async_engine(
DATABASE_URL,
pool_size=10,
max_overflow=5,
pool_timeout=30,
pool_recycle=3600
)
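Each engine can open up to pool_size + max_overflow connections per process, so multiply by your worker/process count when sizing against a database's CONNECTION LIMIT. A sketch of that check (the worker counts here are illustrative assumptions, not values from this guide):

```python
# (pool_size, max_overflow, worker_processes, db_connection_limit) per app.
# Worker counts are illustrative assumptions.
apps = {
    "anuzoo": (20, 10, 2, 100),
    "app2":   (10, 5, 2, 50),
}

for name, (pool, overflow, workers, limit) in apps.items():
    peak = (pool + overflow) * workers  # worst-case open connections
    assert peak <= limit, f"{name} can exhaust its connection limit"
    print(f"{name}: up to {peak} of {limit} allowed connections")
```

Note that with API_WORKERS=4 and pool_size=20 + max_overflow=10, peak demand is (20 + 10) × 4 = 120 connections, which would exceed a 100-connection limit — shrink the pool or raise the limit accordingly.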
2. Redis Database Isolation
Use separate Redis database numbers:
- App 1 (Anuzoo): redis://redis:6379/0
- App 2 (Marketplace): redis://redis:6379/1
- App 3 (Blog): redis://redis:6379/2
Redis has 16 databases (0-15) by default.
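The logical database number is just the path component of the Redis URL; a stdlib-only sketch of how clients parse it:

```python
from urllib.parse import urlparse

def redis_db_number(url: str) -> int:
    """Return the logical database number from a redis:// URL (default 0)."""
    path = urlparse(url).path.lstrip("/")
    return int(path) if path else 0

# The three app URLs from the list above
print(redis_db_number("redis://redis:6379/0"))  # 0
print(redis_db_number("redis://redis:6379/1"))  # 1
print(redis_db_number("redis://redis:6379/2"))  # 2
```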
3. CPU Pinning (Advanced)
Pin containers to specific CPU cores:
In docker-compose.multi-app.yml
services:
  anuzoo_api:
    cpuset: "0,1"  # Use cores 0 and 1
  marketplace_api:
    cpuset: "2,3"  # Use cores 2 and 3
  blog_api:
    cpuset: "4,5"  # Use cores 4 and 5
4. Disk Space Monitoring
Create alert when database size exceeds threshold
docker compose -f docker-compose.db-server.yml exec db psql -U anuzoo_user -d postgres -c "
SELECT
datname,
pg_size_pretty(pg_database_size(datname)) as size,
CASE
WHEN pg_database_size(datname) > 50000000000 THEN 'WARNING: Over 50GB'
WHEN pg_database_size(datname) > 30000000000 THEN 'CAUTION: Over 30GB'
ELSE 'OK'
END as status
FROM pg_database
WHERE datname NOT IN ('template0', 'template1', 'postgres');
"
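If you'd rather run the same threshold check from a monitoring script, the CASE logic translates directly (thresholds in decimal bytes, as above):

```python
def size_status(size_bytes: int) -> str:
    # Mirrors the CASE expression above (decimal GB: 1 GB = 10**9 bytes)
    if size_bytes > 50_000_000_000:
        return "WARNING: Over 50GB"
    if size_bytes > 30_000_000_000:
        return "CAUTION: Over 30GB"
    return "OK"

print(size_status(25_000_000_000))  # a ~25 GB database -> OK
```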
---
💰 Cost Comparison
Shared Infrastructure (Multi-App Hosting)
| Apps | Total Cost | Cost per App | Savings vs Dedicated |
|------|-----------|--------------|----------------------|
| 1 app | $158.40/mo | $158.40 | Baseline |
| 2 apps | $158.40/mo | $79.20 | $158.40/mo saved |
| 3 apps | $158.40/mo | $52.80 | $316.80/mo saved |
| 5 apps | $158.40/mo | $31.68 | $633.60/mo saved |
Dedicated Infrastructure (Isolated Apps)
| Apps | Total Cost | Cost per App |
|------|-----------|--------------|
| 1 app | $158.40/mo | $158.40 |
| 2 apps | $316.80/mo | $158.40 |
| 3 apps | $475.20/mo | $158.40 |
| 5 apps | $792.00/mo | $158.40 |
Break-even point: If resource contention becomes an issue, dedicated infrastructure may be worth it for high-traffic apps.
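Both tables follow from the same arithmetic — one shared stack at $158.40/month versus $158.40 per dedicated stack:

```python
SHARED_TOTAL = 158.40      # one 2-server stack, however many apps it hosts
DEDICATED_PER_APP = 158.40 # one full stack per isolated app

for apps in (1, 2, 3, 5):
    per_app = SHARED_TOTAL / apps
    saved = DEDICATED_PER_APP * apps - SHARED_TOTAL
    print(f"{apps} apps: ${per_app:.2f}/app shared, ${saved:.2f}/mo saved vs dedicated")
```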
---
🎯 When to Use Shared vs Dedicated
Use Shared Infrastructure When:
✅ Apps are in same ecosystem (you own all of them)
✅ Apps have complementary traffic patterns (one busy in AM, other in PM)
✅ Total users < 80K across all apps
✅ Budget-conscious deployment
✅ Apps are related or from same business
✅ You can tolerate some resource sharing
Use Dedicated Infrastructure When:
✅ Apps serve different customers (multi-tenant SaaS)
✅ Apps require guaranteed performance (SLAs)
✅ Apps have different compliance requirements
✅ Apps have different scaling needs
✅ Total users > 80K per app
✅ Apps are unrelated businesses
✅ You need complete isolation for security/legal reasons
---
🔧 Backup Strategy for Multi-App
Create separate backup script:
#!/bin/bash
# /root/backup-multi-app.sh

BACKUP_DIR="/var/backups/multi-app"
DATE=$(date +%Y%m%d_%H%M%S)

mkdir -p $BACKUP_DIR/{anuzoo,app2,app3}

# Backup each database separately
docker compose -f docker-compose.db-server.yml exec -T db \
  pg_dump -U anuzoo_user anuzoo_prod | gzip > $BACKUP_DIR/anuzoo/db_$DATE.sql.gz

docker compose -f docker-compose.db-server.yml exec -T db \
  pg_dump -U app2_user app2_prod | gzip > $BACKUP_DIR/app2/db_$DATE.sql.gz

docker compose -f docker-compose.db-server.yml exec -T db \
  pg_dump -U app3_user app3_prod | gzip > $BACKUP_DIR/app3/db_$DATE.sql.gz

# Keep only last 7 days per app
find $BACKUP_DIR/anuzoo -name "*.gz" -mtime +7 -delete
find $BACKUP_DIR/app2 -name "*.gz" -mtime +7 -delete
find $BACKUP_DIR/app3 -name "*.gz" -mtime +7 -delete

echo "Multi-app backup completed: $DATE"
Schedule in crontab:
Daily backups at 2 AM
0 2 * * * /root/backup-multi-app.sh
---
📈 Scaling Triggers
When to Add More Servers
Add 2nd Application Server when:
- CPU usage consistently > 75% on app server
- API response times > 500ms
- Total users > 60K across all apps
Upgrade Database Server when:
- PostgreSQL using > 6GB RAM consistently
- Slow query times (> 200ms avg)
- Connection pool exhaustion
- Total database size > 120GB
Split into Dedicated Infrastructure when:
- One app growing much faster than others
- Resource contention causing issues
- Need guaranteed performance for specific app
- Compliance or security requirements change
---
✅ Summary
Yes, you can host multiple apps on your 2-server structure!
Recommended configurations:
- 2-3 medium apps: Best performance, clear resource allocation
- 5-10 small apps: Maximum cost efficiency
- 1 large + 2-3 small: Mix heavy and light workloads
Key considerations:
1. Database isolation: Separate databases and users
2. Resource limits: Set memory/CPU limits per container
3. Monitoring: Track per-app resource usage
4. Backups: Separate backup schedules
5. Scaling: Plan for when to split into dedicated infrastructure
Cost savings example:
- 3 apps on shared infrastructure: $52.80/app
- 3 apps on dedicated infrastructure: $158.40/app
- Monthly savings: $316.80
- Annual savings: $3,801.60
---
Next Steps:
1. Decide how many apps you want to host
2. Follow the implementation guide above
3. Set up per-app monitoring
4. Plan resource allocation
5. Configure backups for each database
Questions? See:
- MULTI_SERVER_SETUP.md - Original 2-server setup
- DEPLOYMENT_OPTIONS.md - Scaling options when you outgrow shared infrastructure
Vultr Account Setup Guide
Complete step-by-step guide for setting up your Vultr account and preparing for Anuzoo deployment.
---
Part 1: Account Creation
Step 1: Create Vultr Account
1. Go to https://vultr.com
2. Click Sign Up in the top right
3. Fill in:
- Email address
- Password (strong password recommended)
- Agree to Terms of Service
Step 2: Add Payment Method
1. Log into Vultr dashboard
2. Go to Billing → Payment Methods
3. Choose payment type:
- Credit/Debit Card (recommended)
- PayPal
- Crypto (Bitcoin, Ethereum)
Step 3: Add Starting Credit (Optional)
- New accounts often get promotional credits ($100-$250)
- Check Billing → Credits to see available promotions
- Apply any promo codes in the Promo Code section
---
Part 2: Server Provisioning
Phase 1 Production Server (Recommended)
Server Specifications:
- Type: Cloud Compute
- Plan: High Performance AMD
- CPU: 6 vCPU
- RAM: 16 GB
- Storage: 320 GB NVMe SSD
- Bandwidth: 5 TB included
- Cost: $96/month + $9.60 backups = $105.60/month
Provisioning Steps:
1. Click Deploy + button in dashboard
2. Select Deploy New Server
Choose Server Type
- Select: Cloud Compute
Choose Location
Pick the closest datacenter to your USA users:
- Ashburn, VA (IAD) - East Coast (recommended for nationwide)
- Hillsboro, OR (PDX) - West Coast
- Seattle, WA (SEA) - Pacific Northwest
- Dallas, TX (DFW) - Central USA
- Chicago, IL (ORD) - Midwest
- Atlanta, GA (ATL) - Southeast
- New York/NJ (EWR) - Northeast
- Miami, FL (MIA) - Southeast
- Los Angeles, CA (LAX) - West Coast
- Silicon Valley, CA (SJC) - West Coast
Recommendation: Choose Ashburn, VA for best average latency across USA.
Choose Server Image
- Select Operating System tab
- Choose: Ubuntu 22.04 LTS x64
Choose Server Size
- Select High Performance tab
- Click: 6 vCPU / 16GB RAM / 320GB SSD ($96/mo)
Additional Features (Optional but Recommended)
- ☑ Enable IPv6 (Free)
- ☑ Enable Auto Backups (+$9.60/mo = 10% of server cost)
- ☑ Enable DDOS Protection (Free)
- ☐ Skip: Private Networking (not needed for single server)
- ☐ Skip: Managed Databases (we're using Docker PostgreSQL)
Server Settings
- Server Hostname: anuzoo-prod-01 (or your preferred name)
- Server Label: Anuzoo Production
- SSH Keys: We'll add this next
Deploy
1. Review configuration
2. Click Deploy Now
3. Server will provision in 60-90 seconds
4. Note the IP Address shown in dashboard

---
Part 3: SSH Key Setup
Option A: Generate New SSH Key (Recommended)
On your local machine:
```bash
# Generate SSH key pair
ssh-keygen -t ed25519 -C "your_email@example.com"
```

When prompted:
- Enter file: `/c/Users/YOUR_USERNAME/.ssh/vultr_anuzoo`
- Enter passphrase: [choose strong passphrase]

```bash
# Display public key
cat ~/.ssh/vultr_anuzoo.pub
```

Copy the public key output (starts with `ssh-ed25519 ...`)
Add to Vultr:
1. In Vultr dashboard, go to Settings → SSH Keys
2. Click Add SSH Key
3. Paste public key
4. Name: Anuzoo Production
5. Click Add SSH Key
Add to existing server:
```bash
# SSH into server with password (shown in Vultr dashboard)
ssh root@YOUR_SERVER_IP

# Create .ssh directory
mkdir -p ~/.ssh
chmod 700 ~/.ssh

# Add public key
echo "YOUR_PUBLIC_KEY_HERE" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# Exit and test SSH key login
exit
ssh -i ~/.ssh/vultr_anuzoo root@YOUR_SERVER_IP
```
Option B: Use Existing SSH Key
If you already have an SSH key:
```bash
# Display your existing public key
cat ~/.ssh/id_rsa.pub
# OR
cat ~/.ssh/id_ed25519.pub
```
Add it to Vultr following the same steps above.
---
Part 4: Domain Configuration
Step 1: Add DNS Records
In your domain registrar (GoDaddy, Namecheap, Cloudflare, etc.):
A Record (Main Domain)
Type: A
Name: @
Value: YOUR_VULTR_SERVER_IP
TTL: 600 (or Auto)
A Record (WWW Subdomain)
Type: A
Name: www
Value: YOUR_VULTR_SERVER_IP
TTL: 600
A Record (API Subdomain)
Type: A
Name: api
Value: YOUR_VULTR_SERVER_IP
TTL: 600
Example Configuration:
anuzoo.com A YOUR_SERVER_IP
www.anuzoo.com A YOUR_SERVER_IP
api.anuzoo.com A YOUR_SERVER_IP
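Once the records are in, propagation can be checked from the command line before moving on. A small sketch, assuming `dig` is installed (`apt install -y dnsutils`); the function names are ours:

```shell
# check_record NAME EXPECTED_IP -> "ok NAME" or "MISMATCH NAME -> <resolved>"
check_record() {
  local got
  got=$(dig +short "$1" A | head -n1)
  if [ "$got" = "$2" ]; then
    echo "ok $1"
  else
    echo "MISMATCH $1 -> ${got:-none}"
  fi
}

# Check every A record from the table against the server IP
verify_dns() {
  local ip="$1"; shift
  local name
  for name in "$@"; do check_record "$name" "$ip"; done
}

# Usage: verify_dns YOUR_SERVER_IP anuzoo.com www.anuzoo.com api.anuzoo.com
```

A "MISMATCH ... -> none" line usually just means the record hasn't propagated to your resolver yet; retry before assuming the record is wrong.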
Step 2: Wait for DNS Propagation
- DNS changes take 5 minutes to 24 hours to propagate
- Test with: `nslookup yourdomain.com`
- Or use: https://dnschecker.org
---
Part 5: Firewall Configuration
Option A: Use Vultr Firewall (Recommended)
Create Firewall Group:
1. Go to Network → Firewall
2. Click Add Firewall Group
3. Name: Anuzoo Production
Add Rules:
| Direction | Protocol | Port Range | Source | Description |
|-----------|----------|------------|--------|-------------|
| Inbound | TCP | 22 | 0.0.0.0/0 | SSH (will restrict later) |
| Inbound | TCP | 80 | 0.0.0.0/0 | HTTP |
| Inbound | TCP | 443 | 0.0.0.0/0 | HTTPS |
| Inbound | ICMP | - | 0.0.0.0/0 | Ping |
| Outbound | All | All | 0.0.0.0/0 | All outbound traffic |
Attach to Server:
1. Click Manage on firewall group
2. Click Linked Instances
3. Select your server
4. Click Add Instance
Option B: Use UFW (On-Server Firewall)
We'll configure this during deployment in the migration guide.
---
Part 6: Get Required API Keys
Before deployment, gather these credentials:
Stripe (Payment Processing)
1. Go to https://dashboard.stripe.com
2. Create account or log in
3. Navigate to Developers → API Keys
4. Copy:
   - Publishable key (starts with `pk_live_`)
   - Secret key (starts with `sk_live_`)
5. Navigate to Developers → Webhooks
6. Click Add endpoint
7. Endpoint URL: `https://api.yourdomain.com/webhooks/stripe`
8. Events to send: Select all payment and subscription events
9. Copy Signing secret (starts with `whsec_`)

Pinecone (Vector Database for AI Matching)
1. Go to https://app.pinecone.io
2. Create account or log in
3. Click API Keys in left sidebar
4. Copy API Key
5. Note Environment (e.g., `us-west1-gcp`)
6. Create index:
   - Name: `anuzoo-embeddings`
   - Dimensions: `512` (for CLIP embeddings)
   - Metric: `cosine`
   - Pod Type: `p1.x1` (starter)
OpenAI (For Embeddings)
1. Go to https://platform.openai.com
2. Create account or log in
3. Navigate to API Keys
4. Click Create new secret key
5. Name: `Anuzoo Production`
6. Copy key (starts with `sk-`)
7. Note: This is shown only once!

SendGrid (Email Service)
1. Go to https://app.sendgrid.com
2. Create account or log in
3. Navigate to Settings → API Keys
4. Click Create API Key
5. Name: `Anuzoo Production`
6. Permissions: Full Access
7. Copy API key (starts with `SG.`)
8. Set up sender authentication:
- Go to Settings → Sender Authentication
- Follow domain authentication steps
- Add DNS records to your domain
Generate Security Secrets
On your local machine:
```bash
# Generate SECRET_KEY
openssl rand -hex 32

# Generate JWT_SECRET_KEY
openssl rand -hex 32

# Generate SESSION_SECRET_KEY
openssl rand -hex 32
```
Copy all three keys - you'll need them for .env.production
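The three commands above can be wrapped into a single step that appends the keys to a scratch file, which avoids copy/paste mistakes. A small sketch — the `write_secrets` helper and target filename are ours, not part of the project:

```shell
# gen_secret -> 64 hex characters (256 bits of entropy)
gen_secret() { openssl rand -hex 32; }

# write_secrets FILE -> append the three keys this guide calls for
write_secrets() {
  {
    echo "SECRET_KEY=$(gen_secret)"
    echo "JWT_SECRET_KEY=$(gen_secret)"
    echo "SESSION_SECRET_KEY=$(gen_secret)"
  } >> "$1"
}

# Usage: write_secrets secrets.txt   # then copy the values into .env.production
#        shred -u secrets.txt        # and remove the scratch file afterwards
```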
---
Part 7: Pre-Deployment Checklist
Before running deployment, ensure you have:
- ✅ Vultr server provisioned and running
- ✅ Server IP address noted
- ✅ SSH key access configured and tested
- ✅ Domain DNS records configured (A records)
- ✅ Firewall rules configured
- ✅ Stripe API keys (publishable, secret, webhook secret)
- ✅ Pinecone API key and index created
- ✅ OpenAI API key
- ✅ SendGrid API key and sender domain verified
- ✅ Security secrets generated (SECRET_KEY, JWT_SECRET_KEY, SESSION_SECRET_KEY)
Save all credentials securely - you'll need them when configuring .env.production
---
Part 8: Cost Estimation
Monthly Recurring Costs
Phase 1 (Production):
- Cloud Compute (6 vCPU, 16GB): $96.00
- Auto Backups (10%): $9.60
- Total: ~$105.60/month
Additional Costs:
- Bandwidth overage: $0.01/GB (after 5TB free)
- Snapshots: $0.05/GB/month (optional)
External Services:
- Stripe: 2.9% + $0.30 per transaction (no monthly fee)
- Pinecone: Free tier (100K vectors), then $70/month for 1M vectors
- OpenAI: Pay-per-use (~$0.0001 per embedding)
- SendGrid: Free tier (100 emails/day), then $19.95/month (40K emails)
Bandwidth Usage Estimate
For 20,000 active users:
- Profile images: 100MB
- Event images: 200MB
- Messages/data: 100MB
- Video thumbnails: 100MB
- AI model requests: 50MB
- Total per month: ~1.5TB (well within 5TB limit)
With Cloudflare CDN (recommended):
- Reduces bandwidth by 80-90%
- Anuzoo server: ~300GB/month
- Cloudflare handles: ~1.2TB/month (free)
---
Next Steps
Once you have:
1. Server provisioned
2. SSH access working
3. All API keys gathered
4. DNS configured
Proceed to: VULTR_MIGRATION_PLAN.md → Phase 1 deployment
---
Troubleshooting
Can't SSH into server
```bash
# Check if server is running:
# in the Vultr dashboard, verify server status is "Running"

# Test SSH connectivity
ssh -v root@YOUR_SERVER_IP

# If using SSH key:
ssh -i ~/.ssh/vultr_anuzoo root@YOUR_SERVER_IP

# If connection refused, check firewall allows port 22
```
DNS not resolving
```bash
# Test DNS
nslookup yourdomain.com

# Check DNS propagation:
# visit https://dnschecker.org
```
If not working after 24 hours, verify:
- A records are correct
- DNS servers are set to your registrar's defaults
Forgot root password
1. In Vultr dashboard, click server name
2. Go to Settings tab
3. Click Server Password
4. Click Reset Root Password
5. New password will be displayed

Need to resize server
1. Server must be stopped first
2. Click Settings → Resize
3. Choose larger plan
4. Confirm resize
5. Start server
6. Note: Cannot downgrade, only upgrade

---
Support Resources
- Vultr Documentation: https://docs.vultr.com
- Vultr Support: https://my.vultr.com/support/
- Community: https://discord.gg/vultr (unofficial)
- Status Page: https://status.vultr.com
Response Times:
- Critical issues: 1-4 hours
- High priority: 4-12 hours
- Normal: 12-24 hours
Vultr Migration Plan for Anuzoo
Created: 2025-12-14 Updated: 2025-12-14 Target: Vultr Cloud Infrastructure Recommended: 2-Server Architecture for MVP Production
---
🎯 Migration Strategy Overview
Recommended Path: 2-Server Architecture (MVP Production)
Phase 1: 2-Server Deploy - $158.40/month ⭐ RECOMMENDED FOR MVP
- Application Server (6 vCPU, 16GB RAM) + Database Server (4 vCPU, 8GB RAM)
- Production-ready for 30K-80K users
- Dedicated database resources for better performance
- Independent scaling capabilities
- Private network connectivity (10 Gbps, free)
- Goal: Production-ready, scalable infrastructure
Phase 2: Scale When Needed (80K+ Users)
- Add more application servers or upgrade database
- Add load balancing
- Enable high availability
- Goal: Handle growth efficiently
Alternative Path: Single Server (Testing/Budget-Constrained)
Phase 1: Single Server - $105.60/month
- Single Vultr server (6 vCPU, 16GB RAM)
- Good for testing and initial launch (20K-50K users)
- All services on one server
- Goal: Lowest cost entry point
- Migration Path: Use MIGRATION_SINGLE_TO_MULTI.md to upgrade to 2-server
---
📋 Deployment Options Quick Reference
| Option | Cost | Users | Use Case | Guide |
|--------|------|-------|----------|-------|
| 2-Server ⭐ | $158.40/mo | 30K-80K | MVP Production | MULTI_SERVER_SETUP.md |
| Single Server | $105.60/mo | 20K-50K | Testing/Budget | See Phase 1 below |
| 3+ Servers | $343+/mo | 80K+ | High traffic | DEPLOYMENT_OPTIONS.md |
For complete comparison: See DEPLOYMENT_OPTIONS.md
---
🚀 RECOMMENDED: Phase 1 - 2-Server Architecture ($158.40/mo)
Why 2-Server for MVP?
✅ Better Performance: Database has dedicated CPU/RAM ✅ Independent Scaling: Upgrade app or DB separately ✅ Production-Ready: Supports 30K-80K users ✅ Room to Grow: No architecture changes needed until 80K+ users ✅ Better Value: $52.80/month more than single server, but much better performance ✅ Private Network: Fast, secure, free 10 Gbps connection
Setup Guide
Complete setup guide: See MULTI_SERVER_SETUP.md
Quick overview:
1. Provision Database Server:
- 4 vCPU, 8GB RAM, 160GB SSD ($48/mo)
- Enable private networking
- Enable auto backups ($4.80/mo)
2. Provision Application Server:
- 6 vCPU, 16GB RAM, 320GB SSD ($96/mo)
- Enable private networking
- Enable auto backups ($9.60/mo)
3. Deploy Services:
- Database server: PostgreSQL only
- Application server: API, Redis, AI models
4. Configure Private Network:
- Verify both servers can communicate
- Update DATABASE_URL to use private IP
- Test connectivity
Total Time: 2-3 hours Downtime: None (new deployment)
---
🔧 ALTERNATIVE: Single Server Deployment ($105.60/mo)
Use this option if:
- Budget-constrained launch
- Testing/proof of concept
- Expected users < 20K in first 6 months
- Planning to migrate to 2-server later
Migration to 2-Server: See MIGRATION_SINGLE_TO_MULTI.md
---
📋 Pre-Deployment Checklist (All Options)
Account Setup
- [ ] Create Vultr account
- [ ] Add payment method
- [ ] Generate SSH key pair
- [ ] Save API credentials
Local Preparation
- [ ] Backup current database
- [ ] Export environment variables
- [ ] Test Docker builds locally
- [ ] Document current configurations
Domain & DNS
- [ ] Purchase/configure domain (if needed)
- [ ] Set up Cloudflare account (free tier)
- [ ] Prepare DNS records
---
🚀 Phase 1: Production Deploy (Option 1 - $105.60/mo)
Step 1: Create Vultr Server
Server Configuration:
- Type: Cloud Compute - High Performance AMD
- CPU: 6 vCPU AMD EPYC
- RAM: 16GB
- Storage: 320GB NVMe SSD
- Bandwidth: 5TB/month included
- Location: Ashburn, VA (us-east) or nearest datacenter
- OS: Ubuntu 22.04 LTS
- Monthly Cost: $96/month
- Auto Backups: $9.60/month
- Total: $105.60/month
Setup Commands:
```bash
# SSH into server
ssh root@YOUR_SERVER_IP

# Update system
apt update && apt upgrade -y

# Install Docker
curl -fsSL https://get.docker.com | sh
systemctl enable docker
systemctl start docker

# Install Docker Compose
apt install -y docker-compose-plugin

# Install additional tools
apt install -y git nginx certbot python3-certbot-nginx ufw fail2ban
```
Step 2: Configure Firewall
```bash
# Configure UFW firewall
ufw default deny incoming
ufw default allow outgoing
ufw allow ssh
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable

# Verify
ufw status
```
Step 3: Clone Repository
```bash
# Create app directory
mkdir -p /var/www/anuzoo
cd /var/www/anuzoo

# Clone your repository (replace with your repo URL)
git clone https://github.com/YOUR_USERNAME/anuzoo.git .

# Or upload files via SCP:
# scp -r /local/path/anuzoo root@YOUR_SERVER_IP:/var/www/anuzoo
```
Step 4: Configure Environment
```bash
# Create production .env file
nano /var/www/anuzoo/.env.production
```
Required Environment Variables:
```bash
# Database
DATABASE_URL=postgresql+asyncpg://anuzoo_user:STRONG_PASSWORD@localhost:5432/anuzoo_prod
POSTGRES_USER=anuzoo_user
POSTGRES_PASSWORD=STRONG_PASSWORD_HERE
POSTGRES_DB=anuzoo_prod

# Redis
REDIS_URL=redis://localhost:6379/0

# API
API_HOST=0.0.0.0
API_PORT=8001
API_WORKERS=4
API_RELOAD=false
ENVIRONMENT=production

# Security
SECRET_KEY=GENERATE_STRONG_SECRET_KEY_HERE
JWT_SECRET_KEY=GENERATE_ANOTHER_STRONG_KEY_HERE
ALLOWED_ORIGINS=https://yourdomain.com,https://www.yourdomain.com

# Stripe (from your Stripe dashboard)
STRIPE_SECRET_KEY=sk_live_YOUR_STRIPE_KEY
STRIPE_PUBLISHABLE_KEY=pk_live_YOUR_STRIPE_KEY
STRIPE_WEBHOOK_SECRET=whsec_YOUR_WEBHOOK_SECRET

# Pinecone (for AI matching)
PINECONE_API_KEY=YOUR_PINECONE_KEY
PINECONE_ENVIRONMENT=YOUR_PINECONE_ENV
PINECONE_INDEX_NAME=anuzoo-embeddings

# OpenAI (for embeddings)
OPENAI_API_KEY=sk-YOUR_OPENAI_KEY

# Email (optional - for notifications)
SMTP_HOST=smtp.sendgrid.net
SMTP_PORT=587
SMTP_USER=apikey
SMTP_PASSWORD=YOUR_SENDGRID_API_KEY
SMTP_FROM_EMAIL=noreply@yourdomain.com

# Frontend URL
FRONTEND_URL=https://yourdomain.com
```
Generate Secrets:
```bash
# Generate SECRET_KEY
openssl rand -hex 32

# Generate JWT_SECRET_KEY
openssl rand -hex 32
```
Step 5: Create Production Docker Compose
Create /var/www/anuzoo/docker-compose.production.yml:
```yaml
version: '3.8'

services:
  # PostgreSQL Database
  db:
    image: postgres:15-alpine
    container_name: anuzoo_db
    restart: always
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "127.0.0.1:5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER}"]
      interval: 10s
      timeout: 5s
      retries: 5
    deploy:
      resources:
        limits:
          memory: 2G
        reservations:
          memory: 1G

  # Redis Cache
  redis:
    image: redis:7-alpine
    container_name: anuzoo_redis
    restart: always
    ports:
      - "127.0.0.1:6379:6379"
    volumes:
      - redis_data:/data
    command: redis-server --appendonly yes --maxmemory 512mb --maxmemory-policy allkeys-lru
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5
    deploy:
      resources:
        limits:
          memory: 512M

  # FastAPI Backend
  api:
    build:
      context: ./apps/api
      dockerfile: Dockerfile.production
    container_name: anuzoo_api
    restart: always
    env_file:
      - .env.production
    ports:
      - "127.0.0.1:8001:8001"
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_healthy
    volumes:
      - ./apps/api:/app
      - ai_models:/app/models  # AI model cache
    command: uvicorn app.main:app --host 0.0.0.0 --port 8001 --workers 4
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8001/health"]
      interval: 30s
      timeout: 10s
      retries: 3
    deploy:
      resources:
        limits:
          memory: 5G  # API + AI models
        reservations:
          memory: 3G

volumes:
  postgres_data:
    driver: local
  redis_data:
    driver: local
  ai_models:
    driver: local
```
Step 6: Create Production Dockerfile for API
Create /var/www/anuzoo/apps/api/Dockerfile.production:
```dockerfile
FROM python:3.11-slim

# Set working directory
WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    g++ \
    postgresql-client \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Copy requirements
COPY requirements.txt .

# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Install PyTorch CPU version (for AI models)
RUN pip install --no-cache-dir torch torchvision --index-url https://download.pytorch.org/whl/cpu

# Copy application
COPY . .

# Create non-root user
RUN useradd -m -u 1000 anuzoo && chown -R anuzoo:anuzoo /app
USER anuzoo

# Expose port
EXPOSE 8001

# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=40s --retries=3 \
    CMD curl -f http://localhost:8001/health || exit 1

# Run migrations and start server
CMD alembic upgrade head && uvicorn app.main:app --host 0.0.0.0 --port 8001 --workers 4
```
Step 7: Build and Start Services
```bash
cd /var/www/anuzoo

# Build images
docker compose -f docker-compose.production.yml build

# Start services
docker compose -f docker-compose.production.yml up -d

# Check logs
docker compose -f docker-compose.production.yml logs -f api

# Verify services are running
docker compose -f docker-compose.production.yml ps
```
Step 8: Configure NGINX Reverse Proxy
Create /etc/nginx/sites-available/anuzoo:
```nginx
# Rate limiting
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;
limit_req_zone $binary_remote_addr zone=general_limit:10m rate=30r/s;

# Upstream API
upstream api_backend {
    server 127.0.0.1:8001;
    keepalive 64;
}

server {
    listen 80;
    listen [::]:80;
    server_name yourdomain.com www.yourdomain.com;

    # Redirect to HTTPS
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name yourdomain.com www.yourdomain.com;

    # SSL certificates (will be configured by certbot)
    ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;

    # SSL configuration
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;

    # Max upload size (for images)
    client_max_body_size 10M;

    # Frontend static files
    root /var/www/anuzoo/dist;
    index index.html;

    # API proxy
    location /api/ {
        limit_req zone=api_limit burst=20 nodelay;
        proxy_pass http://api_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Timeouts
        proxy_connect_timeout 60s;
        proxy_send_timeout 60s;
        proxy_read_timeout 60s;
    }

    # API docs
    location /docs {
        proxy_pass http://api_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }

    # WebSocket support
    location /ws/ {
        proxy_pass http://api_backend;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 86400;
    }

    # Frontend SPA routing
    location / {
        limit_req zone=general_limit burst=50 nodelay;
        try_files $uri $uri/ /index.html;
    }

    # Static assets caching
    location ~* \.(jpg|jpeg|png|gif|ico|css|js|svg|woff|woff2|ttf|eot)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
    }
}
```
Enable site:
```bash
# Link site configuration
ln -s /etc/nginx/sites-available/anuzoo /etc/nginx/sites-enabled/

# Test configuration
nginx -t

# Reload NGINX
systemctl reload nginx
```
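After reloading, the `api_limit` zone can be smoke-tested by firing more requests than the burst allows and counting rejections. nginx's `limit_req` answers 503 by default (or 429 if you set `limit_req_status 429;`). A sketch — the helper name is ours:

```shell
# count_rejected: reads one HTTP status code per line on stdin and counts
# the rate-limited responses (503 or 429)
count_rejected() {
  grep -c -E '^(503|429)$' || true
}

# Usage (against your deployed domain):
# for i in $(seq 1 40); do
#   curl -s -o /dev/null -w '%{http_code}\n' https://yourdomain.com/api/health
# done | count_rejected
# Expect a nonzero count once the 20-request burst is exhausted.
```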
Step 9: SSL Certificate with Let's Encrypt
```bash
# Get SSL certificate
certbot --nginx -d yourdomain.com -d www.yourdomain.com
```

Follow prompts and choose:
- Enter email for renewal notifications
- Agree to terms
- Redirect HTTP to HTTPS: Yes

```bash
# Test auto-renewal
certbot renew --dry-run
```
Step 10: Build and Deploy Frontend
```bash
cd /var/www/anuzoo

# Install Node.js
curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
apt install -y nodejs

# Build frontend
npm install
npm run build

# Frontend will be in /var/www/anuzoo/dist
# NGINX will serve it automatically
```
Step 11: Database Migration
If migrating from existing database, export data:
```bash
# (Run this on your OLD server/local machine)
pg_dump -U postgres -d anuzoo > anuzoo_backup.sql

# Transfer to Vultr
scp anuzoo_backup.sql root@YOUR_VULTR_IP:/tmp/

# On Vultr server, import:
docker compose -f docker-compose.production.yml exec -T db psql -U anuzoo_user -d anuzoo_prod < /tmp/anuzoo_backup.sql

# Or run fresh migrations:
docker compose -f docker-compose.production.yml exec api alembic upgrade head
```
Step 12: Verify Deployment
```bash
# Check all containers are running
docker compose -f docker-compose.production.yml ps

# Check API health
curl https://yourdomain.com/api/health

# Check API docs
curl https://yourdomain.com/docs

# Check logs
docker compose -f docker-compose.production.yml logs -f
```
Step 13: Set Up Monitoring
Create /var/www/anuzoo/monitoring.sh:
```bash
#!/bin/bash
# Simple monitoring script
LOG_FILE="/var/log/anuzoo_monitor.log"

echo "[$(date)] Checking services..." >> $LOG_FILE

# Check Docker containers
docker compose -f /var/www/anuzoo/docker-compose.production.yml ps >> $LOG_FILE

# Check disk space
df -h / >> $LOG_FILE

# Check memory
free -h >> $LOG_FILE

# Check API health
curl -f https://yourdomain.com/api/health >> $LOG_FILE 2>&1 || echo "API DOWN!" >> $LOG_FILE

echo "---" >> $LOG_FILE
```
Make executable and add to cron:
```bash
chmod +x /var/www/anuzoo/monitoring.sh

# Add to crontab (run every 5 minutes)
crontab -e

# Add line:
*/5 * * * * /var/www/anuzoo/monitoring.sh
```
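The monitoring script only logs; a hedged sketch of extending it with a disk-space alert (the threshold and helper names are ours; `df --output` is GNU coreutils, standard on Ubuntu):

```shell
# disk_pct -> root filesystem usage as a bare integer percentage
disk_pct() {
  df --output=pcent / | tail -n1 | tr -dc '0-9'
}

# disk_alert THRESHOLD -> "ALERT: disk NN% full" or "disk ok (NN%)"
disk_alert() {
  local pct
  pct=$(disk_pct)
  if [ "$pct" -ge "$1" ]; then
    echo "ALERT: disk ${pct}% full"
  else
    echo "disk ok (${pct}%)"
  fi
}

# e.g. append to monitoring.sh:
# disk_alert 85 >> $LOG_FILE
```

The same pattern extends to memory or container-count checks; anything that prints "ALERT" can later be grepped out of the log and emailed via SendGrid.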
Step 14: Backup Script
Create /var/www/anuzoo/backup.sh:
```bash
#!/bin/bash
BACKUP_DIR="/var/backups/anuzoo"
DATE=$(date +%Y%m%d_%H%M%S)
mkdir -p $BACKUP_DIR

# Backup database
docker compose -f /var/www/anuzoo/docker-compose.production.yml exec -T db \
    pg_dump -U anuzoo_user anuzoo_prod | gzip > $BACKUP_DIR/db_$DATE.sql.gz

# Backup uploaded files (if any)
tar -czf $BACKUP_DIR/files_$DATE.tar.gz /var/www/anuzoo/uploads 2>/dev/null || true

# Keep only last 7 days of backups
find $BACKUP_DIR -name "*.gz" -mtime +7 -delete

echo "Backup completed: $DATE"
```
Make executable and add to cron:
```bash
chmod +x /var/www/anuzoo/backup.sh

# Add to crontab (daily at 2 AM)
crontab -e

# Add line:
0 2 * * * /var/www/anuzoo/backup.sh
```
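Backups are only useful if they restore, so at minimum verify the gzip stream is intact after each run. A sketch (the helper name is ours):

```shell
# verify_backup FILE -> "ok" if the gzip archive is readable, else "corrupt"
verify_backup() {
  if gzip -t "$1" 2>/dev/null; then echo "ok"; else echo "corrupt"; fi
}

# Check the newest database backup:
# latest=$(ls -t /var/backups/anuzoo/db_*.sql.gz | head -n1)
# verify_backup "$latest"
```

For a fuller guarantee, periodically restore a backup into a scratch database and compare row counts with production; `gzip -t` only proves the file isn't truncated or corrupted on disk.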
---
✅ Phase 1 Completion Checklist
- [ ] Vultr server created and accessible
- [ ] Docker and Docker Compose installed
- [ ] Firewall configured
- [ ] Repository cloned
- [ ] Environment variables set
- [ ] Docker containers running
- [ ] NGINX configured
- [ ] SSL certificate installed
- [ ] Frontend built and deployed
- [ ] Database migrated
- [ ] API responding to requests
- [ ] Monitoring script running
- [ ] Backup script scheduled
---
📊 Cost Breakdown by Deployment Option
2-Server Architecture (RECOMMENDED FOR MVP) ⭐
- Application Server: $96/month
- Application Backups: $9.60/month
- Database Server: $48/month
- Database Backups: $4.80/month
- Total: $158.40/month
- Capacity: 30,000-80,000 users
- User cost: $0.0020-0.0053 per user
Single Server (Testing/Budget-Constrained)
- Server: $96/month
- Auto Backups: $9.60/month
- Total: $105.60/month
- Capacity: 20,000-50,000 users
- User cost: $0.0021-0.0053 per user
3-Server and Beyond
- 3-Server (Frontend + API + DB): $343.20/month (80K-150K users)
- 4-Server (Full separation): $564.40/month (100K-250K users)
- 5-Server HA: $829.40/month (250K-500K users)
See DEPLOYMENT_OPTIONS.md for complete cost comparison
Annual Cost Comparison
| Option | Monthly | Annual | Per User (avg) |
|--------|---------|--------|----------------|
| Single Server | $105.60 | $1,267.20 | $0.0037 |
| 2-Server ⭐ | $158.40 | $1,900.80 | $0.0028 |
| 3-Server | $343.20 | $4,118.40 | $0.0029 |
Cost vs Previous Plan
Previous plan: 8 vCPU, 32GB RAM single server = $211.20/month New 2-server plan: $158.40/month
Monthly savings: $52.80 Annual savings: $633.60 Better performance: ✅ Dedicated database resources
---
🔧 Troubleshooting
Container won't start
```bash
# Check logs
docker compose -f docker-compose.production.yml logs SERVICE_NAME

# Rebuild
docker compose -f docker-compose.production.yml up -d --build SERVICE_NAME
```
Database connection issues
```bash
# Check database is running
docker compose -f docker-compose.production.yml exec db psql -U anuzoo_user -d anuzoo_prod

# Check connection from API
docker compose -f docker-compose.production.yml exec api python -c "from app.core.database import engine; print('OK')"
```
Out of memory
```bash
# Check memory usage
free -h
docker stats

# Restart services
docker compose -f docker-compose.production.yml restart
```
High CPU usage
```bash
# Check which container
docker stats

# Check processes inside container
docker compose -f docker-compose.production.yml exec api top
```
---
📞 Support Resources
- Vultr Support: https://my.vultr.com/support/
- Vultr Docs: https://docs.vultr.com/
- Docker Docs: https://docs.docker.com/
- NGINX Docs: https://nginx.org/en/docs/
---
Ready to deploy? Start with Phase 1 above.
Migration Guide: Single Server → 2-Server Setup
Estimated Downtime: 30-60 minutes Difficulty: Moderate Data Loss Risk: None (if followed correctly)
---
📋 Pre-Migration Checklist
Before You Begin
- [ ] Current single-server deployment is stable
- [ ] Full database backup created and downloaded
- [ ] Database backup tested (can restore successfully)
- [ ] Maintenance window scheduled (low-traffic period)
- [ ] Team notified of maintenance window
- [ ] DNS TTL lowered to 300 seconds (5 minutes) - 24 hours before migration
What You'll Need
- [ ] Vultr account with payment method
- [ ] Second server provisioned (4 vCPU, 8GB RAM, $48/mo)
- [ ] Private networking enabled on both servers
- [ ] Both servers in same datacenter
---
🎯 Migration Strategy
We'll use a Blue-Green Deployment approach:
1. Blue (Current): Single server keeps running
2. Green (New): Set up 2-server architecture in parallel
3. Cutover: Switch DNS to new setup
4. Verify: Test everything works
5. Decommission: Destroy old single server
Benefit: Zero-risk rollback if issues occur
---
📊 Migration Timeline
| Phase | Duration | Downtime | Description |
|-------|----------|----------|-------------|
| Phase 1: Prepare | 1 hour | None | Provision new servers, verify access |
| Phase 2: Deploy | 1 hour | None | Install services on new servers |
| Phase 3: Migrate Data | 30 min | None | Copy database to new DB server |
| Phase 4: Cutover | 15 min | Yes | Switch DNS and traffic |
| Phase 5: Verify | 30 min | None | Test and monitor |
| Total | ~3 hours | 15-30 min | |
---
🚀 Phase 1: Prepare New Servers (1 hour, No Downtime)
Step 1.1: Provision Database Server
In Vultr Dashboard:
1. Deploy new server
2. IMPORTANT: Choose SAME DATACENTER as current server
3. Type: Cloud Compute
4. Plan: High Performance AMD
5. Size: 4 vCPU, 8GB RAM, 160GB SSD ($48/mo)
6. Enable Private Networking ✓
7. Enable Auto Backups ✓
8. OS: Ubuntu 22.04 LTS
9. Hostname: anuzoo-db-01
10. Deploy
Note the IPs:
- Public IP: `_____________` (for SSH)
- Private IP: `_____________` (for PostgreSQL)
Step 1.2: SSH into New Database Server
```bash
# SSH into new database server
ssh root@NEW_DB_SERVER_PUBLIC_IP

# Update system
apt-get update && apt-get upgrade -y

# Install Docker
curl -fsSL https://get.docker.com | sh

# Install tools
apt-get install -y git vim htop

# Clone repository
cd /root
git clone https://github.com/yourusername/anuzoo.git
cd anuzoo
```
Step 1.3: Verify Private Network
On current server:
```bash
# Check if private networking enabled
ip addr show | grep "10\."
# If not showing, may need to enable in Vultr settings
```

Test connectivity between servers:

```bash
# From current server, ping new DB server
ping -c 3 NEW_DB_SERVER_PRIVATE_IP
# Should get replies
```
---
🗄️ Phase 2: Deploy Database on New Server (1 hour, No Downtime)
Step 2.1: Configure Database Server
On new database server:
```bash
cd /root/anuzoo

# Copy environment file
cp .env.production.example .env.production

# Edit configuration
nano .env.production
```
Set these values (copy from current server):
```bash
POSTGRES_USER=anuzoo_user
POSTGRES_PASSWORD=
POSTGRES_DB=anuzoo_prod

# Performance tuning for 8GB RAM
POSTGRES_SHARED_BUFFERS=2GB
POSTGRES_EFFECTIVE_CACHE_SIZE=6GB
```
Step 2.2: Create Data Directory
mkdir -p /var/lib/anuzoo/postgres
chmod 700 /var/lib/anuzoo/postgres
Step 2.3: Start PostgreSQL
```bash
# Mark as database server
echo "db" > /etc/anuzoo-server-type

# Make script executable
chmod +x scripts/deploy-multi-server.sh

# Start PostgreSQL
./scripts/deploy-multi-server.sh db start

# Wait for startup
sleep 30

# Verify health
./scripts/deploy-multi-server.sh db health
```
Expected: ✓ PostgreSQL is healthy
---
📦 Phase 3: Migrate Database (30 minutes, No Downtime)
Step 3.1: Backup Current Database
On current single server:
```bash
cd /root/anuzoo

# Create backup
./scripts/deploy.sh backup

# Verify backup created
ls -lh /var/backups/anuzoo/
# Should see: db_YYYYMMDD_HHMMSS.sql.gz
```
Step 3.2: Transfer Backup to New DB Server
On current server:
```bash
# Copy backup to new DB server
scp /var/backups/anuzoo/db_*.sql.gz root@NEW_DB_SERVER_PUBLIC_IP:/tmp/
```
Step 3.3: Restore to New Database
On new database server:
```bash
# Find the backup file
ls -lh /tmp/db_*.sql.gz

# Restore database
gunzip -c /tmp/db_*.sql.gz | \
    docker compose -f docker-compose.db-server.yml exec -T db \
    psql -U anuzoo_user -d anuzoo_prod

# Verify data migrated
docker compose -f docker-compose.db-server.yml exec db \
    psql -U anuzoo_user -d anuzoo_prod -c "\dt"

# Check row counts
docker compose -f docker-compose.db-server.yml exec db \
    psql -U anuzoo_user -d anuzoo_prod -c \
    "SELECT COUNT(*) FROM users;"
```
Verify: Row counts match your current database
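That comparison can be scripted so it isn't done by eye. A sketch — the `rows_match` helper is ours, and `psql -tA` prints a bare number with no headers:

```shell
# rows_match OLD NEW -> "match" or "MISMATCH: OLD vs NEW"
rows_match() {
  if [ "$1" = "$2" ]; then echo "match"; else echo "MISMATCH: $1 vs $2"; fi
}

# Usage (hypothetical, run where each server is reachable):
# old=$(docker compose -f docker-compose.production.yml exec -T db \
#        psql -U anuzoo_user -d anuzoo_prod -tAc "SELECT COUNT(*) FROM users;")
# new=$(docker compose -f docker-compose.db-server.yml exec -T db \
#        psql -U anuzoo_user -d anuzoo_prod -tAc "SELECT COUNT(*) FROM users;")
# rows_match "$old" "$new"
```

Repeat for each high-traffic table (events, messages) rather than just `users`.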
---
🔄 Phase 4: Reconfigure Current Server as App-Only (No Downtime)
Step 4.1: Update Environment Variables
On current server:
cd /root/anuzoo
nano .env.production
Update database connection:
```bash
# OLD (local database):
DATABASE_URL=postgresql+asyncpg://anuzoo_user:password@db:5432/anuzoo_prod

# NEW (remote database via private network):
DATABASE_URL=postgresql+asyncpg://anuzoo_user:password@NEW_DB_PRIVATE_IP:5432/anuzoo_prod
DB_SERVER_PRIVATE_IP=NEW_DB_PRIVATE_IP
```
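To avoid typos when hand-editing the URL, it can be assembled from its parts. A small sketch (the helper name is ours; adjust the variable names to your `.env.production`):

```shell
# make_db_url USER PASSWORD HOST DBNAME -> async SQLAlchemy connection URL
make_db_url() {
  echo "postgresql+asyncpg://$1:$2@$3:5432/$4"
}

# Usage:
# make_db_url anuzoo_user "$POSTGRES_PASSWORD" "$DB_SERVER_PRIVATE_IP" anuzoo_prod
```

Note this simple sketch does not URL-encode the password; if yours contains `@`, `:`, or `/`, percent-encode it first.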
Step 4.2: Test Database Connection
On current server:
```bash
# Install netcat if needed
apt-get install -y netcat

# Test connection to new database
nc -zv $DB_SERVER_PRIVATE_IP 5432
# Should show: Connection succeeded
```
Step 4.3: Download New Docker Compose Files
On current server:
```bash
cd /root/anuzoo

# Pull latest changes (includes new docker-compose files)
git pull origin main

# Verify new files exist
ls -l docker-compose.app-server.yml
```
---
🎬 Phase 5: Cutover (15-30 minutes, DOWNTIME BEGINS)
Step 5.1: Announce Maintenance
Post to users:
"Anuzoo will be undergoing brief maintenance for 15-30 minutes
starting at [TIME]. We're upgrading our infrastructure for better
performance. Thank you for your patience!"
Step 5.2: Stop Current Services
On current server:
```bash
# Stop old single-server setup
./scripts/deploy.sh stop

# Verify stopped
docker ps
# Should show no containers running
```
Step 5.3: Switch to Multi-Server Configuration
On current server (now app-only):
```bash
# Mark as app server
echo "app" > /etc/anuzoo-server-type

# Start with new app-server configuration
./scripts/deploy-multi-server.sh app start

# Wait for startup
sleep 60

# Test database connection
./scripts/deploy-multi-server.sh app test-network
# Should show: ✓ Can connect to PostgreSQL port 5432

# Check health
./scripts/deploy-multi-server.sh app health
# Should show:
#   ✓ Redis is healthy
#   ✓ API is healthy
```
Step 5.4: Verify Application
Test API
curl http://localhost:8001/health
Should return: {"status":"healthy"}
Test database query
curl http://localhost:8001/api/v1/users/me \
-H "Authorization: Bearer YOUR_TEST_TOKEN"
Should return user data
Step 5.5: Update NGINX (if needed)
On current server:
NGINX configuration should still work
Just verify it's running
systemctl status nginx
Restart if needed
systemctl restart nginx
---
✅ Phase 6: Verify Everything Works (30 minutes)
Step 6.1: Run Full Health Checks
On app server (current server):
./scripts/monitor.sh check
On database server:
./scripts/monitor.sh check
Step 6.2: Test Core Features
1. Visit website: https://yourdomain.com
2. Login: Test user authentication
3. Create post: Test database writes
4. Upload image: Test file storage
5. View feed: Test database reads
6. Send message: Test real-time features
Step 6.3: Monitor Logs
On app server:
Watch API logs
./scripts/deploy-multi-server.sh app logs api
Look for:
- No connection errors
- Normal request processing
- No 500 errors
On database server:
Watch database logs
./scripts/deploy-multi-server.sh db logs
Look for:
- Successful connections
- No authentication errors
- Normal query processing
Step 6.4: Check Performance
Test database query speed:
On app server
time curl http://localhost:8001/api/v1/events/
Should be fast (< 200ms)
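Note that `time curl` measures the whole invocation, while curl's own `-w '%{time_total}'` flag reports just the transfer time. A small sketch for classifying the result against the < 200ms target (the thresholds mirror this guide, not any repository script):

```shell
# Classify a response time (in seconds) against the targets used in this guide.
# Feed it curl's timing, e.g.:
#   t=$(curl -s -o /dev/null -w '%{time_total}' http://localhost:8001/api/v1/events/)
#   classify_latency "$t"
classify_latency() {
  awk -v t="$1" 'BEGIN {
    if (t < 0.2)      print "ok"
    else if (t < 0.5) print "slow"
    else              print "critical"
  }'
}

classify_latency 0.143   # -> ok
```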
Monitor resource usage:
On app server
./scripts/deploy-multi-server.sh app status
On database server
./scripts/deploy-multi-server.sh db status
---
🎉 Phase 7: Announce Success
Post to users:
"Maintenance complete! Anuzoo is back online with improved
performance. Thanks for your patience!"
---
🔙 Rollback Plan (If Issues Occur)
If something goes wrong during cutover:
Quick Rollback (< 5 minutes)
On current server:
Stop new configuration
./scripts/deploy-multi-server.sh app stop
Switch back to single-server setup
docker compose -f docker-compose.production.yml up -d
Verify
curl http://localhost:8001/health
Everything should work as before.
After Rollback
1. Investigate the issue
2. Fix configuration
3. Schedule new migration window
4. Try again
---
🗑️ Phase 8: Cleanup (After 7 Days)
Only after 7 days of stable operation:
Step 8.1: Remove Old Database from App Server
On app server:
Remove old database volume (if still exists)
docker volume rm anuzoo_postgres_data
This frees up ~10-50GB
Step 8.2: Verify Backups
On database server
ls -lh /var/backups/anuzoo/
Should see daily backups
Step 8.3: Update Documentation
Update your internal docs with:
- New server IPs
- New architecture diagram
- New backup procedures
---
📊 Post-Migration Monitoring (First 48 Hours)
Monitor These Metrics
Application Server:
Check every 4 hours
./scripts/monitor.sh check
Watch for:
- CPU usage < 70%
- Memory usage < 75%
- No API errors
- Fast response times
Database Server:
Check every 4 hours
./scripts/monitor.sh check
Watch for:
- Active connections < 100
- No slow queries
- Replication lag = 0
- Memory usage < 70%
Database Connection Monitoring
On database server:
Check connections every hour
watch -n 3600 'docker compose -f docker-compose.db-server.yml exec db \
psql -U anuzoo_user -d anuzoo_prod -c \
"SELECT count(*), state FROM pg_stat_activity GROUP BY state;"'
Normal output:
count | state
-------+--------
5 | active
10 | idle
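To turn that query into a quick threshold check, you can sum the per-state counts from the psql output. The sketch below simulates the psql output with a here-document; in practice you would pipe the real query result in:

```shell
# Sum connection counts from psql output on stdin and warn past a threshold.
check_connections() {
  threshold="$1"
  # Only count lines shaped like "   5 | active" (numeric first column).
  total=$(awk -F'|' 'NF==2 && $1 ~ /^[ ]*[0-9]+[ ]*$/ { sum += $1 } END { print sum+0 }')
  if [ "$total" -gt "$threshold" ]; then
    echo "WARN: $total connections (threshold $threshold)"
  else
    echo "OK: $total connections"
  fi
}

check_connections 100 <<'EOF'
 count | state
-------+--------
     5 | active
    10 | idle
EOF
```

With the sample output above it prints `OK: 15 connections`.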
---
💰 Cost Impact
Before Migration (Single Server):
- Server: $105.60/month
- Total: $105.60/month
After Migration (2 Servers):
- Application Server: $96/month + $9.60 backups
- Database Server: $48/month + $4.80 backups
- Total: $158.40/month
Increase: $52.80/month ($633.60/year)
Benefits for the cost:
- ✅ Better performance (isolated database)
- ✅ Independent scaling
- ✅ Less risk of one service affecting another
- ✅ Easier to troubleshoot issues
- ✅ Ready to scale to 80K users
---
🎯 Success Criteria
Migration is successful when:
- ✅ Both servers healthy
- ✅ Application accessible at https://yourdomain.com
- ✅ Users can login and use all features
- ✅ Database queries working normally
- ✅ No errors in logs
- ✅ Private network connection stable
- ✅ Backups running on database server
- ✅ Monitoring active on both servers
- ✅ Response times same or better than before
- ✅ No user complaints
---
📞 Troubleshooting Common Issues
Issue: API Can't Connect to Database
Symptoms: 500 errors, "connection refused" in logs
Fix:
On app server
Verify environment variable set correctly
echo $DB_SERVER_PRIVATE_IP
Test network connectivity
ping -c 3 $DB_SERVER_PRIVATE_IP
nc -zv $DB_SERVER_PRIVATE_IP 5432
If connection fails, check:
1. Firewall allows port 5432 from private network
2. PostgreSQL listening on 0.0.0.0
3. Both servers in same datacenter
Issue: Slow Database Queries
Symptoms: API timeouts, slow page loads
Fix:
On database server
Check if PostgreSQL using performance settings
docker compose -f docker-compose.db-server.yml exec db \
psql -U anuzoo_user -d anuzoo_prod -c "SHOW shared_buffers;"
Should show: 2GB
Check connections
docker compose -f docker-compose.db-server.yml exec db \
psql -U anuzoo_user -d anuzoo_prod -c \
"SELECT count(*) FROM pg_stat_activity;"
If > 100, may need to increase max_connections
Issue: High Memory Usage on App Server
Symptoms: Out of memory errors, API crashes
Fix:
Check which service using memory
docker stats --no-stream
If API using too much:
1. Check for memory leaks in application
2. Reduce number of workers in docker-compose.app-server.yml
3. Consider upgrading to 8 vCPU, 32GB RAM plan
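To pick out the heaviest container without eyeballing the table, the stats output can be sorted. The sketch below only needs "NAME MEM%" pairs on stdin, so it is fed sample data here; docker's `--format` placeholders (`.Name`, `.MemPerc`) are documented, the rest is illustrative:

```shell
# Print the name of the container using the most memory.
# Real usage:
#   docker stats --no-stream --format '{{.Name}} {{.MemPerc}}' | tr -d '%' | top_mem
top_mem() {
  sort -k2 -rn | head -n 1 | awk '{ print $1 }'
}

printf 'anuzoo-api 61.3\nanuzoo-db 22.8\nanuzoo-redis 3.1\n' | top_mem   # -> anuzoo-api
```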
---
📝 Checklist Summary
Print this page and check off as you go:
Pre-Migration
- [ ] Database backup created and tested
- [ ] Maintenance window scheduled
- [ ] Team notified
- [ ] New database server provisioned
- [ ] Private networking verified
Migration
- [ ] Phase 1: New servers prepared
- [ ] Phase 2: Database deployed on new server
- [ ] Phase 3: Data migrated to new server
- [ ] Phase 4: Current server reconfigured
- [ ] Phase 5: Cutover completed
- [ ] Phase 6: Verification passed
Post-Migration
- [ ] Monitoring active on both servers
- [ ] Backups verified
- [ ] Documentation updated
- [ ] Team notified of success
- [ ] Users notified
Congratulations on your successful migration! 🎉
Anuzoo Monitoring & Alerting Guide
Complete guide to monitoring your Anuzoo production deployment on Vultr.
---
Table of Contents
1. Overview
2. Installation
3. Monitoring Components
4. Commands
5. Metrics Explained
6. Alert Thresholds
7. Log Files
8. Troubleshooting
9. Best Practices
---
Overview
The Anuzoo monitoring system provides:
- Real-time health checks - Every 5 minutes
- Automated alerts - Email notifications when issues detected
- Performance metrics - CPU, memory, disk usage tracking
- Service monitoring - Database, Redis, API health checks
- Daily reports - Comprehensive system status
- Automated backups - Database backups every 6 hours
---
Installation
Step 1: Install Monitoring System
SSH into your Vultr server
ssh root@YOUR_SERVER_IP
Navigate to project directory
cd /root/anuzoo
Run installation script (with optional alert email)
sudo ./scripts/install-monitoring.sh your-email@example.com
Step 2: Verify Installation
Check cron jobs are configured
cat /etc/cron.d/anuzoo-monitoring
View monitoring status
anuzoo-status
Run manual health check
./scripts/monitor.sh check
Step 3: Test Email Alerts
Send test email
echo "Test alert" | mail -s "Anuzoo Test" your-email@example.com
Trigger a test alert (simulate high CPU)
./scripts/monitor.sh alert
---
Monitoring Components
1. Health Checks
Frequency: Every 5 minutes
What's Checked:
- Service status (Database, Redis, API)
- System resources (CPU, memory, disk)
- API endpoint responsiveness
- Database connectivity
- Redis connectivity
- SSL certificate expiration
Command:
./scripts/monitor.sh check
Output Example:
=========================================
Anuzoo System Health Check
2025-12-14 18:30:00
=========================================
Service Health:
✓ db is running
✓ redis is running
✓ api is running
System Resources:
✓ CPU usage: 45%
✓ Memory usage: 62%
✓ Disk usage: 38%
Application Health:
✓ API endpoint responding
✓ Database is accepting connections
✓ Redis is responding
SSL Certificate:
✓ SSL certificate valid for 87 days
All checks passed!
2. Alert System
Frequency: Every 15 minutes
Alert Triggers:
- Service down or unhealthy
- CPU usage > 75%
- Memory usage > 80%
- Disk usage > 85%
- API not responding
- SSL certificate expires in < 30 days
Email Format:
Subject: Anuzoo Alert: System Issues Detected
High CPU usage: 85%
High memory usage: 90%
API endpoint not responding.
3. Metrics Logging
Frequency: Every 5 minutes
Metrics Tracked:
- CPU usage percentage
- Memory usage percentage
- Disk usage percentage
- Timestamp
Log Location: /var/log/anuzoo/metrics.log
Format:
2025-12-14 18:30:00,CPU:45%,MEM:62%,DISK:38%
2025-12-14 18:35:00,CPU:48%,MEM:64%,DISK:38%
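Because the format is a stable comma-separated line, trends are easy to extract with awk. A sketch that averages the CPU column (shown here on sample lines; point it at /var/log/anuzoo/metrics.log for real data):

```shell
# Average the CPU% field from metrics.log-style lines on stdin.
avg_cpu() {
  awk -F',' '{ gsub(/[^0-9.]/, "", $2); sum += $2; n++ } END { if (n) printf "%.1f\n", sum / n }'
}

printf '2025-12-14 18:30:00,CPU:45%%,MEM:62%%,DISK:38%%\n2025-12-14 18:35:00,CPU:48%%,MEM:64%%,DISK:38%%\n' | avg_cpu   # -> 46.5
```

Real usage: `avg_cpu < /var/log/anuzoo/metrics.log`.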
4. Daily Reports
Frequency: Daily at 6 AM
Report Includes:
- Service status
- System resources
- Container stats
- Disk usage by directory
- Recent logs (API, Database, Redis)
- NGINX access and error logs
Location: /var/log/anuzoo/health_report_YYYYMMDD_HHMMSS.txt
5. Database Backups
Frequency: Every 6 hours (00:00, 06:00, 12:00, 18:00)
Backup Location: /var/backups/anuzoo/
Retention: 7 days (older backups automatically deleted)
Manual Backup:
./scripts/deploy.sh backup
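The 6-hour schedule corresponds to a cron entry along these lines (illustrative; the actual line written by install-monitoring.sh may differ):

```shell
# /etc/cron.d/anuzoo-monitoring — run a backup at 00:00, 06:00, 12:00, 18:00
0 */6 * * * root cd /root/anuzoo && ./scripts/deploy.sh backup >> /var/log/anuzoo/backups.log 2>&1
```

Note that entries in /etc/cron.d require the user field (`root` here) between the schedule and the command.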
---
Commands
Quick Status Dashboard
anuzoo-status
Shows:
- Running services
- CPU, memory, disk usage
- Recent API requests
- Recent errors
- System uptime
Run Health Check
./scripts/monitor.sh check
Full health check with detailed output.
Generate Detailed Report
./scripts/monitor.sh report
Creates comprehensive system report in /var/log/anuzoo/.
View Container Stats
./scripts/monitor.sh stats
Real-time container resource usage.
Log Current Metrics
./scripts/monitor.sh metrics
Manually log current system metrics.
Run Alert Check
ALERT_EMAIL=your@email.com ./scripts/monitor.sh alert
Check for issues and send email if problems detected.
---
Metrics Explained
CPU Usage
What it measures: Percentage of CPU capacity being used
Healthy Range: 0-70%
Warning Range: 71-80%
Critical Range: 81-100%
High CPU Causes:
- High API traffic
- AI model processing (image detection)
- Database queries
- Background jobs
Actions:
- Check container stats to identify which service
- Review API logs for heavy endpoints
- Consider upgrading to High Frequency plan
- Optimize database queries
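The CPU figure itself is typically derived from top's idle percentage. A sketch of that parsing, fed a sample line here; top's summary-line field layout varies across versions and locales, so treat this as illustrative rather than as what scripts/monitor.sh actually does:

```shell
# Derive overall CPU% as 100 - idle, from a `top -bn1` summary line on stdin.
cpu_from_top() {
  awk -F',' '{
    for (i = 1; i <= NF; i++)
      if ($i ~ / id/) { gsub(/[^0-9.]/, "", $i); print 100 - $i }
  }'
}

echo '%Cpu(s):  5.0 us,  2.0 sy,  0.0 ni, 90.0 id,  3.0 wa' | cpu_from_top   # -> 10
```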
Memory Usage
What it measures: Percentage of RAM being used
Healthy Range: 0-75%
Warning Range: 76-85%
Critical Range: 86-100%
High Memory Causes:
- AI models loaded in memory (PyTorch + ViT ~4GB)
- Database connections
- Redis cache growth
- Memory leaks
Actions:
Check which service is using memory
docker stats --no-stream
Check Redis memory
docker compose -f docker-compose.production.yml exec redis redis-cli INFO memory
Check database connections
docker compose -f docker-compose.production.yml exec db psql -U anuzoo_user -d anuzoo_prod -c "SELECT count(*) FROM pg_stat_activity;"
Restart services if needed
./scripts/deploy.sh restart
Disk Usage
What it measures: Percentage of storage being used
Healthy Range: 0-75%
Warning Range: 76-85%
Critical Range: 86-100%
High Disk Causes:
- User uploaded images
- Database growth
- Log file accumulation
- Docker images
Actions:
Check disk usage by directory
du -sh /var/lib/anuzoo/*
Clean Docker system
docker system prune -a --volumes
Review log file sizes
du -sh /var/log/*
Clean old backups manually
find /var/backups/anuzoo -name "*.sql.gz" -mtime +7 -delete
API Response Time
What it measures: How long API takes to respond
Healthy Range: < 200ms
Warning Range: 200-500ms
Critical Range: > 500ms
Check Response Time:
Test API endpoint
time curl -s http://localhost:8001/health
Check NGINX logs for slow requests
tail -f /var/log/nginx/api_access.log
Slow API Causes:
- Database query performance
- AI model inference time
- High concurrent requests
- Network latency
---
Alert Thresholds
Default Thresholds
These are configured in scripts/monitor.sh:
ALERT_THRESHOLD_CPU=75 # Alert if CPU > 75% (optimized for 6 vCPU)
ALERT_THRESHOLD_MEM=80 # Alert if memory > 80%
ALERT_THRESHOLD_DISK=85 # Alert if disk > 85%
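The check the script makes against these variables is presumably a simple integer comparison; a minimal sketch of that logic (the real monitor.sh may differ):

```shell
ALERT_THRESHOLD_CPU=75   # same default as above

# Emit an alert line when usage exceeds the threshold.
check_cpu() {
  if [ "$1" -gt "$ALERT_THRESHOLD_CPU" ]; then
    echo "ALERT: High CPU usage: $1%"
  else
    echo "OK: CPU usage: $1%"
  fi
}

check_cpu 85   # -> ALERT: High CPU usage: 85%
```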
Customize Thresholds
Edit the monitoring script:
nano scripts/monitor.sh
Find and modify:
ALERT_THRESHOLD_CPU=90 # Raise CPU threshold
ALERT_THRESHOLD_MEM=90 # Raise memory threshold
ALERT_THRESHOLD_DISK=80 # Lower disk threshold
Configure Alert Email
Add to .env.production:
ALERT_EMAIL=your-email@example.com
Or set in cron:
sudo nano /etc/cron.d/anuzoo-monitoring
Modify the alert line:
*/15 * * * * root cd /root/anuzoo && ALERT_EMAIL=your@email.com ./scripts/monitor.sh alert
---
Log Files
Application Logs
| Log File | Location | Retention | Description |
|----------|----------|-----------|-------------|
| Health Checks | /var/log/anuzoo/health_check.log | 7 days | Hourly health check results |
| Alerts | /var/log/anuzoo/alerts.log | 7 days | Alert trigger history |
| Metrics | /var/log/anuzoo/metrics.log | 30 days | CPU/memory/disk metrics |
| Daily Reports | /var/log/anuzoo/health_report_*.txt | 30 days | Daily system reports |
| Backups | /var/log/anuzoo/backups.log | 7 days | Database backup history |
| Monitor | /var/log/anuzoo/monitor.log | 7 days | Monitoring script output |
NGINX Logs
| Log File | Location | Retention | Description |
|----------|----------|-----------|-------------|
| API Access | /var/log/nginx/api_access.log | 14 days | API request logs |
| API Errors | /var/log/nginx/api_error.log | 14 days | API error logs |
| Web Access | /var/log/nginx/web_access.log | 14 days | Website access logs |
| Web Errors | /var/log/nginx/web_error.log | 14 days | Website error logs |
Docker Container Logs
View with Docker Compose:
All services
docker compose -f docker-compose.production.yml logs -f
Specific service
docker compose -f docker-compose.production.yml logs -f api
Last 100 lines
docker compose -f docker-compose.production.yml logs --tail=100 api
View Logs
Tail health check log
tail -f /var/log/anuzoo/health_check.log
View alerts
cat /var/log/anuzoo/alerts.log
View metrics (last 100 entries)
tail -n 100 /var/log/anuzoo/metrics.log
View latest daily report
ls -lt /var/log/anuzoo/health_report_*.txt | head -n 1 | xargs cat
NGINX access log (live)
tail -f /var/log/nginx/api_access.log
NGINX error log
tail -f /var/log/nginx/api_error.log
---
Troubleshooting
Email Alerts Not Working
Check mail service:
Test mail command
echo "Test" | mail -s "Test" your@email.com
Check mail logs
tail -f /var/log/mail.log
Install mail service (if missing):
sudo apt-get install -y mailutils postfix
Choose "Internet Site" during postfix setup
Configure SendGrid (recommended for production):
Install SendGrid support
sudo apt-get install -y libsasl2-modules
Configure postfix for SendGrid
sudo nano /etc/postfix/main.cf
Add:
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
header_size_limit = 4096000
relayhost = [smtp.sendgrid.net]:587
Create password file
sudo nano /etc/postfix/sasl_passwd
Add:
[smtp.sendgrid.net]:587 apikey:YOUR_SENDGRID_API_KEY
Hash password file
sudo postmap /etc/postfix/sasl_passwd
sudo chmod 600 /etc/postfix/sasl_passwd
Restart postfix
sudo systemctl restart postfix
Cron Jobs Not Running
Check cron status:
sudo systemctl status cron
View cron logs:
grep CRON /var/log/syslog | tail -n 50
Manually test cron command:
cd /root/anuzoo && ./scripts/monitor.sh alert
Verify cron file:
cat /etc/cron.d/anuzoo-monitoring
High Resource Usage
Identify resource hog:
Container stats
docker stats --no-stream
Process list
top -bn1 | head -n 20
Disk usage
du -sh /var/lib/anuzoo/*
Restart specific service:
docker compose -f docker-compose.production.yml restart api
docker compose -f docker-compose.production.yml restart db
docker compose -f docker-compose.production.yml restart redis
Service Won't Start
Check logs:
docker compose -f docker-compose.production.yml logs api
Check environment variables:
cat .env.production | grep DATABASE_URL
Rebuild container:
docker compose -f docker-compose.production.yml build api
docker compose -f docker-compose.production.yml up -d api
---
Best Practices
1. Regular Monitoring
- Check anuzoo-status daily
- Review daily reports weekly
- Monitor metrics trends monthly
2. Alert Management
- Configure email alerts to team distribution list
- Set up SMS alerts for critical issues (use service like PagerDuty)
- Escalate if multiple alerts in short period
3. Performance Optimization
- Review slow API endpoints monthly
- Optimize database queries based on logs
- Clean up old data periodically
- Monitor bandwidth usage
4. Backup Verification
- Test database restore monthly
- Verify backup files aren't corrupted
- Keep backups off-server (download to local storage)
5. Security Monitoring
- Review NGINX error logs for attack attempts
- Monitor failed login attempts
- Update SSL certificates before expiration
- Audit system access logs
6. Capacity Planning
- Track metrics trends over time
- Plan server upgrades before hitting thresholds
- Monitor user growth vs resource usage
- Budget for scaling
7. Incident Response
Create runbook for common issues:
API Not Responding:
1. Check service status
2. Review logs for errors
3. Restart API service
4. Check database connectivity
5. Escalate if issue persists
High CPU Usage:
1. Identify which service
2. Check for high traffic
3. Review recent deployments
4. Scale resources if needed
Out of Disk Space:
1. Clean Docker system
2. Remove old logs
3. Clean old backups
4. Resize disk if needed
---
Advanced Monitoring (Optional)
Grafana + Prometheus
For visual dashboards and advanced metrics:
Install Prometheus
docker run -d -p 9090:9090 prom/prometheus
Install Grafana
docker run -d -p 3000:3000 grafana/grafana
Configure Prometheus to scrape metrics
Configure Grafana dashboards
Sentry Error Tracking
For application error monitoring:
1. Sign up at https://sentry.io
2. Get DSN
3. Add to .env.production:
SENTRY_DSN=https://your-dsn@sentry.io/project-id
4. Integrate in FastAPI app
Uptime Monitoring
External monitoring services:
- UptimeRobot (free): https://uptimerobot.com
- Pingdom (paid): https://www.pingdom.com
- StatusCake (free tier): https://www.statuscake.com
Configure to check:
- https://api.yourdomain.com/health
- https://yourdomain.com
---
Support
If you encounter issues with monitoring:
1. Check logs: /var/log/anuzoo/
2. Review cron configuration: /etc/cron.d/anuzoo-monitoring
3. Test monitoring scripts manually
4. Verify email configuration
For urgent issues, consider professional monitoring services or DevOps support.
---
Summary
Your monitoring system provides:
✅ Automated health checks every 5 minutes
✅ Email alerts for critical issues
✅ Performance metrics tracking
✅ Daily comprehensive reports
✅ Automated database backups
✅ Log rotation and retention
✅ Quick status dashboard
Stay proactive! Regular monitoring prevents small issues from becoming major outages.
Anuzoo Streaming Infrastructure Setup
Status: Ready to deploy
Location: apps/streaming/
Last Updated: December 10, 2025
---
Overview
The streaming infrastructure consists of 8 services:
| Service | Purpose | Port | Status |
|---------|---------|------|--------|
| LiveKit | WebRTC streaming server | 7880-7882 | ✅ Ready |
| MediaMTX | RTMP ingest (OBS) | 1935, 8554, 8888 | ✅ Ready |
| Redis | Stream state & pub/sub | 6380 | ✅ Ready |
| MinIO | Video storage (S3-compatible) | 9002-9003 | ✅ Ready |
| Transcoder | FFmpeg video processing | - | ⚠️ Needs build |
| Thumbnail | OpenAI Vision thumbnails | - | ⚠️ Needs API key |
| Captions | Whisper captions | - | ⚠️ Needs API key |
| Moderation | Claude Vision moderation | - | ⚠️ Needs API key |
---
Quick Start (Core Services Only)
Option 1: Start Core Services (Recommended for testing)
Start just LiveKit, MediaMTX, Redis, and MinIO without the processing workers:
cd apps/streaming
Start core services
docker-compose up -d livekit mediamtx redis minio
Verify services started
docker-compose ps
View logs
docker-compose logs -f livekit mediamtx
Access Points:
- LiveKit API: http://localhost:7880
- MediaMTX RTMP: rtmp://localhost:1935
- MediaMTX HLS: http://localhost:8888
- MinIO Console: http://localhost:9003 (Login: anuzoo/development)
- Redis: localhost:6380
Option 2: Start All Services (Requires API keys)
If you have OpenAI and Anthropic API keys:
cd apps/streaming
Create .env file from example
cp .env.example .env
Edit .env and add your API keys
nano .env # or use your editor
Start all services
docker-compose up -d
Check status
docker-compose ps
---
Environment Variables
Required for Processing Workers:
Create apps/streaming/.env:
OpenAI API (for thumbnails & captions)
OPENAI_API_KEY=sk-...
Anthropic API (for content moderation)
ANTHROPIC_API_KEY=sk-ant-...
Optional Configuration:
LiveKit Custom Keys (optional, uses devkey by default)
LIVEKIT_API_KEY=your-custom-key
LIVEKIT_API_SECRET=your-custom-secret
---
Service Details
1. LiveKit (WebRTC Streaming)
Purpose: Real-time video/audio streaming using WebRTC Ports:
- 7880: HTTP API
- 7881: WebRTC TCP
- 7882: WebRTC UDP
Config: apps/streaming/livekit/config.yaml
Default Credentials:
- API Key: `devkey`
- API Secret: `anuzoo-dev-secret-key-minimum-32-chars-required-for-security`
⚠️ Important Configuration Notes:
- LiveKit requires secrets to be 32+ characters for security
- TURN is disabled for local development (not needed for LAN testing)
- For production, generate secure random keys and enable TURN with proper domain
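A 32-byte random value comfortably clears the 32-character minimum. One portable way to generate it, assuming /dev/urandom is available (any CSPRNG-backed generator works, e.g. `openssl rand -hex 32`):

```shell
# 32 random bytes rendered as 64 hex characters.
secret=$(od -An -vtx1 -N32 /dev/urandom | tr -d ' \n')
echo "LIVEKIT_API_SECRET=${secret}"
echo "length: ${#secret}"   # -> length: 64
```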
Test Connectivity:
curl http://localhost:7880
Use Cases:
- Creator live streams
- 1-on-1 video calls
- Group video chats
- Real-time pet playdates
---
2. MediaMTX (RTMP Ingest)
Purpose: Accept RTMP streams from OBS/Streamlabs Ports:
- 1935: RTMP ingest
- 8554: RTSP
- 8888: HLS playback
- 8889: WebRTC
Config: apps/streaming/mediamtx/mediamtx.yml
Test with OBS:
1. Open OBS Studio
2. Settings → Stream
3. Server: rtmp://localhost:1935/live
4. Stream Key: test123
5. Start Streaming
View Stream:
- HLS: http://localhost:8888/live/test123/index.m3u8
- RTSP: rtsp://localhost:8554/live/test123
---
3. Redis (Stream State)
Purpose: Real-time state management and pub/sub Port: 6380 Image: redis:7-alpine
Connect:
redis-cli -p 6380
Use Cases:
- Active stream tracking
- Viewer count sync
- Chat message queue
- Presence indicators
---
4. MinIO (Video Storage)
Purpose: S3-compatible storage for VOD (Video on Demand) Ports:
- 9002: API
- 9003: Web Console
Credentials:
- Username: `anuzoo`
- Password: `development`
Access Console: http://localhost:9003
Create Bucket:
docker exec anuzoo-minio mc mb local/anuzoo-streams
Use Cases:
- Stream recordings
- Thumbnail storage
- Processed video files
- Backup archives
---
5. Transcoder Worker (FFmpeg)
Purpose: Video transcoding and format conversion
Location: apps/streaming/processing/transcoder/
Status: ⚠️ Requires build
Build:
cd apps/streaming
docker-compose build transcoder
docker-compose up -d transcoder
Features:
- Transcode to multiple resolutions (1080p, 720p, 480p)
- Convert formats (MP4, WebM, HLS)
- Generate preview clips
- Optimize for mobile
---
6. Thumbnail Generator (OpenAI Vision)
Purpose: AI-generated thumbnails for streams
Location: apps/streaming/processing/thumbnail/
Status: ⚠️ Requires OPENAI_API_KEY
Features:
- Extract key frames from video
- AI analysis of best thumbnail
- Generate custom thumbnails
- Add overlay text
Build & Run:
Set API key in .env first
docker-compose build thumbnail
docker-compose up -d thumbnail
---
7. Caption Generator (Whisper API)
Purpose: Automatic speech-to-text captions
Location: apps/streaming/processing/captions/
Status: ⚠️ Requires OPENAI_API_KEY
Features:
- Real-time transcription
- Multi-language support
- SRT subtitle generation
- Searchable transcripts
Build & Run:
Set API key in .env first
docker-compose build captions
docker-compose up -d captions
---
8. Moderation Worker (Claude Vision)
Purpose: AI-powered content moderation
Location: apps/streaming/processing/moderation/
Status: ⚠️ Requires ANTHROPIC_API_KEY
Features:
- Real-time frame analysis
- Flag inappropriate content
- Multi-level moderation (strict, standard, relaxed)
- Automatic stream termination for violations
Build & Run:
Set API key in .env first
docker-compose build moderation
docker-compose up -d moderation
---
Testing the Streaming Stack
Test 1: Start a Live Stream (OBS)
1. Start core services
cd apps/streaming
docker-compose up -d livekit mediamtx
2. Configure OBS
Server: rtmp://localhost:1935/live
Stream Key: mystream
3. Start streaming in OBS
4. View stream
Open VLC or browser: http://localhost:8888/live/mystream/index.m3u8
Test 2: WebRTC Stream (LiveKit)
Use LiveKit React SDK in frontend
See: src/components/StreamPublisher.tsx
See: src/components/StreamViewer.tsx
Test 3: Check Service Health
cd apps/streaming
Check all running services
docker-compose ps
View logs for specific service
docker-compose logs -f livekit
docker-compose logs -f mediamtx
Check resource usage
docker stats
---
Integration with Anuzoo API
The streaming backend is connected via apps/api/app/routers/streams.py:
Endpoints:
| Endpoint | Method | Purpose |
|----------|--------|---------|
| /streams/create | POST | Create new livestream |
| /streams/{id} | GET | Get stream details |
| /streams/{id}/join | POST | Join as viewer |
| /streams/{id}/end | POST | End stream |
| /streams/token | POST | Get LiveKit access token |
Frontend Components:
- src/components/StreamPublisher.tsx - Creator stream UI
- src/components/StreamViewer.tsx - Viewer UI
- src/pages/Streams.tsx - Streams list page
- src/pages/LiveStreamView.tsx - Full screen viewer
---
Port Reference
| Port | Service | Protocol | Purpose |
|------|---------|----------|---------|
| 1935 | MediaMTX | RTMP | OBS ingest |
| 6380 | Redis | TCP | Stream state |
| 7880 | LiveKit | HTTP | API |
| 7881 | LiveKit | TCP | WebRTC |
| 7882 | LiveKit | UDP | WebRTC |
| 8554 | MediaMTX | RTSP | Streaming |
| 8888 | MediaMTX | HTTP | HLS playback |
| 8889 | MediaMTX | WebRTC | Browser streaming |
| 9002 | MinIO | HTTP | S3 API |
| 9003 | MinIO | HTTP | Web console |
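A quick way to spot conflicts before starting the stack is to grep a listener listing for each port. The sketch below takes `ss -ltn`-style output on stdin (simulated here), so it stays platform-neutral:

```shell
# Report whether a given port appears as a listener in `ss -ltn` output.
# Real usage: ss -ltn | port_busy 7880
port_busy() {
  if grep -Eq "[:.]$1([[:space:]]|\$)"; then
    echo "port $1: IN USE"
  else
    echo "port $1: free"
  fi
}

printf 'LISTEN 0 128 0.0.0.0:7880 0.0.0.0:*\n' | port_busy 7880   # -> port 7880: IN USE
```

Loop it over the ports in the table above before `docker-compose up`.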
---
Troubleshooting
Services Won't Start
Check logs
docker-compose logs
Check port conflicts
netstat -ano | findstr "7880"
netstat -ano | findstr "1935"
Restart all services
docker-compose restart
Can't Connect from OBS
1. Verify MediaMTX is running: docker ps | grep mediamtx
2. Check firewall allows port 1935
3. Test with: telnet localhost 1935
4. Check MediaMTX logs: docker-compose logs mediamtx
Processing Workers Failing
1. Verify API keys in .env file
2. Check worker logs: docker-compose logs transcoder thumbnail captions moderation
3. Rebuild images: docker-compose build --no-cache
4. Ensure shared volume exists: ls -la apps/streaming/processing/shared
High CPU Usage
Transcoding is CPU-intensive. To reduce load:
In docker-compose.yml, add to transcoder service:
deploy:
resources:
limits:
cpus: '2.0'
memory: 4G
---
Stopping Services
cd apps/streaming
Stop all services
docker-compose down
Stop and remove volumes
docker-compose down -v
Stop specific service
docker-compose stop livekit
---
Next Steps
1. Start Core Services: docker-compose up -d livekit mediamtx redis minio
2. Add API Keys: Create .env with OpenAI and Anthropic keys
3. Build Processing Workers: docker-compose build
4. Start All Services: docker-compose up -d
5. Test with OBS: Stream to rtmp://localhost:1935/live
6. Integrate Frontend: Use StreamPublisher/StreamViewer components
---
Cost Estimates
Free (Core Services Only):
- LiveKit: Free for development
- MediaMTX: Open source
- Redis: Free
- MinIO: Free
With Processing Workers:
- OpenAI API: ~$0.002/minute for captions + thumbnails
- Anthropic API: ~$0.001/minute for moderation
- Estimated: $3-5 per hour of processed streaming
Production (GKE):
- GKE Autopilot: $900-1,500/month
- Cloud SQL (Redis): $250/month
- Cloud Storage: $85/month
- Total: $1,235-1,835/month for 1M users
---
Ready to stream! 🎥🐕
Anuzoo Vultr Deployment Checklist
Complete step-by-step checklist for deploying Anuzoo to Vultr production infrastructure.
---
Pre-Deployment Phase
☐ Account Setup
- [ ] Create Vultr account at https://vultr.com
- [ ] Verify email address
- [ ] Add payment method (credit card or PayPal)
- [ ] Apply promotional credits if available
- [ ] Review billing settings
Reference: VULTR_ACCOUNT_SETUP.md → Part 1
---
☐ Gather API Keys
Stripe (Required)
- [ ] Sign up at https://dashboard.stripe.com
- [ ] Get Publishable Key (pk_live_...)
- [ ] Get Secret Key (sk_live_...)
- [ ] Create webhook endpoint
- [ ] Get Webhook Secret (whsec_...)
Pinecone (Required for AI Matching)
- [ ] Sign up at https://app.pinecone.io
- [ ] Get API Key
- [ ] Note Environment (e.g., us-west1-gcp)
- [ ] Create index: `anuzoo-embeddings` (512 dimensions, cosine)
OpenAI (Required for Embeddings)
- [ ] Sign up at https://platform.openai.com
- [ ] Create API Key (sk-...)
- [ ] Note key securely (shown only once)
SendGrid (Required for Email)
- [ ] Sign up at https://app.sendgrid.com
- [ ] Create API Key (SG....)
- [ ] Set up sender authentication
- [ ] Verify domain in SendGrid
Security Secrets
- [ ] Generate SECRET_KEY: `openssl rand -hex 32`
- [ ] Generate JWT_SECRET_KEY: `openssl rand -hex 32`
- [ ] Generate SESSION_SECRET_KEY: `openssl rand -hex 32`
Reference: VULTR_ACCOUNT_SETUP.md → Part 6
---
☐ Domain Configuration
- [ ] Purchase domain (if not already owned)
- [ ] Access domain registrar DNS settings
- [ ] Note domain registrar credentials
- [ ] Prepare to add A records (will do after server creation)
Reference: VULTR_ACCOUNT_SETUP.md → Part 4
---
☐ Local Preparation
- [ ] Clone repository to local machine
- [ ] Review production configuration files:
  - docker-compose.production.yml
  - .env.production.example
  - nginx.conf
  - scripts/deploy.sh
- [ ] Generate SSH key pair (if needed)
---
Server Provisioning Phase
☐ Create Vultr Server
- [ ] Log into Vultr dashboard
- [ ] Click Deploy + → Deploy New Server
- [ ] Select Cloud Compute
- [ ] Choose location: Ashburn, VA (IAD) or nearest to users
- [ ] Select OS: Ubuntu 22.04 LTS x64
- [ ] Select plan: High Performance AMD, 6 vCPU, 16GB RAM ($96/mo)
- [ ] Enable IPv6 (free)
- [ ] Enable Auto Backups (+$9.60/mo)
- [ ] Enable DDOS Protection (free)
- [ ] Set hostname: `anuzoo-prod-01`
- [ ] Set label: `Anuzoo Production`
- [ ] Add SSH key (if available)
- [ ] Click Deploy Now
- [ ] Wait 60-90 seconds for provisioning
- [ ] Note server IP address: ________________
Reference: VULTR_ACCOUNT_SETUP.md → Part 2
---
☐ Configure DNS
Add these DNS records at your domain registrar:
| Type | Name | Value | TTL |
|------|------|-------|-----|
| A | @ | YOUR_SERVER_IP | 600 |
| A | www | YOUR_SERVER_IP | 600 |
| A | api | YOUR_SERVER_IP | 600 |
- [ ] Add A record for root domain
- [ ] Add A record for www subdomain
- [ ] Add A record for api subdomain
- [ ] Wait 5-30 minutes for DNS propagation
- [ ] Test DNS: `nslookup yourdomain.com`
Reference: VULTR_ACCOUNT_SETUP.md → Part 4
---
☐ Configure Firewall
Option A: Vultr Firewall (Recommended)
- [ ] Go to Network → Firewall
- [ ] Create firewall group: `Anuzoo Production`
- [ ] Add inbound rule: TCP port 22 (SSH)
- [ ] Add inbound rule: TCP port 80 (HTTP)
- [ ] Add inbound rule: TCP port 443 (HTTPS)
- [ ] Add inbound rule: ICMP (ping)
- [ ] Add outbound rule: All traffic
- [ ] Attach firewall to server
Reference: VULTR_ACCOUNT_SETUP.md → Part 5
---
☐ SSH Access
- [ ] Get root password from Vultr dashboard
- [ ] Test SSH connection: `ssh root@YOUR_SERVER_IP`
- [ ] Add SSH key to server (if not done during provisioning)
- [ ] Test SSH key login
- [ ] Disable password auth (optional, recommended)
Reference: VULTR_ACCOUNT_SETUP.md → Part 3
---
Initial Server Setup Phase
☐ System Updates
SSH into server
ssh root@YOUR_SERVER_IP
Update system
apt-get update && apt-get upgrade -y
Install basic tools
apt-get install -y curl wget git vim htop
- [ ] Run system updates
- [ ] Install basic tools
- [ ] Reboot if kernel updated
---
☐ Install Docker
```bash
# Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

# Install Docker Compose
curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose

# Verify installation
docker --version
docker compose version
```
- [ ] Install Docker
- [ ] Install Docker Compose
- [ ] Verify installations
Reference: VULTR_MIGRATION_PLAN.md → Phase 1, Step 3
---
☐ Clone Repository
```bash
# Clone repository
cd /root
git clone https://github.com/yourusername/anuzoo.git
cd anuzoo

# Or if using a private repo:
git clone https://YOUR_TOKEN@github.com/yourusername/anuzoo.git
```
- [ ] Clone repository to /root/anuzoo
- [ ] Verify all files present
- [ ] Check current branch
---
☐ Configure Environment
```bash
# Copy environment template
cp .env.production.example .env.production

# Edit environment file
nano .env.production
```
Fill in all values from API keys gathered earlier:
- [ ] Database credentials (POSTGRES_USER, POSTGRES_PASSWORD, POSTGRES_DB)
- [ ] Redis URL
- [ ] SECRET_KEY (generated earlier)
- [ ] JWT_SECRET_KEY (generated earlier)
- [ ] SESSION_SECRET_KEY (generated earlier)
- [ ] STRIPE_SECRET_KEY
- [ ] STRIPE_PUBLISHABLE_KEY
- [ ] STRIPE_WEBHOOK_SECRET
- [ ] PINECONE_API_KEY
- [ ] PINECONE_ENVIRONMENT
- [ ] PINECONE_INDEX_NAME
- [ ] OPENAI_API_KEY
- [ ] SMTP_PASSWORD (SendGrid API key)
- [ ] SMTP_FROM_EMAIL
- [ ] FRONTEND_URL (https://yourdomain.com)
- [ ] ALLOWED_ORIGINS (https://yourdomain.com,https://www.yourdomain.com)
Reference: .env.production.example
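If the three secret values were not generated earlier, one way to produce them is with openssl (a sketch; any sufficiently long random hex string works):

```bash
# Generate 32 random bytes as 64 hex characters for each secret
SECRET_KEY=$(openssl rand -hex 32)
JWT_SECRET_KEY=$(openssl rand -hex 32)
SESSION_SECRET_KEY=$(openssl rand -hex 32)

# Print them for pasting into .env.production (do not commit these!)
echo "SECRET_KEY=$SECRET_KEY"
echo "JWT_SECRET_KEY=$JWT_SECRET_KEY"
echo "SESSION_SECRET_KEY=$SESSION_SECRET_KEY"
```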
---
☐ Create Data Directories
```bash
# Create required directories
mkdir -p /var/lib/anuzoo/{postgres,redis,models,uploads}
mkdir -p /var/log/anuzoo
mkdir -p /var/backups/anuzoo

# Set ownership
chown -R 1000:1000 /var/lib/anuzoo
```
- [ ] Create data directories
- [ ] Create log directories
- [ ] Create backup directories
- [ ] Set permissions
Reference: scripts/deploy.sh → create_directories()
---
Application Deployment Phase
☐ Deploy Application
```bash
# Make deploy script executable
chmod +x scripts/deploy.sh

# Start services
./scripts/deploy.sh start
```
- [ ] Make deploy script executable
- [ ] Run deployment
- [ ] Wait for services to start (~60 seconds)
- [ ] Check service status: docker compose -f docker-compose.production.yml ps
Expected Output: All services should show "Up" and "healthy"
Reference: VULTR_MIGRATION_PLAN.md → Phase 1, Step 5
---
☐ Verify Application
```bash
# Test API health endpoint
curl http://localhost:8001/health

# Check database
docker compose -f docker-compose.production.yml exec db pg_isready -U anuzoo_user

# Check Redis
docker compose -f docker-compose.production.yml exec redis redis-cli ping

# View logs
docker compose -f docker-compose.production.yml logs -f
```
- [ ] API health check returns success
- [ ] Database is ready
- [ ] Redis responds to ping
- [ ] No errors in logs
Reference: scripts/deploy.sh → health()
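Services can take a minute to become healthy, so a retry loop avoids false failures right after startup. This is a sketch (wait_healthy is a hypothetical helper, separate from the project's health() function):

```bash
# Retry a health command until it succeeds or the attempt budget runs out.
wait_healthy() {
  local tries="$1"; shift
  local i
  for ((i = 1; i <= tries; i++)); do
    if "$@" >/dev/null 2>&1; then
      echo "healthy after $i attempt(s)"
      return 0
    fi
    sleep 5
  done
  echo "unhealthy after $tries attempts" >&2
  return 1
}

# Example: wait up to ~60 seconds for the API to come up
# wait_healthy 12 curl -fsS http://localhost:8001/health
```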
---
Web Server Setup Phase
☐ Install NGINX
```bash
# Make install script executable
chmod +x scripts/install-nginx.sh

# Run installation (replace with your domain and email)
./scripts/install-nginx.sh yourdomain.com your@email.com
```
- [ ] Make install script executable
- [ ] Run NGINX installation
- [ ] Wait for SSL certificate installation
- [ ] Verify NGINX is running: systemctl status nginx
Reference: scripts/install-nginx.sh
---
☐ Build and Deploy Frontend
On your local machine:

```bash
cd /path/to/anuzoo
npm install
npm run build
```

Copy the dist folder to the server:

```bash
scp -r dist root@YOUR_SERVER_IP:/var/www/anuzoo/
```

Or build on the server:

```bash
ssh root@YOUR_SERVER_IP
cd /root/anuzoo
curl -fsSL https://deb.nodesource.com/setup_20.x | bash -
apt-get install -y nodejs
npm install
npm run build
cp -r dist /var/www/anuzoo/
```
- [ ] Build frontend (locally or on server)
- [ ] Copy to /var/www/anuzoo/dist
- [ ] Set permissions: chown -R www-data:www-data /var/www/anuzoo
Reference: VULTR_MIGRATION_PLAN.md → Phase 1, Step 7
---
☐ Verify Web Access
Test all URLs:
- [ ] https://yourdomain.com (should load frontend)
- [ ] https://www.yourdomain.com (should redirect to main)
- [ ] https://api.yourdomain.com/health (should return API health)
- [ ] https://api.yourdomain.com/docs (should show API documentation)
- [ ] SSL certificate valid (check browser)
- [ ] No mixed content warnings
- [ ] CORS working (test API calls from frontend)
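The URL checks above can be scripted with curl. This is a sketch (check_url is a hypothetical helper; the redirect status for HTTP-to-HTTPS is commonly 301 but depends on the nginx config):

```bash
# Assert that a URL returns the expected HTTP status code.
check_url() {
  local url="$1" expected="${2:-200}"
  local code
  code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
  if [ "$code" = "$expected" ]; then
    echo "OK  $url ($code)"
  else
    echo "FAIL $url (got $code, want $expected)" >&2
    return 1
  fi
}

# check_url https://yourdomain.com
# check_url https://api.yourdomain.com/health
# check_url http://yourdomain.com 301   # HTTP should redirect to HTTPS
```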
---
Monitoring Setup Phase
☐ Install Monitoring
```bash
# Make install script executable
chmod +x scripts/install-monitoring.sh

# Run installation (replace with your email)
./scripts/install-monitoring.sh your@email.com
```
- [ ] Make install script executable
- [ ] Run monitoring installation
- [ ] Verify cron jobs: cat /etc/cron.d/anuzoo-monitoring
- [ ] Test monitoring: ./scripts/monitor.sh check
- [ ] Test status dashboard: anuzoo-status
Reference: scripts/install-monitoring.sh
---
☐ Test Monitoring
- [ ] Check health: ./scripts/monitor.sh check
- [ ] Generate report: ./scripts/monitor.sh report
- [ ] View metrics: tail -f /var/log/anuzoo/metrics.log
- [ ] Test email alert: echo "Test" | mail -s "Test" your@email.com
Reference: MONITORING_GUIDE.md
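To illustrate the kind of threshold check a monitoring script performs (the real monitor.sh may work differently; disk_alert is a hypothetical name):

```bash
# Alert when disk usage meets or exceeds a percentage threshold.
disk_alert() {
  local used="$1" threshold="${2:-80}"
  if [ "$used" -ge "$threshold" ]; then
    echo "ALERT: disk ${used}% >= ${threshold}%"
    return 1
  fi
  echo "disk ${used}% OK"
}

# Feed it live data from df:
# disk_alert "$(df --output=pcent / | tail -1 | tr -dc 0-9)"
```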
---
Final Verification Phase
☐ End-to-End Testing
User Registration & Login
- [ ] Register new user account
- [ ] Verify email sent
- [ ] Log in successfully
- [ ] JWT token working
Core Features
- [ ] Create user profile
- [ ] Upload profile picture
- [ ] Create a post
- [ ] Like a post
- [ ] Comment on a post
- [ ] Send a message
- [ ] Create an event
- [ ] RSVP to event
Payment Flow (Test Mode)
- [ ] Confirm Stripe test-mode keys are set in .env.production
- [ ] Test ticket purchase
- [ ] Test subscription
- [ ] Test creator tipping
- [ ] Verify Stripe webhook
AI Features
- [ ] Test image upload
- [ ] Verify AI detection working
- [ ] Test pet matching
- [ ] Check vector embeddings
Reference: Test credentials in LOGIN_CREDENTIALS.md
---
☐ Performance Testing
```bash
# Install Apache Bench (if not installed)
apt-get install -y apache2-utils

# Test API performance (100 requests, 10 concurrent)
ab -n 100 -c 10 https://api.yourdomain.com/health

# Monitor during test
./scripts/monitor.sh stats
```
- [ ] API response time < 200ms
- [ ] No errors during load test
- [ ] CPU usage acceptable under load
- [ ] Memory usage stable
---
☐ Security Audit
- [ ] HTTPS enforced (HTTP redirects to HTTPS)
- [ ] SSL certificate valid and trusted
- [ ] Security headers present (check with https://securityheaders.com)
- [ ] CORS properly configured
- [ ] Rate limiting working
- [ ] Firewall rules correct
- [ ] No exposed secrets in logs
- [ ] Database not publicly accessible
- [ ] Redis not publicly accessible
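The security-headers item can be checked from the command line as well as via securityheaders.com. This is a sketch (required_headers_present is a hypothetical helper; the header list is a minimal example, not exhaustive):

```bash
# Verify that key security headers appear in a captured HTTP response.
required_headers_present() {
  local headers="$1" h missing=0
  for h in strict-transport-security x-content-type-options x-frame-options; do
    if ! printf '%s\n' "$headers" | grep -qi "^$h:"; then
      echo "missing: $h"
      missing=1
    fi
  done
  return $missing
}

# Live check:
# required_headers_present "$(curl -sI https://yourdomain.com)"
```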
---
☐ Backup Testing
```bash
# Create manual backup
./scripts/deploy.sh backup

# List backups
ls -lh /var/backups/anuzoo/

# Test restore (on test data only!)
./scripts/deploy.sh restore /var/backups/anuzoo/db_YYYYMMDD_HHMMSS.sql.gz
```
- [ ] Manual backup successful
- [ ] Backup file created
- [ ] Backup file not empty
- [ ] Test restore successful
Reference: scripts/deploy.sh → backup() and restore()
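Before trusting a backup, it is worth checking that the file is non-empty and a valid gzip archive, which covers the "backup file not empty" item above. A sketch (verify_backup is a hypothetical helper, not part of deploy.sh):

```bash
# Sanity-check a backup file: exists, non-empty, valid gzip.
verify_backup() {
  local f="$1"
  [ -s "$f" ] || { echo "FAIL: $f missing or empty" >&2; return 1; }
  gzip -t "$f" || { echo "FAIL: $f is not a valid gzip archive" >&2; return 1; }
  echo "OK: $f ($(du -h "$f" | cut -f1))"
}

# verify_backup /var/backups/anuzoo/db_YYYYMMDD_HHMMSS.sql.gz
```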
---
Post-Deployment Phase
☐ Documentation
- [ ] Update README.md with production URLs
- [ ] Document any environment-specific changes
- [ ] Save server IP and credentials securely
- [ ] Document DNS configuration
- [ ] Create incident response plan
---
☐ Team Handoff
- [ ] Share production URLs with team
- [ ] Provide SSH access to authorized team members
- [ ] Share monitoring dashboard access
- [ ] Document escalation procedures
- [ ] Schedule team training on deployment scripts
---
☐ Monitoring Setup
- [ ] Set up external uptime monitoring (UptimeRobot, Pingdom)
- [ ] Configure Sentry for error tracking (optional)
- [ ] Set up performance monitoring (optional)
- [ ] Configure log aggregation (optional)
- [ ] Enable real-time alerts (email, SMS, Slack)
---
☐ Ongoing Maintenance
Daily:
- [ ] Check anuzoo-status
- [ ] Review error logs
- [ ] Monitor user growth
Weekly:
- [ ] Review daily health reports
- [ ] Check backup integrity
- [ ] Review performance metrics
- [ ] Update dependencies (if needed)
Monthly:
- [ ] Review resource usage trends
- [ ] Test disaster recovery
- [ ] Security updates
- [ ] Cost review
- [ ] Capacity planning
Reference: MONITORING_GUIDE.md → Best Practices
---
Rollback Plan
If deployment fails:
```bash
# Stop services
./scripts/deploy.sh stop

# Restore from backup
./scripts/deploy.sh restore /var/backups/anuzoo/db_LATEST.sql.gz

# Check logs
docker compose -f docker-compose.production.yml logs

# Restart services
./scripts/deploy.sh start
```
- [ ] Document rollback procedure
- [ ] Test rollback on staging (if available)
- [ ] Keep previous version accessible
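During an incident it helps to select the newest backup automatically instead of typing a timestamp. A sketch, assuming backups follow the db_*.sql.gz naming shown above (latest_backup is a hypothetical helper):

```bash
# Print the path of the most recently modified backup in a directory.
latest_backup() {
  ls -t "${1:-/var/backups/anuzoo}"/db_*.sql.gz 2>/dev/null | head -n1
}

# ./scripts/deploy.sh restore "$(latest_backup)"
```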
---
Success Criteria
Deployment is successful when:
- ✅ All services running and healthy
- ✅ Frontend accessible via HTTPS
- ✅ API responding to requests
- ✅ Database accepting connections
- ✅ Redis caching working
- ✅ SSL certificates valid
- ✅ Monitoring and alerts configured
- ✅ Backups running automatically
- ✅ No critical errors in logs
- ✅ End-to-end features working
- ✅ Performance within acceptable ranges
- ✅ Security audit passed
---
Quick Reference
Important Files
| File | Purpose |
|------|---------|
| .env.production | Environment variables (SECRETS!) |
| docker-compose.production.yml | Service orchestration |
| nginx.conf | Web server config |
| scripts/deploy.sh | Deployment automation |
| scripts/monitor.sh | Monitoring scripts |
Important Commands
| Command | Purpose |
|---------|---------|
| ./scripts/deploy.sh start | Start all services |
| ./scripts/deploy.sh stop | Stop all services |
| ./scripts/deploy.sh restart | Restart all services |
| ./scripts/deploy.sh status | Check service status |
| ./scripts/deploy.sh logs | View logs |
| ./scripts/deploy.sh backup | Backup database |
| ./scripts/deploy.sh health | Run health checks |
| ./scripts/monitor.sh check | Run monitoring check |
| anuzoo-status | Quick status dashboard |
| systemctl restart nginx | Restart NGINX |
Important Directories
| Directory | Contents |
|-----------|----------|
| /var/lib/anuzoo/postgres | Database data |
| /var/lib/anuzoo/redis | Redis data |
| /var/lib/anuzoo/models | AI models |
| /var/lib/anuzoo/uploads | User uploads |
| /var/log/anuzoo | Application logs |
| /var/backups/anuzoo | Database backups |
| /var/www/anuzoo/dist | Frontend files |
Support Resources
- Vultr Support: https://my.vultr.com/support/
- Vultr Docs: https://docs.vultr.com
- Project Docs: See README.md
- Monitoring Guide: MONITORING_GUIDE.md
- Migration Plan: VULTR_MIGRATION_PLAN.md
---
Completion
Once all checkboxes are complete:
🎉 Congratulations! Anuzoo is live on Vultr!
Share the production URLs:
- Website: https://yourdomain.com
- API: https://api.yourdomain.com
- API Docs: https://api.yourdomain.com/docs
---
Deployment Date: ________________ Deployed By: ________________ Server IP: ________________ Notes: ________________