Architecture Diagram
Components
1. Terraform
Purpose: Infrastructure as Code for DigitalOcean provisioning
Responsibilities:
- Create DigitalOcean droplet
- Configure firewall rules
- Allocate reserved IP
- Set up SSH access
- terraform/main.tf: Resource definitions
- terraform/variables.tf: Input variables
- terraform/outputs.tf: Output values
2. Ansible
Purpose: Server configuration and service deployment
Responsibilities:
- Install Docker and Docker Compose
- Install and configure nginx
- Deploy orchestrator service
- Set up systemd services
- Configure log rotation
Roles:
- docker: Docker installation and configuration
- nginx: Nginx installation and preview config structure
- orchestrator: Orchestrator service deployment
3. Orchestrator Service
Purpose: Core service handling webhooks and managing deployments
Responsibilities:
- Receive and verify GitHub webhooks
- Clone repositories
- Detect framework (NestJS, Go, Laravel, Rust, Python)
- Generate Docker Compose files
- Build and start containers
- Configure nginx routing
- Post comments to GitHub PRs
- Cleanup stale deployments
Key modules:
- webhook-handler.ts: Webhook processing and routing
- docker-manager.ts: Docker operations
- nginx-manager.ts: Nginx configuration management
- github-client.ts: GitHub API interactions
- cleanup-service.ts: Scheduled cleanup tasks
- deployment-tracker.ts: Deployment state management
4. CLI Tool
Purpose: User-facing interface for setup and management
Responsibilities:
- Initialize configuration
- Run Terraform and Ansible
- Create GitHub webhooks
- Sync orchestrator code to server (after setup)
- Check deployment status
- Destroy infrastructure
Commands:
- init: Create configuration file
- setup: Deploy infrastructure
- sync: Sync orchestrator code to server (build locally, rsync, restart). Use after setup when you change orchestrator code.
- status: Check system status
- destroy: Teardown infrastructure
Data Flow
Deployment Flow
Update Flow
Cleanup Flow
Port Allocation Strategy
- Global pool: App ports start at 8000, DB ports at 9000. Each new deployment gets the next free app port and next free db port (keyed by deployment id).
- Tracking: Port allocations are tracked in the deployment store’s portAllocations map (keyed by deployment id) and are released on cleanup so ports are reused correctly. Allocation excludes host ports currently in use by running Docker containers, so failed deployments whose containers are still running do not cause port collisions (see the sketch after this list).
- Allows many deployments across multiple repos without collision.
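A minimal sketch of how this allocation could look, assuming a portAllocations map keyed by deployment id and a helper that reports host ports already bound by running containers; the module and function names are illustrative, not the actual implementation:

```typescript
// port-allocator.ts (sketch) -- names and store shape are assumptions.
const APP_PORT_START = 8000;
const DB_PORT_START = 9000;

interface PortAllocation {
  appPort: number;
  dbPort: number;
}

// Keyed by deployment id ({projectSlug}-{prNumber}), mirroring the deployment store.
const portAllocations = new Map<string, PortAllocation>();

/** Returns the lowest port >= start that is not already taken. */
function nextFreePort(start: number, taken: Set<number>): number {
  let port = start;
  while (taken.has(port)) port++;
  return port;
}

export function allocatePorts(deploymentId: string, portsInUseByDocker: Set<number>): PortAllocation {
  const existing = portAllocations.get(deploymentId);
  if (existing) return existing; // redeploys of the same PR reuse their ports

  // Exclude ports held by tracked deployments and by any still-running containers.
  const taken = new Set<number>(portsInUseByDocker);
  for (const a of portAllocations.values()) {
    taken.add(a.appPort);
    taken.add(a.dbPort);
  }

  const appPort = nextFreePort(APP_PORT_START, taken);
  taken.add(appPort);
  const dbPort = nextFreePort(DB_PORT_START, taken);

  const allocation: PortAllocation = { appPort, dbPort };
  portAllocations.set(deploymentId, allocation);
  return allocation;
}

export function releasePorts(deploymentId: string): void {
  // Called on cleanup so the ports return to the pool.
  portAllocations.delete(deploymentId);
}
```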
Routing Strategy
Path-based routing: /{PROJECT_SLUG}/pr-{PR_NUMBER}/
- Project slug: From repo owner/name (e.g. myorg-myapp). Avoids collisions when multiple repos have the same PR number.
- Example: http://SERVER_IP/myorg-myapp/pr-12/
- nginx proxies to http://localhost:{APP_PORT}/
- Path prefix is stripped in the proxy configuration
- Subdomain routing (e.g. pr-123.server.com) requires DNS configuration.
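As a rough illustration of the path-based routing above, nginx-manager.ts might render a per-preview location block like the one below. The template and function name are assumptions; only the /{PROJECT_SLUG}/pr-{PR_NUMBER}/ scheme and the prefix stripping come from this document.

```typescript
// nginx-manager.ts (sketch) -- renders one preview's nginx location block.
export function renderPreviewLocation(projectSlug: string, prNumber: number, appPort: number): string {
  // The trailing slash on proxy_pass makes nginx replace the matched prefix with "/",
  // so the app behind the proxy sees requests rooted at "/".
  return `
location /${projectSlug}/pr-${prNumber}/ {
    proxy_pass http://localhost:${appPort}/;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
`;
}

// Example: renderPreviewLocation("myorg-myapp", 12, 8000) routes
// http://SERVER_IP/myorg-myapp/pr-12/ to http://localhost:8000/.
```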
Security
Webhook Security
- Signature Verification: HMAC SHA256 verification of webhook payloads (see the sketch after this list)
- Repository Whitelist: Only allowed repositories can trigger deployments
- Input Sanitization: PR numbers and branch names are validated
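A minimal sketch of the HMAC SHA256 check using Node's crypto module; the function name and how the secret is loaded are assumptions:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Verifies GitHub's X-Hub-Signature-256 header against the raw request body.
export function verifyWebhookSignature(
  rawBody: Buffer,
  signatureHeader: string | undefined,
  secret: string
): boolean {
  if (!signatureHeader) return false;
  const expected = "sha256=" + createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  // timingSafeEqual throws if lengths differ, so compare lengths first.
  return a.length === b.length && timingSafeEqual(a, b);
}
```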
Container Security
- Resource Limits: CPU and memory limits per container
- Non-root Users: Containers run as non-root users
- Network Isolation: Containers are isolated on Docker network
- Health Checks: Containers must pass health checks before being routed
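Tying these constraints together, the Compose service that docker-manager.ts generates for a preview might look roughly like the sketch below. Only the 512MB/0.5 CPU limits, the non-root requirement, network isolation, and the /health check come from this document; the container port, user id, and health-check cadence are assumptions.

```typescript
// docker-manager.ts (sketch) -- one service entry in a generated docker-compose file.
export function previewServiceConfig(deploymentId: string, imageTag: string, appPort: number) {
  return {
    image: imageTag,
    user: "1000:1000",                      // run as a non-root user (assumed uid:gid)
    mem_limit: "512m",                      // memory cap per container
    cpus: 0.5,                              // CPU cap per container
    networks: [`preview-${deploymentId}`],  // isolated per-deployment network (must also be declared top-level)
    ports: [`${appPort}:3000`],             // host appPort -> container port (container port assumed)
    healthcheck: {
      test: ["CMD", "curl", "-f", "http://localhost:3000/health"], // assumes curl exists in the image
      interval: "5s",
      timeout: "3s",
      retries: 12,                          // roughly the documented 60s window at 5s intervals
    },
  };
}
```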
Infrastructure Security
- SSH Key Authentication: Only SSH key access to droplet
- Firewall Rules: Only necessary ports open (22, 80, 443)
- Internal API: Orchestrator API not exposed publicly (internal port 3000)
- Keychain Storage: Sensitive tokens stored in OS keychain
Deployment Tracking
Deployments are tracked in a JSON file (/opt/preview-deployer/deployments.json), keyed by deployment id ({projectSlug}-{prNumber}).
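A hedged sketch of what one entry might contain, based on the fields mentioned elsewhere in this document (status, ports, TTL); the exact field names are assumptions:

```typescript
// deployment-tracker.ts (sketch) -- shape of one entry in deployments.json (fields assumed).
interface DeploymentRecord {
  projectSlug: string;                        // e.g. "myorg-myapp"
  prNumber: number;
  status: "building" | "running" | "failed";  // documented status values
  appPort: number;                            // from the global pool starting at 8000
  dbPort: number;                             // from the global pool starting at 9000
  createdAt: string;                          // ISO timestamp, used for the 7-day TTL cleanup
}

// Example, keyed by "{projectSlug}-{prNumber}":
const example: Record<string, DeploymentRecord> = {
  "myorg-myapp-12": {
    projectSlug: "myorg-myapp",
    prNumber: 12,
    status: "running",
    appPort: 8000,
    dbPort: 9000,
    createdAt: "2024-01-01T00:00:00.000Z",
  },
};
```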
Scaling Considerations
Current Limitations
- Single Server: All previews run on one droplet
- Resource Limits: Limited by droplet size
- Port Range: Global port pool caps total concurrent previews at roughly ~56k
Future Scaling Options
- Horizontal Scaling: Multiple droplets with load balancer
- Kubernetes: Container orchestration for better resource management
- Subdomain Routing: DNS-based routing instead of path-based
- Database Pooling: Shared database instances for cost savings
- Caching: Docker image caching for faster builds
Monitoring
Current Monitoring
- Logs: Orchestrator logs to /opt/preview-deployer/logs/
- Systemd: Service status via systemctl status
- Health Endpoint: /health endpoint for basic health checks
Future Monitoring
- Metrics: Prometheus metrics export
- Alerting: Alertmanager integration
- Dashboard: Grafana dashboard for visualization
- Tracing: Distributed tracing for request flows
Error Handling
Retry Logic
- Transient Failures: Max 3 retries with exponential backoff
- Health Checks: 60-second timeout with 5-second intervals
- GitHub API: Automatic retry on rate limits
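A minimal sketch of the retry behaviour described above (up to 3 retries with exponential backoff); the helper name and delay values are assumptions:

```typescript
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Retries a transient operation up to maxRetries times with exponential backoff.
export async function withRetry<T>(
  operation: () => Promise<T>,
  maxRetries = 3,
  baseDelayMs = 1_000
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      if (attempt === maxRetries) break;
      await sleep(baseDelayMs * 2 ** attempt); // 1s, 2s, 4s, ...
    }
  }
  throw lastError;
}
```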
Error Reporting
- GitHub Comments: Failure comments posted to PRs (see the sketch after this list)
- Logging: Comprehensive error logging with context
- Status Tracking: Deployment status tracked (building/running/failed)
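For example, github-client.ts could post a failure comment through the GitHub REST API, shown here with @octokit/rest; the comment wording and client wiring are assumptions:

```typescript
import { Octokit } from "@octokit/rest";

// Posts a failure comment on the PR so the author sees the result without checking server logs.
export async function postFailureComment(
  token: string,
  owner: string,
  repo: string,
  prNumber: number,
  reason: string
): Promise<void> {
  const octokit = new Octokit({ auth: token });
  await octokit.rest.issues.createComment({
    owner,
    repo,
    issue_number: prNumber, // PRs use the issues API for comments
    body: `Preview deployment failed: ${reason}`,
  });
}
```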
Cost Management
Resource Limits
- Default TTL: 7 days (see the sketch after this list)
- Max Concurrent: 10 previews (configurable)
- Container Limits: 512MB RAM, 0.5 CPU per container
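A rough sketch of how cleanup-service.ts might apply the TTL; the interval, field names, and wiring are assumptions:

```typescript
// cleanup-service.ts (sketch) -- finds previews older than the configured TTL.
const TTL_MS = 7 * 24 * 60 * 60 * 1000; // default TTL: 7 days

interface TrackedDeployment {
  id: string;        // "{projectSlug}-{prNumber}"
  createdAt: string; // ISO timestamp
}

export function findStaleDeployments(deployments: TrackedDeployment[], now = Date.now()): TrackedDeployment[] {
  return deployments.filter((d) => now - Date.parse(d.createdAt) > TTL_MS);
}

// A scheduler would call findStaleDeployments periodically, stop the containers,
// remove the nginx config, and release the deployment's ports.
```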
Cost Optimization
- Auto Cleanup: Stale previews cleaned automatically
- Resource Limits: Prevent resource exhaustion
- Efficient Builds: Docker layer caching
- Small Droplets: Use smallest droplet size that fits needs