Deployment Guide
This guide covers deploying AuthOS with different database backends. The platform supports SQLite, PostgreSQL, and MySQL through compile-time feature flags.
Database Backend Support
AuthOS uses Rust feature flags to support multiple database backends. Each backend requires a specific build configuration.
Available Backends
| Backend | Feature Flag | Default | Production Ready |
|---|---|---|---|
| SQLite | db_sqlite | Yes | Yes (single-server) |
| PostgreSQL | db_psql | No | Yes |
| MySQL | db_mysql | No | Yes |
Choosing a Backend
SQLite - Best for:
- Single-server deployments
- Development and testing
- Low-to-medium traffic (<10k users)
- Simple deployment without external database
PostgreSQL - Best for:
- Multi-server deployments
- High availability requirements
- High traffic (>10k users)
- Advanced database features
MySQL - Best for:
- Existing MySQL infrastructure
- Compatibility with MySQL tools
- Multi-server deployments
Building from Source
Build with SQLite (Default)
cargo build --release
# OR explicitly
cargo build --release --no-default-features --features db_sqlite
Build with PostgreSQL
cargo build --release --no-default-features --features db_psql
Build with MySQL
cargo build --release --no-default-features --features db_mysql
Binary Output
The compiled binary is named based on the selected backend:
target/release/sso_sqlite # SQLite build
target/release/sso_psql # PostgreSQL build
target/release/sso_mysql # MySQL build
Environment Configuration
Database Connection Strings
Set DATABASE_URL in your .env file to match your chosen backend:
SQLite
DATABASE_URL=sqlite:./data/data.db
PostgreSQL
DATABASE_URL=postgres://username:password@localhost:5432/sso
MySQL
DATABASE_URL=mysql://username:password@localhost:3306/sso
Complete .env Example
# Database Configuration
# IMPORTANT: DATABASE_URL must match the backend you built with
DATABASE_URL=sqlite:./data/data.db
# JWT Configuration (RS256)
JWT_PRIVATE_KEY_BASE64=your-base64-encoded-rsa-private-key
JWT_PUBLIC_KEY_BASE64=your-base64-encoded-rsa-public-key
JWT_KID=sso-key-2025-01-15
JWT_EXPIRATION_HOURS=24
# Platform OAuth (at least one required for admin login)
PLATFORM_GITHUB_CLIENT_ID=your-github-client-id
PLATFORM_GITHUB_CLIENT_SECRET=your-github-client-secret
# Optional: Additional OAuth providers
# PLATFORM_GOOGLE_CLIENT_ID=your-google-client-id
# PLATFORM_GOOGLE_CLIENT_SECRET=your-google-client-secret
# PLATFORM_MICROSOFT_CLIENT_ID=your-microsoft-client-id
# PLATFORM_MICROSOFT_CLIENT_SECRET=your-microsoft-client-secret
# Server Configuration
SERVER_HOST=0.0.0.0
SERVER_PORT=3000
BASE_URL=http://localhost:3000
# Platform Owner (auto-created on first start)
PLATFORM_OWNER_EMAIL=admin@example.com
# Encryption Key (for BYOO feature)
ENCRYPTION_KEY=your-64-character-hex-encryption-key
# Optional: Stripe Integration
# STRIPE_SECRET_KEY=sk_live_...
# STRIPE_WEBHOOK_SECRET=whsec_...
Critical: Encryption Key Management
The ENCRYPTION_KEY environment variable is critical for the BYOO (Bring Your Own OAuth) feature:
- Purpose: Encrypts and decrypts OAuth client secrets and SMTP passwords stored in the database
- Format: 64-character hexadecimal string (32 bytes, 256-bit key)
- Generation: Use `openssl rand -hex 32` to generate a secure key
CRITICAL WARNINGS:
- Never lose this key: If the encryption key is lost or changed, all encrypted OAuth client secrets and SMTP passwords become permanently unrecoverable
- Never change this key: Changing the key will break authentication for all organizations using BYOO
- Back up this key: Store the encryption key securely with your database backups
- Environment-specific: Use different keys for development, staging, and production environments
Key Recovery: There is no recovery mechanism if the encryption key is lost. You would need to:
- Have all organizations re-enter their OAuth client secrets
- Have all organizations re-enter their SMTP passwords
- This effectively breaks the service for all BYOO users
Best Practices:
- Store the encryption key in a secure secrets manager (AWS Secrets Manager, HashiCorp Vault, etc.)
- Include the encryption key in your backup and disaster recovery procedures
- Document the key location and management procedures
- Restrict access to the encryption key to essential personnel only
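To catch a malformed key before it reaches production, a startup script can validate the format described above. A minimal POSIX-shell sketch; the generation step stands in for loading the key from your secrets manager:

```shell
#!/bin/sh
# Generate (or, in practice, load) the key, then verify it is exactly
# 64 lowercase hexadecimal characters (32 bytes, 256-bit key).
ENCRYPTION_KEY=$(openssl rand -hex 32)
case "$ENCRYPTION_KEY" in
  *[!0-9a-f]*) echo "ENCRYPTION_KEY contains non-hex characters" >&2; exit 1 ;;
esac
if [ "${#ENCRYPTION_KEY}" -eq 64 ]; then
  echo "ENCRYPTION_KEY format ok"
else
  echo "ENCRYPTION_KEY must be exactly 64 hex characters" >&2
  exit 1
fi
```

Running this as a pre-start hook fails fast on a truncated or re-encoded key instead of surfacing as undecryptable OAuth secrets later.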
Docker Deployment
Using Docker Compose
The docker-compose.yml file includes pre-configured services for each database backend.
SQLite Deployment
# Build and start
docker-compose up --build sso-sqlite
# Or in detached mode
docker-compose up --build -d sso-sqlite
Characteristics:
- Single container deployment
- Data persisted in volume: `sso-sqlite-data`
- No external database required
- Simplest deployment option
PostgreSQL Deployment
# Start PostgreSQL database first
docker-compose up -d postgres
# Wait for database to be ready, then start application
docker-compose up --build sso-psql
Characteristics:
- Two containers: `postgres` and `sso-psql`
- Database persisted in volume: `sso-postgres-data`
- Network isolation via `sso-network`
- Supports high availability
MySQL Deployment
# Start MySQL database first
docker-compose up -d mysql
# Wait for database to be ready, then start application
docker-compose up --build sso-mysql
Characteristics:
- Two containers: `mysql` and `sso-mysql`
- Database persisted in volume: `sso-mysql-data`
- Network isolation via `sso-network`
- Supports replication
Custom Docker Build
Build for a specific database backend:
# SQLite
docker build --build-arg DATABASE_BACKEND=sqlite -t sso:sqlite .
# PostgreSQL
docker build --build-arg DATABASE_BACKEND=psql -t sso:psql .
# MySQL
docker build --build-arg DATABASE_BACKEND=mysql -t sso:mysql .
Docker Environment Variables
Override settings via environment variables in docker-compose.yml:
services:
sso-sqlite:
build:
args:
DATABASE_BACKEND: sqlite
environment:
DATABASE_URL: sqlite:/app/data/data.db
SERVER_PORT: 3000
JWT_EXPIRATION_HOURS: 24
volumes:
- sso-sqlite-data:/app/data
Database Migrations
Automatic Migration
Migrations run automatically on application startup. No manual intervention required.
Startup Sequence:
- Application starts
- Database connection established
- Pending migrations detected
- Migrations executed sequentially
- Application ready to serve requests
Migration Implementation
Migrations are implemented using SeaORM migrations and compiled into the binary:
- Location: `api/migration/src/` (Rust files)
- Format: SeaORM migration Rust code (not SQL files)
- Execution: Embedded in binary, runs automatically on startup
- Tracking: Migration state maintained in the `_seaql_migrations` table
Migration Files
The project uses Rust-based migration files:
api/migration/src/
├── main.rs # Migration runner
├── lib.rs # Migration library
├── m20240101_000001_create_all_tables.rs
├── m20251116_000001_add_stripe_price_id_to_plans.rs
├── m20251117_132952_add_client_secret_hash_to_services.rs
└── m20251119_000001_fix_saml_active_key_constraint.rs
Migration Logging
Monitor startup logs to confirm successful migration:
Running database migrations...
Applied migration: m20240101_000001_create_all_tables
Applied migration: m20251116_000001_add_stripe_price_id_to_plans
Applied migration: m20251117_132952_add_client_secret_hash_to_services
Applied migration: m20251119_000001_fix_saml_active_key_constraint
Database migrations completed
Starting AuthOS API...
Important Notes
- Never run migrations manually - Migrations are compiled into the binary and run automatically
- Never alter the database schema directly - Create new migration files for schema changes
- Never look for SQL files - This project does not use SQL migration files
- Backup before major upgrades - Always backup production databases before deploying new versions
Production Deployment
Prerequisites
1. Database Backend Decision
   - Choose SQLite, PostgreSQL, or MySQL
   - Ensure database server is running (if not using SQLite)
2. Generate Secrets
   # RSA keys for JWT
   openssl genrsa -out private.pem 2048
   openssl rsa -in private.pem -pubout -out public.pem
   # Base64 encode
   JWT_PRIVATE_KEY_BASE64=$(base64 -i private.pem | tr -d '\n')
   JWT_PUBLIC_KEY_BASE64=$(base64 -i public.pem | tr -d '\n')
   # Encryption key
   ENCRYPTION_KEY=$(openssl rand -hex 32)
3. Configure OAuth Provider
   - Create OAuth app on GitHub, Google, or Microsoft
   - Set callback URL: `https://your-domain.com/auth/admin/{provider}/callback`
   - Note client ID and secret
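Before deploying, it is worth confirming that the base64-encoded key material round-trips cleanly, since a key mangled by line wrapping is a common cause of startup failures. A sketch using the same OpenSSL tooling; the temp-file path is illustrative, and `base64 -d` assumes GNU coreutils:

```shell
#!/bin/sh
# Generate a throwaway key, encode it the way the .env example expects,
# then decode it back and let OpenSSL validate the key structure.
openssl genrsa -out /tmp/jwt_check.pem 2048 2>/dev/null
JWT_PRIVATE_KEY_BASE64=$(base64 < /tmp/jwt_check.pem | tr -d '\n')
printf '%s' "$JWT_PRIVATE_KEY_BASE64" | base64 -d | openssl rsa -check -noout \
  && echo "JWT key round-trips correctly"
```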
Deployment Steps
1. Build Application
Choose your backend and build:
# For PostgreSQL
cargo build --release --no-default-features --features db_psql
2. Configure Environment
Create .env file with production values:
DATABASE_URL=postgres://sso:secure_password@db.internal:5432/sso
JWT_PRIVATE_KEY_BASE64=LS0tLS1CRUdJTiBSU0EgUF...
JWT_PUBLIC_KEY_BASE64=LS0tLS1CRUdJTiBQVUJMSUMg...
JWT_KID=sso-prod-key-2025
PLATFORM_GITHUB_CLIENT_ID=Ov23liAbc123...
PLATFORM_GITHUB_CLIENT_SECRET=1a2b3c4d5e...
PLATFORM_OWNER_EMAIL=admin@company.com
ENCRYPTION_KEY=a1b2c3d4e5f6...
BASE_URL=https://sso.company.com
3. Start Application
# Using compiled binary
./target/release/sso_psql
# Or using Docker
docker-compose up -d sso-psql
4. Verify Deployment
# Health check
curl http://localhost:3000/health
# Expected response
{"status":"ok","version":"0.1.0"}
# Database connectivity check
curl http://localhost:3000/health/ready
# Expected response
{"status":"ready","version":"0.1.0","database":"connected"}
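A deployment script can gate on the readiness payload rather than eyeballing it. A minimal sketch that checks a captured response with `grep`; the sample payload mirrors the expected response above, and in a live check you would substitute the output of `curl -fsS http://localhost:3000/health/ready`:

```shell
#!/bin/sh
# Sample payload stands in for live curl output during a dry run.
RESPONSE='{"status":"ready","version":"0.1.0","database":"connected"}'
if printf '%s' "$RESPONSE" | grep -q '"status":"ready"'; then
  echo "deployment healthy"
else
  echo "deployment not ready" >&2
  exit 1
fi
```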
Database-Specific Configuration
SQLite Optimization
SQLite is optimized for performance with:
WAL Mode: Write-Ahead Logging enabled
PRAGMA journal_mode=WAL;
Aggressive Checkpointing: Every 10 seconds via background job
PRAGMA wal_checkpoint(TRUNCATE);
Connection Pool: 4 worker threads in multi-threaded runtime
Best Practices:
- Mount database file on SSD for best performance
- Ensure sufficient disk space for WAL growth
- Regular backups of both `.db` and `.db-wal` files
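If you take plain file copies rather than using SQLite's `.backup` command, checkpoint the WAL first so the main database file contains all committed writes. A sketch assuming the `sqlite3` CLI and a database at `./data.db`:

```shell
#!/bin/sh
# Flush the WAL into the main database file and truncate it,
# then copy the now-complete .db file.
sqlite3 data.db "PRAGMA wal_checkpoint(TRUNCATE);"
cp data.db data.db.backup
```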
PostgreSQL Configuration
Recommended postgresql.conf settings:
# Connection Settings
max_connections = 100
shared_buffers = 256MB
effective_cache_size = 1GB
maintenance_work_mem = 64MB
# Write Performance
wal_buffers = 16MB
checkpoint_completion_target = 0.9
random_page_cost = 1.1 # For SSD
# Logging
log_statement = 'all' # Development only
log_duration = on
Connection Pool:
- Default pool size: 10 connections
- Max connections per instance: 20
- Idle timeout: 5 minutes
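As a back-of-envelope check, the pool settings above imply a floor for PostgreSQL's `max_connections`: each instance can hold up to its per-instance maximum, plus headroom for maintenance sessions. A minimal sketch; the instance count and headroom figures are hypothetical:

```shell
#!/bin/sh
# Three app instances, each allowed up to 20 connections (the per-instance
# max above), plus headroom for superuser, backup, and monitoring sessions.
INSTANCES=3
POOL_MAX=20
HEADROOM=10
echo $(( INSTANCES * POOL_MAX + HEADROOM ))   # keep this below max_connections
```

With these numbers the floor is 70, comfortably under the recommended `max_connections = 100`; revisit the arithmetic whenever you add instances.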
MySQL Configuration
Recommended my.cnf settings:
[mysqld]
# InnoDB Settings
innodb_buffer_pool_size = 256M
innodb_log_file_size = 64M
innodb_flush_log_at_trx_commit = 2
# Connection Settings
max_connections = 100
wait_timeout = 300
# Character Set
character-set-server = utf8mb4
collation-server = utf8mb4_unicode_ci
Connection Pool:
- Default pool size: 10 connections
- Max connections per instance: 20
High Availability
Multi-Instance Deployment
The application supports horizontal scaling with shared database:
Load Balancer
|
+----------------+----------------+
| | |
Instance 1 Instance 2 Instance 3
| | |
+----------------+----------------+
|
PostgreSQL/MySQL
(with replication)
Requirements:
- PostgreSQL or MySQL (not SQLite)
- Shared database with replication
- Session state in database (stateless application)
- Load balancer with health checks
Configuration:
- Point all instances to same database
- Each instance runs background jobs (idempotent)
- Use health endpoints for load balancer checks
Database Replication
PostgreSQL Streaming Replication
# Primary server
wal_level = replica
max_wal_senders = 3
wal_keep_size = 64MB
# Standby server
primary_conninfo = 'host=primary port=5432 user=replication password=...'
hot_standby = on
MySQL Replication
# Primary server
server-id = 1
log_bin = /var/log/mysql/mysql-bin.log
binlog_format = ROW
# Replica server
server-id = 2
relay_log = /var/log/mysql/relay-bin
read_only = 1
Monitoring and Observability
Health Checks
Configure monitoring to poll health endpoints:
# Liveness probe (is server running?)
GET /health/live
# Returns: 200 OK
# Readiness probe (is database connected?)
GET /health/ready
# Returns: 200 OK or 503 Service Unavailable
Logs
Application logs to stdout/stderr. Configure log aggregation:
Docker:
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "3"
Kubernetes:
# Logs automatically collected by cluster
Prometheus Metrics
The application exposes Prometheus metrics at GET /metrics. Configure Prometheus to scrape this endpoint:
scrape_configs:
- job_name: 'sso'
static_configs:
- targets: ['sso-server:8080']
scrape_interval: 15s
metrics_path: /metrics
Key Metrics
| Metric | Type | Description |
|---|---|---|
| sso_http_request_duration_seconds | Histogram | HTTP request latency by route, method, status |
| sso_db_pool_connections_total | Gauge | Current database pool connections |
| sso_db_pool_connections_idle | Gauge | Idle database pool connections |
| sso_db_pool_connections_max | Gauge | Maximum configured pool connections |
| sso_job_queue_depth | Gauge | Pending background jobs |
| sso_job_processing_duration_seconds | Histogram | Background job execution latency |
| sso_webhook_delivery_latency_seconds | Histogram | Webhook delivery time |
| sso_active_users_total | Gauge | Total active users |
| sso_login_failures_total | Counter | Failed login attempts by reason |
Recommended Alerts
groups:
- name: sso-alerts
rules:
# High HTTP latency
- alert: SSOHighLatency
expr: histogram_quantile(0.95, rate(sso_http_request_duration_seconds_bucket[5m])) > 1
for: 5m
labels:
severity: warning
annotations:
summary: "SSO P95 latency exceeds 1 second"
# Database pool exhaustion
- alert: SSODatabasePoolExhaustion
expr: sso_db_pool_connections_idle / sso_db_pool_connections_total < 0.1
for: 2m
labels:
severity: critical
annotations:
summary: "Database connection pool nearly exhausted"
# Job queue backpressure
- alert: SSOJobQueueBacklog
expr: sso_job_queue_depth > 100
for: 5m
labels:
severity: warning
annotations:
summary: "Background job queue has significant backlog"
# High error rate
- alert: SSOHighErrorRate
expr: rate(sso_http_request_duration_seconds_count{status=~"5.."}[5m]) / rate(sso_http_request_duration_seconds_count[5m]) > 0.05
for: 2m
labels:
severity: critical
annotations:
summary: "Error rate exceeds 5%"
Grafana Dashboard Queries
# Request rate by route
sum(rate(sso_http_request_duration_seconds_count[5m])) by (route)
# P50/P95/P99 latency
histogram_quantile(0.50, rate(sso_http_request_duration_seconds_bucket[5m]))
histogram_quantile(0.95, rate(sso_http_request_duration_seconds_bucket[5m]))
histogram_quantile(0.99, rate(sso_http_request_duration_seconds_bucket[5m]))
# Database pool utilization percentage
100 * (1 - sso_db_pool_connections_idle / sso_db_pool_connections_total)
# Job processing throughput
rate(sso_job_processing_duration_seconds_count[5m])
Database Monitoring
SQLite:
- Database file size
- WAL file size
- Checkpoint frequency
- Pool metrics via `sso_db_pool_connections_*`
PostgreSQL:
- Connection count
- Query duration
- Cache hit ratio
- Replication lag (if using replication)
- Pool metrics via `sso_db_pool_connections_*`
MySQL:
- Connection count
- Query duration
- Buffer pool usage
- Replication lag (if using replication)
- Pool metrics via `sso_db_pool_connections_*`
Backup and Recovery
SQLite Backup
# Backup database (includes WAL)
sqlite3 data.db ".backup backup.db"
# Or use file copy (ensure WAL is checkpointed first)
cp data.db data.db.backup
cp data.db-wal data.db-wal.backup
Automated Backup:
#!/bin/bash
BACKUP_DIR="/backups"
DATE=$(date +%Y%m%d_%H%M%S)
sqlite3 /app/data/data.db ".backup $BACKUP_DIR/data_$DATE.db"
find $BACKUP_DIR -name "data_*.db" -mtime +7 -delete
PostgreSQL Backup
# Logical backup
pg_dump -U sso -h localhost sso > backup.sql
# Physical backup (recommended for large databases)
pg_basebackup -U replication -D /backup/base -Fp -Xs -P
Automated Backup:
#!/bin/bash
pg_dump -U sso -h localhost sso | gzip > /backups/sso_$(date +%Y%m%d).sql.gz
find /backups -name "sso_*.sql.gz" -mtime +7 -delete
MySQL Backup
# Logical backup
mysqldump -u sso -p sso > backup.sql
# Parallel logical backup (faster for large databases; note that mysqlpump is deprecated as of MySQL 8.0.34)
mysqlpump -u sso -p sso > backup.sql
Automated Backup:
#!/bin/bash
mysqldump -u sso -p${MYSQL_PASSWORD} sso | gzip > /backups/sso_$(date +%Y%m%d).sql.gz
find /backups -name "sso_*.sql.gz" -mtime +7 -delete
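The cron scripts above only write backups; verifying them is a separate step worth automating. A sketch that checks gzip integrity of each dump, using the same `/backups` path as the scripts above:

```shell
#!/bin/sh
# Walk every compressed dump and confirm the archive is readable end to end.
BACKUP_DIR="/backups"
for f in "$BACKUP_DIR"/sso_*.sql.gz; do
  [ -e "$f" ] || continue          # glob did not match: no backups yet
  if gzip -t "$f" 2>/dev/null; then
    echo "ok: $f"
  else
    echo "CORRUPT: $f" >&2
  fi
done
```

Pair this with a periodic restore drill into a scratch database; an archive that decompresses cleanly can still be an incomplete dump.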
Troubleshooting
Build Errors
Error: undefined reference to ...
Cause: Incorrect feature flag or missing dependencies
Solution: Ensure you’re using the correct feature flag:
# Check features
cargo build --release --no-default-features --features db_psql
Connection Errors
Error: failed to connect to database
Cause: DATABASE_URL doesn’t match build backend
Solution: Verify DATABASE_URL matches your build:
# For PostgreSQL build, use postgres:// URL
DATABASE_URL=postgres://user:pass@host:5432/db
# For MySQL build, use mysql:// URL
DATABASE_URL=mysql://user:pass@host:3306/db
# For SQLite build, use sqlite: URL
DATABASE_URL=sqlite:./data.db
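A pre-flight check in the deploy script can catch this mismatch before startup. A sketch that pairs the binary name from the build table with the URL scheme; both values here are illustrative:

```shell
#!/bin/sh
BINARY="sso_psql"                                   # e.g. basename of the deployed binary
DATABASE_URL="postgres://user:pass@host:5432/db"    # from the environment
case "$BINARY:$DATABASE_URL" in
  sso_sqlite:sqlite:*|sso_psql:postgres:*|sso_mysql:mysql:*)
    echo "backend and DATABASE_URL match" ;;
  *)
    echo "mismatch: rebuild with the correct feature flag or fix DATABASE_URL" >&2
    exit 1 ;;
esac
```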
Migration Errors
Error: migration already applied
Cause: Database state inconsistent with migration history
Solution: Check migration state in database:
SELECT * FROM _seaql_migrations;
Never manually alter migration tables. If corrupted, restore from backup.
Performance Issues
Symptom: Slow query performance
PostgreSQL:
-- Check slow queries
SELECT query, calls, total_time, mean_time
FROM pg_stat_statements
ORDER BY mean_time DESC
LIMIT 10;
-- Check index usage
SELECT schemaname, tablename, indexname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0;
MySQL:
-- Check slow queries
SELECT * FROM mysql.slow_log
ORDER BY query_time DESC
LIMIT 10;
-- Check index usage
SHOW INDEX FROM users;
Security Considerations
Environment Variables
- Never commit `.env` to version control
- Use secrets management (AWS Secrets Manager, HashiCorp Vault)
- Rotate JWT keys and encryption keys regularly
Database Access
- Use dedicated database user with minimal privileges
- Enable SSL/TLS for database connections
- Restrict database access to application servers only
Network Security
- Place database on private network
- Use firewall rules to restrict access
- Enable connection encryption (SSL/TLS)
Backup Security
- Encrypt backups at rest
- Store backups in separate location
- Restrict backup access to authorized personnel
Related Documentation
- Health Checks - Monitoring endpoints
- Background Jobs - System maintenance tasks
- Error Handling - Error responses and debugging