Deployment Guide
Deploy LogTide on your infrastructure using pre-built Docker images or build from source.
Pre-built Images (Recommended)
Quick Start (2 Minutes)
# Create project directory
mkdir logtide && cd logtide
# Download docker-compose.yml and environment template
curl -O https://raw.githubusercontent.com/logtide-dev/logtide/main/docker/docker-compose.yml
curl -O https://raw.githubusercontent.com/logtide-dev/logtide/main/docker/.env.example
mv .env.example .env
# Edit .env with secure passwords
nano .env
# Start LogTide
docker compose up -d
Required Environment Variables
| Variable | Description | Example |
|---|---|---|
| DB_PASSWORD | PostgreSQL password | random_secure_password |
| REDIS_PASSWORD | Redis password (optional) | another_secure_password |
| API_KEY_SECRET | Encryption key (32+ chars) | your_32_character_secret_key_here |
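Rather than inventing secrets by hand, you can generate them and append them to .env in one step. A sketch, assuming openssl is available on the host:

```shell
# Generate random secrets and append them to .env (openssl assumed available)
echo "DB_PASSWORD=$(openssl rand -hex 16)" >> .env
echo "REDIS_PASSWORD=$(openssl rand -hex 16)" >> .env
echo "API_KEY_SECRET=$(openssl rand -hex 32)" >> .env   # 64 hex chars, satisfies the 32+ minimum
```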
Database migrations run automatically on first start.
(Optional) Docker Log Collection with Fluent Bit
To automatically collect logs from all Docker containers using Fluent Bit:
# Download Fluent Bit configuration files
curl -O https://raw.githubusercontent.com/logtide-dev/logtide/main/docker/fluent-bit.conf
curl -O https://raw.githubusercontent.com/logtide-dev/logtide/main/docker/parsers.conf
curl -O https://raw.githubusercontent.com/logtide-dev/logtide/main/docker/extract_container_id.lua
curl -O https://raw.githubusercontent.com/logtide-dev/logtide/main/docker/wrap_logs.lua
# Add your LogTide API key to .env
echo "FLUENT_BIT_API_KEY=your_api_key_here" >> .env
# Start with logging profile enabled
docker compose --profile logging up -d
This profile is optional. Without it, LogTide runs without the Fluent Bit container.
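For orientation, the downloaded fluent-bit.conf ends in an HTTP output section that ships collected logs to the backend. A minimal sketch of such an output is shown below; the host, port, URI, and header values here are assumptions, so treat the downloaded file as authoritative:

```ini
[OUTPUT]
    name   http
    match  *
    host   backend
    port   8080
    uri    /api/v1/logs/bulk
    format json
    header Authorization Bearer ${FLUENT_BIT_API_KEY}
```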
ARM64 / Raspberry Pi Support
LogTide images (logtide/backend, logtide/frontend) are built for both linux/amd64 and linux/arm64,
so they work natively on Raspberry Pi 3/4/5 (64-bit OS).
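To confirm a Pi is actually running a 64-bit OS (a 32-bit userland will pull the wrong image variant), check the machine architecture:

```shell
# "aarch64" (or "arm64") means a 64-bit OS; "armv7l" means 32-bit, which won't run arm64 images
uname -m
```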
Fluent Bit on ARM64: The default Fluent Bit image may have limited ARM64 support.
You can specify an alternative image in your .env file:
# For Raspberry Pi / ARM64, use the official multi-arch registry
FLUENT_BIT_IMAGE=cr.fluentbit.io/fluent/fluent-bit:4.2.2
# Or build your own ARM64 image
FLUENT_BIT_IMAGE=myregistry/fluent-bit-arm64:latest
A Raspberry Pi 4 (4GB+) or Pi 5 is recommended for homelab and small deployments. For high-volume workloads, consider x86 hardware or our managed cloud.
Available Docker Images
| Image | Registry |
|---|---|
| logtide/backend | Docker Hub |
| logtide/frontend | Docker Hub |
| ghcr.io/logtide-dev/logtide-backend | GitHub Container Registry |
| ghcr.io/logtide-dev/logtide-frontend | GitHub Container Registry |
Always pin to a specific version in production instead of using latest:
# In your .env file
LOGTIDE_BACKEND_IMAGE=logtide/backend:0.6.3
LOGTIDE_FRONTEND_IMAGE=logtide/frontend:0.6.3
Ready to Go
Frontend: http://localhost:3000 | API: http://localhost:8080
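In deployment scripts it is useful to block until the backend actually answers before running smoke tests. A small sketch (the /health endpoint is the one used in the monitoring section below; the retry count and interval are arbitrary):

```shell
# Poll a URL until it responds, with bounded retries (curl assumed available)
wait_for() {
  url=$1; tries=${2:-30}
  n=0
  until curl -fsS "$url" >/dev/null 2>&1; do
    n=$((n + 1))
    [ "$n" -ge "$tries" ] && return 1
    sleep 2
  done
}
# Usage: wait_for http://localhost:8080/health && echo "backend is up"
```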
Simplified Deployment (No Redis)
Starting with v0.5.0, LogTide can run without Redis for simpler deployments.
When REDIS_URL is not configured, LogTide automatically uses PostgreSQL-based alternatives:
- Job queues: graphile-worker (PostgreSQL-based) instead of BullMQ
- Live tail streaming: PostgreSQL LISTEN/NOTIFY instead of Redis pub/sub
- Rate limiting: in-memory fallback (per-instance)
- Caching: disabled (queries go directly to the database)
When to Use Simplified Deployment
Use the simplified deployment for:
- Home labs and personal projects
- Development environments
- Single-instance deployments
- Resource-constrained systems (Raspberry Pi)
- Low to medium log volume (<1000 logs/sec)
Use the full deployment (with Redis) for:
- Horizontal scaling (multiple backend instances)
- High log volume (>1000 logs/sec sustained)
- Production with high-availability requirements
- When you need query result caching
- Distributed rate limiting across instances
Quick Start (Simplified)
# Create project directory
mkdir logtide && cd logtide
# Download simplified docker-compose (no Redis)
curl -O https://raw.githubusercontent.com/logtide-dev/logtide/main/docker/docker-compose.simple.yml
curl -O https://raw.githubusercontent.com/logtide-dev/logtide/main/docker/.env.example
mv .env.example .env
# Edit .env - only DB_PASSWORD and API_KEY_SECRET required
nano .env
# Start LogTide (fewer containers, less memory)
docker compose -f docker-compose.simple.yml up -d
Required Environment Variables (Simplified)
| Variable | Description | Required |
|---|---|---|
| DB_PASSWORD | PostgreSQL password | Yes |
| API_KEY_SECRET | Encryption key (32+ chars) | Yes |
| REDIS_URL | Not set = uses PostgreSQL alternatives | No |
You can add Redis at any time by setting REDIS_URL in your .env file
and switching to the full docker-compose.yml. No data migration required - Redis is only used for
ephemeral data (cache, rate limits, job queues).
Remote Deployment
LogTide automatically detects the correct API URL based on how you access the frontend:
- Via IP:3000 (e.g., http://192.168.1.100:3000) → API auto-detected at http://192.168.1.100:8080
- Via domain on port 80/443 (e.g., https://logtide.example.com) → uses relative URLs (assumes a reverse proxy)
No PUBLIC_API_URL configuration needed for these scenarios!
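The detection rule can be illustrated with a tiny sketch (this is not LogTide's actual code, just the mapping it applies):

```shell
# Illustration only: map the frontend origin to the API base URL
api_url_for() {
  case "$1" in
    http://*:3000) echo "${1%:3000}:8080" ;;  # IP:3000 -> same host, port 8080
    *)             echo "/api/" ;;            # domain on 80/443 -> relative URLs
  esac
}
# api_url_for http://192.168.1.100:3000   -> http://192.168.1.100:8080
# api_url_for https://logtide.example.com -> /api/
```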
Example: VPS Deployment (No Config Needed)
# Server IP: 192.168.1.100
# .env configuration - no PUBLIC_API_URL needed!
DB_PASSWORD=secure_password
REDIS_PASSWORD=secure_password
API_KEY_SECRET=your_32_character_secret_key_here
# Access points:
# Frontend: http://192.168.1.100:3000
# API: http://192.168.1.100:8080 (auto-detected)
With Reverse Proxy (nginx/Traefik)
When using a domain (port 80/443), LogTide assumes a reverse proxy is in place and uses relative URLs (/api/).
Your reverse proxy must route both the frontend and the API; otherwise, API calls will fail with 404.
Example nginx configuration:
server {
listen 443 ssl;
server_name logtide.example.com;
# Frontend
location / {
proxy_pass http://localhost:3000;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
# Backend API - REQUIRED for relative URLs to work
location /api/ {
proxy_pass http://localhost:8080/api/;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
}
# SSE for live tail (requires special headers)
location /api/v1/logs/stream {
proxy_pass http://localhost:8080/api/v1/logs/stream;
proxy_set_header Connection '';
proxy_http_version 1.1;
chunked_transfer_encoding off;
proxy_buffering off;
proxy_cache off;
}
}
Alternative: API on Subdomain
If you prefer to host the API on a separate subdomain instead of proxying /api/:
# .env configuration
PUBLIC_API_URL=https://api.logtide.example.com
# Access points:
# Frontend: https://logtide.example.com
# API: https://api.logtide.example.com
| Scenario | PUBLIC_API_URL | Notes |
|---|---|---|
| Docker via IP:3000 | (not needed) | Auto-detected → IP:8080 |
| Domain + reverse proxy | (not needed) | Uses /api/v1 (proxy must route it) |
| API on subdomain | https://api.example.com | Explicit configuration |
| Custom port setup | http://host:custom-port | When backend is not on :8080 |
| LogTide Cloud | https://api.logtide.dev | Pre-configured |
Horizontal Scaling
Horizontal scaling requires Redis for shared state across instances (rate limiting, sessions, job distribution). The simplified deployment (no Redis) only supports single-instance deployments.
Enable Horizontal Scaling
The default docker-compose.yml runs a single instance of each service.
For horizontal scaling, download and use the Traefik overlay:
# Download Traefik overlay (adds load balancer)
curl -O https://raw.githubusercontent.com/logtide-dev/logtide/main/docker/docker-compose.traefik.yml
# Start with horizontal scaling support
docker compose -f docker-compose.yml -f docker-compose.traefik.yml up -d
# Scale to 3 backend instances and 2 workers
docker compose -f docker-compose.yml -f docker-compose.traefik.yml up -d --scale backend=3 --scale worker=2
# Check running instances
docker compose ps
When using the Traefik overlay, access changes to a single port:
- With Traefik: http://localhost:3080 (frontend + API on the same port)
- Without Traefik: frontend at :3000, API at :8080
The LOGTIDE_PORT environment variable controls the Traefik port (default: 3080).
Architecture
| Component | Default | With Traefik | Notes |
|---|---|---|---|
| Traefik | - | 1 instance | Load balancer, reverse proxy |
| Backend | 1 instance | N instances | Stateless API servers |
| Worker | 1 instance | N instances | Background job processors (BullMQ) |
| Frontend | 1 instance | N instances | SvelteKit SSR |
| Redis | Optional | 1 instance (required) | Rate limiting, job queues, cache. Required for scaling. |
| PostgreSQL | 1 instance | 1 instance | TimescaleDB for time-series data |
- Rate limiting: Stored in Redis (shared across all backend instances)
- Sessions: Stored in Redis (no sticky sessions required)
- Job queues: BullMQ distributes work across all workers automatically
- Health checks: Traefik removes unhealthy instances from rotation
Without Redis, rate limiting is per-instance and job queues use PostgreSQL (graphile-worker), which doesn't support multi-instance scaling.
Kubernetes (Helm)
Quick Install
# Add the LogTide Helm repository
helm repo add logtide https://logtide-dev.github.io/logtide-helm-chart
helm repo update
# Install LogTide
helm install logtide logtide/logtide \
  --namespace logtide --create-namespace \
  --set timescaledb.auth.password=<your-db-password> \
  --set redis.auth.password=<your-redis-password>
What's Included
- Backend API (2+ replicas)
- Frontend (2+ replicas)
- Worker (BullMQ jobs)
- TimescaleDB StatefulSet
- Redis StatefulSet
- Horizontal Pod Autoscaler
- Ingress (nginx, ALB, etc.)
- ServiceMonitor (Prometheus)
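The --set flags used in the install commands can also live in a values file, which is easier to keep in version control. A sketch, assuming the key names match the chart defaults (verify with helm show values logtide/logtide):

```yaml
# values.yaml (sketch; confirm keys against the chart's own values)
timescaledb:
  auth:
    password: <your-db-password>
redis:
  auth:
    password: <your-redis-password>
```

Install with: helm install logtide logtide/logtide --namespace logtide --create-namespace -f values.yaml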
Enable Ingress
helm install logtide logtide/logtide \
  --namespace logtide --create-namespace \
  --set timescaledb.auth.password=<password> \
  --set redis.auth.password=<password> \
  --set ingress.enabled=true \
  --set ingress.className=nginx \
  --set 'ingress.hosts[0].host=logtide.example.com' \
  --set 'ingress.hosts[0].paths[0].path=/' \
  --set 'ingress.hosts[0].paths[0].pathType=Prefix' \
  --set 'ingress.hosts[0].paths[0].service=frontend'
Use External Database
For production, you can use an external managed database (AWS RDS, Cloud SQL, etc.):
helm install logtide logtide/logtide \
  --namespace logtide --create-namespace \
  --set timescaledb.enabled=false \
  --set externalDatabase.host=your-db.region.rds.amazonaws.com \
  --set externalDatabase.port=5432 \
  --set externalDatabase.database=logtide \
  --set externalDatabase.username=logtide \
  --set externalDatabase.password=<password> \
  --set redis.auth.password=<password>
Cloud-Specific Examples
AWS EKS
# values-eks.yaml
global:
storageClass: gp3
ingress:
enabled: true
className: alb
annotations:
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/target-type: ip
GCP GKE
# values-gke.yaml
global:
storageClass: standard-rwo
ingress:
enabled: true
className: gce
- Artifact Hub → Browse chart versions and values
- GitHub Repository → Source code and issues
Build from Source (Alternative)
Clone and Build
# Clone the repository
git clone https://github.com/logtide-dev/logtide.git
cd logtide/docker
# Copy environment template
cp ../.env.example .env
# Edit .env with your configuration
nano .env
# Build and start all services
docker compose up -d --build
Services Running
Access LogTide at http://your-server-ip:3000
Monitoring & Maintenance
Health Checks
# Check all services status
docker compose ps
# Check backend health
curl http://localhost:8080/health
# With Traefik overlay
curl http://localhost:3080/health
# Check database
docker compose exec postgres psql -U logtide -d logtide -c "SELECT COUNT(*) FROM logs;"
Common Commands
# Restart a service
docker compose restart backend
# View service logs
docker compose logs --tail=100 -f backend
# Stop all services
docker compose down
# Update to latest version
docker compose pull
docker compose up -d
Database Backup
# Create backup
docker compose exec postgres pg_dump -U logtide logtide > backup_$(date +%Y%m%d).sql
# Restore from backup
docker compose exec -T postgres psql -U logtide logtide < backup_20250115.sql
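Backups accumulate quickly. A small rotation sketch that keeps only the seven newest dumps (the filename pattern matches the backup command above; adjust the count to taste):

```shell
# Keep the 7 newest backup_*.sql files in the current directory, delete the rest
ls -1t backup_*.sql 2>/dev/null | tail -n +8 | while read -r old; do
  rm -f -- "$old"
done
```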