Migrate from ELK Stack

Difficulty: Easy
Estimated time: 3-6 hours

Simplify your logging infrastructure by replacing the complex ELK (Elasticsearch, Logstash, Kibana) stack with LogTide's all-in-one solution. No more cluster management or version compatibility issues.

Why Migrate from ELK?

Dramatically Simpler

No more managing Elasticsearch clusters, Logstash pipelines, and Kibana. LogTide is a single Docker Compose deployment.

Lower Resource Usage

Elasticsearch requires significant RAM for its JVM heap. LogTide with TimescaleDB uses memory more efficiently and compresses stored logs better.

Built-in SIEM

ELK requires additional components (Elastic Security, formerly Elastic SIEM) for threat detection. LogTide includes Sigma rules and incident management out of the box.

No Version Headaches

ELK components must be version-matched. LogTide is a single versioned release with all components guaranteed compatible.

Feature Comparison

| Feature | ELK Stack | LogTide |
| --- | --- | --- |
| Components | 3+ (ES, Logstash, Kibana) | Single stack |
| Log Ingestion | Beats, Logstash | HTTP API, SDKs, OTLP |
| Query Language | Lucene / KQL | REST API + Full-text |
| Full-text Search | Yes | Yes |
| Real-time Streaming | Kibana Discover | SSE |
| Alerting | Watcher / ElastAlert | Built-in |
| Security Detection | Elastic SIEM (paid) | Sigma (included) |
| OpenTelemetry | APM Server | Native OTLP |
| Cluster Management | Complex (shards, replicas) | Simple (PostgreSQL) |
| Memory Requirements | High (ES heap: 16-32 GB) | Moderate (4-8 GB) |
| Pricing | Open-source + paid features | Fully open-source |

Step 1: Inventory Your ELK Setup

Document your existing ELK configuration:

What to Document

  • Data shippers: Filebeat, Metricbeat, Logstash pipelines
  • Indices: List indices and their mappings (see the commands after this list)
  • Logstash pipelines: Document filter/output configs
  • Kibana dashboards: Export saved objects
  • Watcher/alerts: Document alert configurations
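
To capture the index inventory, the standard Elasticsearch cat and mapping APIs are enough; adjust the host and index names to your environment:

# List all indices with size and document counts
curl -X GET "http://elasticsearch:9200/_cat/indices?v"

# Dump the field mapping for one index
curl -X GET "http://elasticsearch:9200/app-logs-2025.01.15/_mapping?pretty"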

Export Kibana saved objects:

# Export all saved objects from Kibana
curl -X POST "http://kibana:5601/api/saved_objects/_export" -H "kbn-xsrf: true" -H "Content-Type: application/json" -d '{ 
  "type": ["dashboard", "visualization", "search"],
  "includeReferencesDeep": true
}' > kibana_export.ndjson

Step 2: Deploy LogTide

See the Deployment Guide. LogTide requires far fewer resources than ELK:

ELK Stack Requirements
  • Elasticsearch: 16-32 GB RAM (heap)
  • Logstash: 4-8 GB RAM
  • Kibana: 2-4 GB RAM
  • Total: 22-44 GB RAM minimum

LogTide Requirements
  • Backend: 2-4 GB RAM
  • TimescaleDB: 4-8 GB RAM
  • Redis: 1 GB RAM
  • Total: 7-13 GB RAM

# Clone and start LogTide
git clone https://github.com/logtide-dev/logtide.git
cd logtide/docker
cp .env.example .env
docker compose up -d

# Verify
curl http://localhost:8080/health

Step 3: Replace Beats/Logstash

Replace Filebeat/Logstash with Fluent Bit or direct SDK integration:

Filebeat to Fluent Bit

Before (Filebeat)
# filebeat.yml
filebeat.inputs:
- type: log
  paths:
    - /var/log/app/*.log

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  index: "app-logs-%{+yyyy.MM.dd}"
After (Fluent Bit)
# fluent-bit.conf
[INPUT]
  Name tail
  Path /var/log/app/*.log
  Tag app

[OUTPUT]
  Name http
  Match *
  Host logtide.internal
  Port 8080
  URI /api/v1/ingest
  Format json
  Header X-API-Key lp_xxx
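
Before rolling the new config out everywhere, it's worth smoke-testing the ingest path by hand. The payload fields below are illustrative; check the LogTide ingestion docs for the exact schema:

# Send one test event directly to the ingest endpoint (payload fields are illustrative)
curl -X POST "http://logtide.internal:8080/api/v1/ingest" \
  -H "X-API-Key: lp_xxx" \
  -H "Content-Type: application/json" \
  -d '{"service": "api", "level": "info", "message": "migration smoke test"}'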

Logstash Pipeline to Fluent Bit

Before (Logstash)
# logstash.conf
input {
  beats { port => 5044 }
}

filter {
  grok {
    match => { "message" => "%{TIMESTAMP_ISO8601:time} %{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "app-%{+YYYY.MM.dd}"
  }
}
After (Fluent Bit)
# fluent-bit.conf
[INPUT]
  Name forward
  Listen 0.0.0.0
  Port 24224

[FILTER]
  Name parser
  Match *
  Key_Name message
  Parser app_log

[OUTPUT]
  Name http
  Match *
  Host logtide.internal
  Port 8080
  URI /api/v1/ingest
  Format json
  Header X-API-Key lp_xxx
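
The filter above references a parser named app_log, which you define yourself. A rough regex equivalent of the grok pattern from the Logstash config might look like the sketch below in parsers.conf (loaded via Parsers_File in the [SERVICE] section); tune the regex and time format to match your actual log lines:

# parsers.conf -- approximate stand-in for the grok pattern above
[PARSER]
  Name app_log
  Format regex
  Regex ^(?<time>\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2}(?:[.,]\d+)?) (?<level>[A-Za-z]+) (?<msg>.*)$
  Time_Key time
  Time_Format %Y-%m-%dT%H:%M:%S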

Step 4: Query Migration (Elasticsearch to LogTide)

Elasticsearch Query DSL translates to LogTide REST API parameters:

| Elasticsearch Query | LogTide API |
| --- | --- |
| {"match": {"service": "api"}} | GET /api/v1/logs?service=api |
| {"match": {"level": "error"}} | GET /api/v1/logs?level=error |
| {"query_string": {"query": "timeout"}} | GET /api/v1/logs?q=timeout |
| {"range": {"@timestamp": {"gte": "now-1h"}}} | GET /api/v1/logs?from=2025-01-15T11:00:00Z |
| {"aggs": {"by_service": {"terms": {"field": "service"}}}} | GET /api/v1/logs/aggregated?interval=1h |

KQL (Kibana Query Language) Translation

| KQL | LogTide |
| --- | --- |
| service: api | ?service=api |
| level: error OR level: critical | ?level=error&level=critical |
| "connection timeout" | ?q=connection%20timeout |
| service: api AND level: error | ?service=api&level=error |
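
Putting the pieces together, a typical "errors in the api service mentioning timeout, last hour" search becomes one authenticated request (host and API key reuse the Fluent Bit examples above):

curl -H "X-API-Key: lp_xxx" \
  "http://logtide.internal:8080/api/v1/logs?service=api&level=error&q=timeout&from=2025-01-15T11:00:00Z"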

Step 5: Alert Migration

Convert Elasticsearch Watcher or ElastAlert rules to LogTide alert rules:

Elasticsearch Watcher
{
  "trigger": {
    "schedule": { "interval": "5m" }
  },
  "input": {
    "search": {
      "request": {
        "indices": ["app-*"],
        "body": {
          "query": {
            "match": { "level": "error" }
          }
        }
      }
    }
  },
  "condition": {
    "compare": {
      "ctx.payload.hits.total": { "gt": 100 }
    }
  },
  "actions": {
    "email_admin": {
      "email": {
        "to": "[email protected]"
      }
    }
  }
}
LogTide Alert Rule
{
  "name": "High Error Rate",
  "enabled": true,
  "level": ["error"],
  "threshold": 100,
  "timeWindow": 5,
  "emailRecipients": [
    "[email protected]"
  ]
}
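
If you manage alerting as code, the rule above can be created via the API. The endpoint path here is an assumption; confirm it against your LogTide API reference:

# Create the alert rule (POST path is an assumption, not a documented endpoint)
curl -X POST "http://logtide.internal:8080/api/v1/alerts" \
  -H "X-API-Key: lp_xxx" \
  -H "Content-Type: application/json" \
  -d @high_error_rate.json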

Concept Mapping

| ELK Term | LogTide Equivalent | Notes |
| --- | --- | --- |
| Index | Project | One index pattern = one project |
| Document | Log entry | 1:1 mapping |
| Field | metadata key | Store custom fields in metadata JSON |
| @timestamp | time | ISO 8601 format |
| Filebeat | Fluent Bit / SDK | Use Fluent Bit for file tailing |
| Logstash | Fluent Bit / SDK | Use Fluent Bit filters or preprocess in app |
| Kibana | LogTide UI | Built-in web interface |
| Watcher | Alert Rules | Simpler configuration |
| Elastic SIEM | Sigma Rules + SIEM Dashboard | Included at no extra cost |
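
As a concrete example of the Field-to-metadata row, an Elasticsearch document with custom fields (the field names here are illustrative) becomes a LogTide log entry like:

{
  "time": "2025-01-15T11:00:00Z",
  "level": "error",
  "service": "api",
  "message": "upstream connection timeout",
  "metadata": {
    "client_ip": "10.0.0.5",
    "request_id": "abc-123"
  }
}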

Common Issues

Complex Logstash filters
If you have complex grok patterns or ruby filters, consider:
  • Moving parsing to your application (emit structured JSON)
  • Using Fluent Bit parsers for simple patterns
  • Using OpenTelemetry Collector for advanced transformations

Kibana dashboard recreation
LogTide doesn't have Kibana-style drag-and-drop dashboards yet. Use the SIEM dashboard for security metrics, and the Query API for custom integrations with tools like Grafana.

Index lifecycle management
Elasticsearch ILM is replaced by TimescaleDB retention policies. Configure retention per-project in LogTide settings. Compression is automatic.
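
If you prefer managing retention at the database level instead, TimescaleDB's standard policy function works as usual; the hypertable name logs below is an assumption:

-- Drop chunks older than 30 days (hypertable name is an assumption)
SELECT add_retention_policy('logs', INTERVAL '30 days');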

Next Steps