LogTide

Migrate from Splunk

Difficulty: Medium
Estimated time: 6-12 hours

Replace Splunk's expensive licensing model with LogTide's self-hosted solution. Get native Sigma rules support for security detection without vendor lock-in.

Why Migrate from Splunk?

Eliminate License Costs

Splunk charges per GB/day indexed. Enterprise customers often pay $50K-$500K+/year. LogTide is open-source with only infrastructure costs.

Sigma Rules (Industry Standard)

Replace Splunk's proprietary SPL with standard Sigma detection rules. Access 2000+ community rules from SigmaHQ.

Simpler Architecture

No more indexer clusters, search heads, or deployment servers. LogTide runs as a single Docker Compose stack.

No Data Limits

No daily indexing limits. Ingest as much data as your infrastructure can handle without worrying about license overages.

Feature Comparison

Feature | Splunk | LogTide
Log Ingestion | HEC, Forwarders | HTTP API, SDKs, OTLP
Query Language | SPL (proprietary) | REST API + full-text
Full-text Search | Yes | Yes
Real-time Streaming | Yes | SSE
Alerts | Yes | Yes
Detection Rules | Splunk ES (extra license) | Sigma (included)
MITRE ATT&CK | Splunk ES | Included
Incident Management | Splunk ES / SOAR | Included
OpenTelemetry | Partial | Native OTLP
Self-hosted | Yes (licensed) | Yes (free)
Pricing | $150-$1800/GB/day | Infrastructure only

Step 1: Inventory Your Splunk Setup

Document your existing Splunk configuration:

What to Document

  • Data inputs: Universal Forwarders, HEC endpoints, scripted inputs
  • Indexes: List all indexes and their retention settings
  • Saved searches: Export scheduled searches and alerts
  • Dashboards: Document key dashboards and visualizations
  • Props/transforms: Document field extractions and parsing rules

Export Splunk configuration using the REST API:

# Export saved searches (alerts)
curl -k -u admin:password "https://splunk:8089/servicesNS/-/-/saved/searches?output_mode=json" > saved_searches.json

# Export dashboards
curl -k -u admin:password "https://splunk:8089/servicesNS/-/-/data/ui/views?output_mode=json" > dashboards.json

# List all indexes
curl -k -u admin:password "https://splunk:8089/services/data/indexes?output_mode=json"
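
To turn that export into a migration checklist, you can pull each saved search's name and SPL out of the JSON with jq. A minimal sketch, assuming jq is installed and the standard Splunk REST response layout (entry[].name, entry[].content.search); attribute names can vary between Splunk versions:

# Build a name/SPL worksheet of searches to migrate (requires jq)
jq -r '.entry[] | "\(.name)\t\(.content.search)"' saved_searches.json > alerts_to_migrate.tsv

# Count scheduled searches that will need LogTide alert rules
jq '[.entry[] | select(.content.is_scheduled == true)] | length' saved_searches.json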

Step 2: Deploy LogTide

See the Deployment Guide for full instructions. Quick start:

# Clone LogTide
git clone https://github.com/logtide-dev/logtide.git
cd logtide/docker

# Configure
cp .env.example .env
# Edit .env with your settings

# Start
docker compose up -d

# Verify
curl http://localhost:8080/health

Create your organization and project via the UI, then generate an API key.
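
Before pointing real traffic at the stack, a quick smoke test with the new key is worthwhile. A minimal sketch using the ingest and query endpoints covered later in this guide; replace lp_xxx with your key, and note that whether the query endpoint requires the same header is an assumption here:

# Send one test event
curl -X POST "http://localhost:8080/api/v1/ingest" \
  -H "X-API-Key: lp_xxx" \
  -H "Content-Type: application/json" \
  -d '{"logs": [{"service": "migration-test", "level": "info", "message": "hello from LogTide"}]}'

# Confirm it comes back from the query API
curl -H "X-API-Key: lp_xxx" "http://localhost:8080/api/v1/logs?service=migration-test"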

Step 3: Replace Universal Forwarder

Replace the Splunk Universal Forwarder with Fluent Bit to ship logs to LogTide.

Before (Splunk)
# inputs.conf
[monitor:///var/log/app/*.log]
index = main
sourcetype = app_logs

# outputs.conf
[tcpout]
defaultGroup = splunk_indexers

[tcpout:splunk_indexers]
server = splunk-indexer:9997

After (Fluent Bit)
[SERVICE]
  Flush         1
  Log_Level     info

[INPUT]
  Name          tail
  Path          /var/log/app/*.log
  Tag           app.*

[OUTPUT]
  Name          http
  Match         *
  Host          logtide.internal
  Port          8080
  URI           /api/v1/ingest
  Format        json
  Header        X-API-Key lp_xxx
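
If your applications don't emit JSON yet, the job done by props.conf field extractions moves into a Fluent Bit parser. A minimal sketch, assuming plain log lines like 2025-01-15T11:00:00+0000 ERROR connection failed; adjust the regex to your actual format (see also Common Issues below):

[PARSER]
  Name          app_log
  Format        regex
  Regex         ^(?<time>[^ ]+) (?<level>[A-Z]+) (?<message>.*)$
  Time_Key      time
  Time_Format   %Y-%m-%dT%H:%M:%S%z

[FILTER]
  Name          parser
  Match         app.*
  Key_Name      log
  Parser        app_log

Note that [PARSER] sections live in a separate parsers file, referenced from [SERVICE] via Parsers_File.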

HEC Migration

If you're using the Splunk HTTP Event Collector (HEC), migrate to LogTide's HTTP ingest API:

Before (Splunk HEC)

curl -X POST "https://splunk:8088/services/collector" \
  -H "Authorization: Splunk HEC_TOKEN" \
  -d '{
    "event": "User logged in",
    "sourcetype": "app_logs",
    "index": "main"
  }'

After (LogTide API)

curl -X POST "http://logtide:8080/api/v1/ingest" \
  -H "X-API-Key: lp_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "logs": [{
      "service": "app",
      "level": "info",
      "message": "User logged in"
    }]
  }'
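
During cutover, a common pattern is to dual-write to both systems for a while and compare results before decommissioning Splunk. A rough shell sketch combining the two calls above; hostnames and tokens are placeholders:

# Send the same event to Splunk HEC and LogTide while validating parity
send_event() {
  msg="$1"
  curl -s -X POST "https://splunk:8088/services/collector" \
    -H "Authorization: Splunk HEC_TOKEN" \
    -d "{\"event\": \"$msg\", \"sourcetype\": \"app_logs\", \"index\": \"main\"}"
  curl -s -X POST "http://logtide:8080/api/v1/ingest" \
    -H "X-API-Key: lp_xxx" \
    -H "Content-Type: application/json" \
    -d "{\"logs\": [{\"service\": \"app\", \"level\": \"info\", \"message\": \"$msg\"}]}"
}

send_event "User logged in"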

Step 4: Query Migration (SPL to LogTide)

Splunk uses SPL (Search Processing Language); LogTide exposes the same filters as REST query parameters. Here's how common SPL queries translate:

SPL Query → LogTide API
index=main sourcetype=app_logs → GET /api/v1/logs?service=app
index=main level=ERROR → GET /api/v1/logs?level=error
index=main "connection failed" → GET /api/v1/logs?q=connection%20failed
index=main earliest=-1h → GET /api/v1/logs?from=2025-01-15T11:00:00Z
index=main | stats count by host → GET /api/v1/logs/aggregated?interval=1h

Note that from takes an absolute ISO 8601 timestamp, so relative ranges like earliest=-1h must be computed client-side, and the aggregation endpoint returns time-bucketed counts, the closest equivalent to stats.
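
The filters above compose as ordinary query-string parameters. A sketch of a combined query, assuming the parameters can be mixed freely; the from timestamp is computed with GNU date (on macOS use date -u -v-1H +%Y-%m-%dT%H:%M:%SZ):

# Equivalent of: index=main sourcetype=app_logs level=ERROR "connection failed" earliest=-1h
FROM=$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)
curl -H "X-API-Key: lp_xxx" \
  "http://logtide:8080/api/v1/logs?service=app&level=error&q=connection%20failed&from=$FROM"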

Step 5: Alert Migration

Convert Splunk saved searches (alerts) to LogTide alert rules:

Splunk Alert
# savedsearches.conf
[High Error Rate]
search = index=main level=ERROR
| stats count
| where count > 100
cron_schedule = */5 * * * *
alert_type = number of events
alert_threshold = 100
action.email.to = [email protected]

LogTide Alert Rule

{
  "name": "High Error Rate",
  "enabled": true,
  "level": ["error"],
  "threshold": 100,
  "timeWindow": 5,
  "emailRecipients": ["alerts@example.com"]
}
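
If you exported saved searches in Step 1, you can scaffold rule JSON in bulk rather than retyping each alert. A hedged jq sketch; Splunk attribute names such as alert_threshold and action.email.to vary by version and alert type, so review every generated rule by hand:

# Scaffold LogTide alert rules from the Step 1 export (manual review required)
jq '[.entry[] | {
  name: .name,
  enabled: true,
  level: ["error"],
  threshold: ((.content.alert_threshold // "100") | tonumber),
  timeWindow: 5,
  emailRecipients: [(.content["action.email.to"] // "alerts@example.com")]
}]' saved_searches.json > logtide_alert_rules.json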

Step 6: Security Detection Migration

If you're using Splunk Enterprise Security, migrate to LogTide's Sigma-based detection:

Benefits of Sigma Rules

  • Industry standard format (not vendor-locked)
  • 2000+ community rules from SigmaHQ
  • MITRE ATT&CK mapping included
  • No additional licensing required

Example Sigma rule for detecting suspicious PowerShell:

title: Suspicious PowerShell Command
status: stable
level: high
logsource:
  category: process_creation
  product: windows
detection:
  selection:
    CommandLine|contains:
      - '-enc'
      - '-EncodedCommand'
      - 'IEX'
      - 'Invoke-Expression'
  condition: selection
tags:
  - attack.execution
  - attack.t1059.001

Import Sigma rules via the LogTide UI at /dashboard/security/sigma.
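
To pull the community ruleset referenced above, clone the SigmaHQ repository and stage the categories you care about before importing. A quick sketch; the repository layout may change over time:

# Fetch SigmaHQ community rules and stage Windows process-creation rules
git clone https://github.com/SigmaHQ/sigma.git
mkdir -p sigma-import
cp sigma/rules/windows/process_creation/*.yml sigma-import/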

Concept Mapping

Splunk Term | LogTide Equivalent | Notes
Index | Project | One Splunk index = one LogTide project
Sourcetype | Service | Use the service field to differentiate log sources
Host | metadata.host | Store in the metadata JSON field
Source | metadata.source | Store in the metadata JSON field
Universal Forwarder | Fluent Bit / SDK | Use Fluent Bit or an application SDK
HEC | POST /api/v1/ingest | HTTP API endpoint
Saved Search | Alert Rule | Threshold-based alerts
Enterprise Security | Sigma Rules + SIEM | Built-in, no extra license
props.conf / transforms.conf | N/A (automatic JSON parsing) | Send structured JSON logs
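
Put together, a typical Splunk HEC event translates into a LogTide record like this. A hedged sketch based on the mapping above; the metadata layout is inferred from the table, and time uses ISO 8601 as noted under Common Issues:

# Splunk HEC event:
#   {"event": "User logged in", "host": "web-01", "source": "/var/log/app.log",
#    "sourcetype": "app_logs", "index": "main", "time": 1736938800}
# LogTide equivalent, per the concept mapping:
curl -X POST "http://logtide:8080/api/v1/ingest" \
  -H "X-API-Key: lp_xxx" \
  -H "Content-Type: application/json" \
  -d '{
    "logs": [{
      "service": "app",
      "level": "info",
      "message": "User logged in",
      "time": "2025-01-15T11:00:00Z",
      "metadata": {"host": "web-01", "source": "/var/log/app.log"}
    }]
  }'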

Common Issues

Field extraction differences
Splunk auto-extracts fields using props.conf. LogTide expects structured JSON logs instead. Use Fluent Bit parsers to structure logs before sending (see the parser sketch in Step 3), or update your application to emit JSON.
SPL queries don't translate directly
Complex SPL pipelines built on pipes, stats, and eval need to be rethought. Use LogTide's aggregation API for time-series stats; for heavier transformations, consider processing in your application.
Missing _time field
LogTide uses a time field in ISO 8601 format rather than Splunk's _time. Ensure your log shipper sets the correct timestamp field; a conversion sketch follows below.
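
If you're replaying Splunk-exported events, the timestamp conversion can be done with jq. A minimal sketch; the field names (_time, _raw, sourcetype) follow Splunk search exports, where _time may be an epoch value or already ISO 8601 depending on your export settings:

# Convert a Splunk-exported event (epoch _time) into LogTide's shape (ISO 8601 time)
jq '{logs: [{
  service: .sourcetype,
  level: "info",
  message: ._raw,
  time: (._time | tonumber | floor | todate)
}]}' splunk_event.json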

Next Steps