Migrate from Datadog

Difficulty: Medium | Estimated time: 4-8 hours

Migrate from Datadog's proprietary platform to LogTide and save up to 90% on log management costs while gaining full data ownership and built-in SIEM capabilities.

Why Migrate from Datadog?

Massive Cost Savings

Datadog charges $0.10-$1.70/GB for log ingestion and indexing. A 500 GB/day deployment is roughly 15,000 GB/month, so even a blended rate of $1/GB works out to $15,000+/month. LogTide is self-hosted with zero per-GB fees.

Full Data Ownership

Your logs never leave your infrastructure. No data sent to third parties. Full GDPR compliance with EU data sovereignty.

Built-in SIEM

Sigma detection rules, threat detection, and incident management included. Datadog Cloud SIEM costs extra ($0.20/GB on top of log costs).

Unlimited Users

No per-seat licensing. Add your entire team without worrying about per-user costs or role-based pricing tiers.

Feature Comparison

Feature                       | Datadog                    | LogTide
Log Ingestion (HTTP API)      | Yes                        | Yes
SDKs (Node.js, Python, etc.)  | Yes                        | Yes
OpenTelemetry Support         | Partial (logs only)        | Native OTLP
Full-text Search              | Yes                        | Yes
Real-time Streaming           | Yes                        | Yes (SSE)
Alert Rules                   | Yes                        | Yes
Email/Webhook Notifications   | Yes                        | Yes
Trace Correlation             | Yes                        | Yes
Sigma Rules (SIEM)            | No                         | Built-in
Incident Management           | Cloud SIEM ($0.20/GB)      | Included
MITRE ATT&CK Mapping          | Cloud SIEM                 | Included
Self-hosted Option            | No                         | Yes
Pricing                       | $0.10-$1.70/GB + per-user  | Infrastructure only

Step 1: Inventory Your Datadog Setup

Before migrating, document your existing Datadog configuration:

What to Document

  • Log sources: List all services/hosts sending logs to Datadog
  • Log volume: Check your usage dashboard for average GB/day
  • Active monitors: Export all log-based monitors via API
  • Dashboards: Screenshot or export critical dashboards
  • Log pipelines: Document parsing rules and processors

Export your Datadog configuration using the API:

# Export all log monitors
curl -X GET "https://api.datadoghq.com/api/v1/monitor" \
  -H "DD-API-KEY: $DD_API_KEY" \
  -H "DD-APPLICATION-KEY: $DD_APP_KEY" > monitors.json

# Export all dashboards
curl -X GET "https://api.datadoghq.com/api/v1/dashboard" \
  -H "DD-API-KEY: $DD_API_KEY" \
  -H "DD-APPLICATION-KEY: $DD_APP_KEY" > dashboards.json
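
To scope the alert work in Step 4, it helps to list just the log-based monitors from that export. A minimal Node.js sketch, assuming the export is the JSON array returned by the monitor API (with name, type, and query fields):

import { readFileSync } from 'node:fs';

// Relevant fields from the Datadog monitor export (there are many more).
interface DatadogMonitor {
  name: string;
  type: string;   // e.g. "log alert"
  query: string;
}

const monitors: DatadogMonitor[] = JSON.parse(readFileSync('monitors.json', 'utf8'));

// Log-based monitors are the ones to recreate as LogTide alert rules.
const logMonitors = monitors.filter((m) => m.type === 'log alert');

console.log(`${logMonitors.length} log monitors to migrate:`);
for (const m of logMonitors) {
  console.log(`- ${m.name}: ${m.query}`);
}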

Step 2: Deploy LogTide

Follow the Deployment Guide to set up LogTide. Here's a quick start:

# Clone and configure
git clone https://github.com/logtide-dev/logtide.git
cd logtide/docker

# Copy and edit environment
cp .env.example .env
# Edit .env: Set PUBLIC_API_URL, database passwords, etc.

# Start LogTide
docker compose up -d

# Verify deployment
curl http://localhost:8080/health

Recommended Specs (for 500 GB/day)

  • CPU: 8 cores
  • RAM: 32 GB
  • Disk: 2 TB SSD
  • Network: 1 Gbps

After deployment, create your organization and project via the UI at http://localhost:3000, then generate an API key from the project settings.
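
Before pointing real services at the new deployment, run a quick end-to-end smoke test. The sketch below uses the Node SDK exactly as shown in Step 3; the localhost URL and lp_xxx key are placeholders for your own deployment:

import { LogTideClient } from '@logtide-dev/sdk-node';

const client = new LogTideClient({
  apiUrl: 'http://localhost:8080',  // your LogTide API endpoint
  apiKey: 'lp_xxx'                  // project API key from the UI
});

// Send one test log, then confirm it shows up in the LogTide UI.
client.info('migration-test', 'Hello from LogTide', { step: 'smoke-test' });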

Step 3: SDK Migration

Replace the Datadog SDK with the LogTide SDK. The APIs are similar, so migration is straightforward.

Node.js Migration

Before (Datadog)

import { datadogLogs } from '@datadog/browser-logs';

datadogLogs.init({
  clientToken: 'pub_xxx',
  site: 'datadoghq.com',
  service: 'my-app',
  env: 'production',
});

datadogLogs.logger.info('User logged in', {
  userId: 123,
  email: 'user@example.com'
});

After (LogTide)

import { LogTideClient } from '@logtide-dev/sdk-node';

const client = new LogTideClient({
  apiUrl: 'http://logtide.internal:8080',
  apiKey: 'lp_xxx',
  globalMetadata: { env: 'production' }
});

client.info('my-app', 'User logged in', {
  userId: 123,
  email: 'user@example.com'
});
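
If you have many call sites, a thin adapter keeps the Datadog-style logger.info(message, context) call shape while routing to LogTide underneath, so the migration touches one file instead of every caller. A sketch using only the LogTideClient methods shown above; SERVICE and the config values are placeholders:

import { LogTideClient } from '@logtide-dev/sdk-node';

const client = new LogTideClient({
  apiUrl: 'http://logtide.internal:8080',
  apiKey: 'lp_xxx',
  globalMetadata: { env: 'production' }
});

const SERVICE = 'my-app';

// Datadog-style facade: existing calls like
// logger.info('User logged in', { userId: 123 }) keep working unchanged.
export const logger = {
  debug: (message: string, context?: object) => client.debug(SERVICE, message, context),
  info: (message: string, context?: object) => client.info(SERVICE, message, context),
  warn: (message: string, context?: object) => client.warn(SERVICE, message, context),
  error: (message: string, context?: object) => client.error(SERVICE, message, context),
};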

Python Migration

Before (Datadog)

from datadog import initialize, statsd
from ddtrace import tracer

initialize(api_key='xxx', app_key='yyy')

@tracer.wrap()
def process_request():
    statsd.increment('requests')
    # Datadog auto-instruments logs

After (LogTide)

from logtide_sdk import LogTideClient

client = LogTideClient(
    api_url='http://logtide.internal:8080',
    api_key='lp_xxx',
    global_metadata={'env': 'production'}
)

def process_request():
    client.info('api', 'Processing request')
    # Your business logic

Method Mapping

Datadog                         | LogTide
logger.debug(message, context)  | client.debug(service, message, metadata)
logger.info(message, context)   | client.info(service, message, metadata)
logger.warn(message, context)   | client.warn(service, message, metadata)
logger.error(message, context)  | client.error(service, message, metadata)
N/A                             | client.critical(service, message, metadata)

Step 4: Alert Migration

Convert your Datadog monitors to LogTide alert rules. Here's how the formats map:

Datadog Monitor

{
  "name": "High Error Rate",
  "type": "log alert",
  "query": "status:error service:api",
  "message": "Error rate exceeded",
  "options": {
    "thresholds": { "critical": 100 },
    "evaluation_delay": 60
  }
}

LogTide Alert Rule

{
  "name": "High Error Rate",
  "enabled": true,
  "service": "api",
  "level": ["error"],
  "threshold": 100,
  "timeWindow": 5,
  "emailRecipients": ["alerts@example.com"],
  "webhookUrl": "https://hooks.slack.com/..."
}

Create alert rules via the LogTide API:

curl -X POST "http://logtide.internal:8080/api/v1/alerts" \
  -H "Authorization: Bearer YOUR_SESSION_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "organizationId": "your-org-id",
    "projectId": "your-project-id",
    "name": "High Error Rate",
    "enabled": true,
    "service": "api",
    "level": ["error"],
    "threshold": 100,
    "timeWindow": 5,
    "emailRecipients": ["alerts@example.com"]
  }'
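
With many monitors, scripting the conversion is faster than hand-copying. The sketch below turns the log monitors exported in Step 1 into LogTide alert rules via the endpoint above; the query parsing is deliberately naive (it only extracts service: and status: tokens) and the timeWindow and fallback values are assumptions, so review every generated rule by hand:

import { readFileSync } from 'node:fs';

const monitors = JSON.parse(readFileSync('monitors.json', 'utf8'));

for (const m of monitors.filter((m: any) => m.type === 'log alert')) {
  // Naive extraction of "service:..." and "status:..." tokens from the
  // Datadog query string; anything more complex needs manual translation.
  const service = /service:(\S+)/.exec(m.query)?.[1] ?? 'unknown';
  const level = /status:(\S+)/.exec(m.query)?.[1] ?? 'error';

  const rule = {
    organizationId: 'your-org-id',
    projectId: 'your-project-id',
    name: m.name,
    enabled: true,
    service,
    level: [level],
    threshold: m.options?.thresholds?.critical ?? 100,  // assumed fallback
    timeWindow: 5,                                      // assumed: 5-minute window
    emailRecipients: ['alerts@example.com'],
  };

  const res = await fetch('http://logtide.internal:8080/api/v1/alerts', {
    method: 'POST',
    headers: {
      'Authorization': 'Bearer YOUR_SESSION_TOKEN',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(rule),
  });
  console.log(`${m.name}: HTTP ${res.status}`);
}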

Step 5: Parallel Ingestion (Validation)

Run both platforms in parallel for 24-48 hours to validate data consistency before cutover.

Dual Ingestion Example

import { datadogLogs } from '@datadog/browser-logs';
import { LogTideClient } from '@logtide-dev/sdk-node';

// Initialize both
datadogLogs.init({ clientToken: 'xxx', site: 'datadoghq.com' });
const logtide = new LogTideClient({
  apiUrl: 'http://logtide.internal:8080',
  apiKey: 'lp_xxx'
});

// Wrapper to send to both
type Level = 'debug' | 'info' | 'warn' | 'error';
function log(level: Level, service: string, message: string, meta?: object) {
  // Send to Datadog
  datadogLogs.logger[level](message, { service, ...meta });

  // Send to LogTide
  logtide[level](service, message, meta);
}

// Usage
log('info', 'api', 'Request processed', { userId: 123 });

Validation Checklist

  • Compare log counts in both platforms (should match within 1%; see the sketch after this list)
  • Verify search results return the same logs
  • Test alert triggers on both platforms
  • Confirm notification delivery (email, Slack, webhooks)
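
For a scripted version of the first check, something like the sketch below can tally LogTide's side and compare it against the 24-hour count from Datadog's usage dashboard. The /api/v1/logs/count endpoint and its response shape are hypothetical (this guide only documents the alerts API), so substitute whatever query interface your LogTide version exposes:

// NOTE: endpoint and response shape below are hypothetical; check your
// LogTide API reference for the real query interface.
const res = await fetch(
  'http://logtide.internal:8080/api/v1/logs/count?service=api&since=24h',
  { headers: { Authorization: 'Bearer YOUR_SESSION_TOKEN' } }
);
const { count } = await res.json();

const datadogCount = 1_234_567; // read manually from Datadog's usage page
const drift = Math.abs(count - datadogCount) / datadogCount;
console.log(`Drift: ${(drift * 100).toFixed(2)}%`, drift < 0.01 ? 'OK' : 'investigate');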

Step 6: Cutover & Cleanup

Once validated, complete the migration:

  1. Update production configs to use the LogTide SDK only (remove the Datadog SDK)
  2. Remove the Datadog Agent from all hosts (if you rely on it for infrastructure monitoring, line up an alternative first)
  3. Update team runbooks and documentation to reference LogTide URLs
  4. Cancel your Datadog subscription after the retention period expires

Concept Mapping

Datadog Term  | LogTide Equivalent       | Notes
Organization  | Organization             | 1:1 mapping
Index         | Project                  | Logs are scoped to projects
Service       | Service                  | 1:1 mapping
Log Pipeline  | N/A (automatic)          | LogTide auto-parses JSON
Monitor       | Alert Rule               | Similar functionality
Dashboard     | SIEM Dashboard           | Security-focused dashboards
API Key       | API Key (per project)    | Prefix: lp_
Cloud SIEM    | Sigma Rules + Incidents  | Included at no extra cost

Common Issues

Logs not appearing in LogTide

  • Verify the API key is valid and has write permissions
  • Check that the API URL is reachable from your application
  • Ensure the Content-Type header is application/json
  • Check rate limits (default: 200 req/min per API key)

Timestamp mismatch

LogTide expects ISO 8601 timestamps in UTC, while Datadog SDKs often send Unix epoch values. Make sure your SDK sends ISO format, e.g. 2025-01-15T12:00:00Z; a conversion sketch follows below.
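
If a producer emits epoch timestamps, converting in JavaScript is a one-liner (this assumes milliseconds; multiply by 1000 first if your source uses seconds):

// Unix epoch milliseconds -> ISO 8601 UTC string
const epochMs = 1736942400000;
const iso = new Date(epochMs).toISOString(); // "2025-01-15T12:00:00.000Z"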

Missing service name

In Datadog, the service is often auto-detected. In LogTide, you must pass the service name explicitly as the first argument: client.info('my-service', 'message')

Next Steps