LogTide

OpenTelemetry Integration

Native OTLP Support

LogTide supports native OpenTelemetry Protocol (OTLP) for log ingestion, allowing you to send logs from any OpenTelemetry-instrumented application.

Overview

The OTLP endpoint accepts logs in both JSON and Protobuf formats, with automatic format detection, making it compatible with all OpenTelemetry SDKs and collectors.

Endpoint Path: /v1/otlp/logs
Content Types: application/json, application/x-protobuf
Authentication: X-API-Key header

Full Endpoint URLs

LogTide Cloud: https://api.logtide.dev/v1/otlp/logs
Self-hosted (default): http://your-server:8080/v1/otlp/logs
Self-hosted (with TLS): https://your-server:443/v1/otlp/logs

Note: The default backend port is 8080 (HTTP). If you're using a reverse proxy with TLS, use port 443 (HTTPS).
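The endpoint can be exercised directly with a hand-built OTLP/JSON payload. A minimal sketch in Python using only the standard library; the server URL, API key, and service name are placeholders, not real values:

```python
import json
import time
import urllib.request

# Minimal OTLP/JSON payload: resourceLogs -> scopeLogs -> logRecords.
payload = {
    "resourceLogs": [{
        "resource": {
            "attributes": [{
                "key": "service.name",
                "value": {"stringValue": "my-service"},
            }],
        },
        "scopeLogs": [{
            "logRecords": [{
                "timeUnixNano": str(time.time_ns()),
                "severityNumber": 9,  # INFO
                "severityText": "INFO",
                "body": {"stringValue": "User logged in successfully"},
            }],
        }],
    }],
}

req = urllib.request.Request(
    "http://your-server:8080/v1/otlp/logs",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "X-API-Key": "your-api-key-here",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment once URL and key point at a real deployment
```

Note that OTLP requires the nested resourceLogs → scopeLogs → logRecords structure; a flat JSON object will typically be rejected as an invalid request (400).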

Data Mapping

OTLP log records are automatically mapped to LogTide's format:

OTLP Field              LogTide Field   Notes
timeUnixNano            time            Converted to ISO 8601
severityNumber          level           Mapped to 5 levels (see below)
body.stringValue        message         Log message content
traceId                 trace_id        Converted to UUID format
spanId                  span_id         16-character hex string
attributes              metadata        Stored as JSON object
resource.service.name   service         Extracted from resource
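The timestamp and trace ID conversions above can be sketched in Python. These helpers are illustrative only; the function names are ours, not part of any LogTide API:

```python
import uuid
from datetime import datetime, timezone

def otlp_time_to_iso(time_unix_nano: int) -> str:
    """Convert an OTLP timeUnixNano value to an ISO 8601 timestamp (UTC)."""
    return datetime.fromtimestamp(time_unix_nano / 1e9, tz=timezone.utc).isoformat()

def otlp_trace_id_to_uuid(trace_id_hex: str) -> str:
    """Format a 32-character OTLP trace ID as a UUID string."""
    return str(uuid.UUID(hex=trace_id_hex))

print(otlp_time_to_iso(1_700_000_000_000_000_000))
# 2023-11-14T22:13:20+00:00
print(otlp_trace_id_to_uuid("0123456789abcdef0123456789abcdef"))
# 01234567-89ab-cdef-0123-456789abcdef
```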

Severity Mapping

OTLP severity numbers (1-24; 0 means unspecified) are mapped to LogTide levels:

OTLP Severity   LogTide Level   OTLP Text
1-8             debug           TRACE/DEBUG
9-12            info            INFO
13-16           warn            WARN
17-20           error           ERROR
21-24           critical        FATAL
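The table amounts to a simple range lookup. A sketch in Python for reference (otlp_severity_to_level is our name, not a LogTide function):

```python
def otlp_severity_to_level(severity_number: int) -> str:
    """Map an OTLP severityNumber (1-24) to a LogTide level per the table above."""
    if not 1 <= severity_number <= 24:
        raise ValueError(f"severity number out of range: {severity_number}")
    if severity_number <= 8:
        return "debug"      # TRACE/DEBUG
    if severity_number <= 12:
        return "info"
    if severity_number <= 16:
        return "warn"
    if severity_number <= 20:
        return "error"
    return "critical"       # FATAL

print(otlp_severity_to_level(9))   # info
print(otlp_severity_to_level(21))  # critical
```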

Node.js Example

Install the required packages:

npm install @opentelemetry/sdk-node @opentelemetry/api-logs @opentelemetry/sdk-logs @opentelemetry/exporter-logs-otlp-http

Configure the OpenTelemetry SDK:

import { NodeSDK } from '@opentelemetry/sdk-node';
import { OTLPLogExporter } from '@opentelemetry/exporter-logs-otlp-http';
import { BatchLogRecordProcessor } from '@opentelemetry/sdk-logs';
import { Resource } from '@opentelemetry/resources';
import { ATTR_SERVICE_NAME } from '@opentelemetry/semantic-conventions';
import { logs, SeverityNumber } from '@opentelemetry/api-logs';

// Configure the OTLP exporter
// Self-hosted: http://your-server:8080/v1/otlp/logs
// Cloud: https://api.logtide.dev/v1/otlp/logs
const logExporter = new OTLPLogExporter({
  url: 'http://your-server:8080/v1/otlp/logs',
  headers: {
    'X-API-Key': 'your-api-key-here',
  },
});

// Initialize the SDK
const sdk = new NodeSDK({
  resource: new Resource({
    [ATTR_SERVICE_NAME]: 'my-service',
  }),
  logRecordProcessor: new BatchLogRecordProcessor(logExporter),
});

sdk.start();

// Get a logger and emit logs
const logger = logs.getLogger('my-logger');

logger.emit({
  severityNumber: SeverityNumber.INFO,
  severityText: 'INFO',
  body: 'User logged in successfully',
  attributes: {
    'user.id': '12345',
    'user.email': '[email protected]',
  },
});

Python Example

Install the required packages:

pip install opentelemetry-sdk opentelemetry-exporter-otlp-proto-http

Configure the OpenTelemetry SDK:

from opentelemetry import _logs
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.exporter.otlp.proto.http._log_exporter import OTLPLogExporter
from opentelemetry.sdk.resources import Resource, SERVICE_NAME
import logging

# Configure the resource
resource = Resource.create({
  SERVICE_NAME: "my-python-service"
})

# Configure the OTLP exporter
# Self-hosted: http://your-server:8080/v1/otlp/logs
# Cloud: https://api.logtide.dev/v1/otlp/logs
exporter = OTLPLogExporter(
  endpoint="http://your-server:8080/v1/otlp/logs",
  headers={"X-API-Key": "your-api-key-here"},
)

# Set up the logger provider
logger_provider = LoggerProvider(resource=resource)
logger_provider.add_log_record_processor(BatchLogRecordProcessor(exporter))
_logs.set_logger_provider(logger_provider)

# Attach to Python's logging module
handler = LoggingHandler(
  level=logging.DEBUG,
  logger_provider=logger_provider,
)

logging.getLogger().addHandler(handler)
logging.getLogger().setLevel(logging.DEBUG)

# Now all logs will be sent to LogTide
logging.info("Application started", extra={"user.id": "12345"})
logging.warning("High memory usage", extra={"memory.percent": 85})
logging.error("Database connection failed", extra={"db.host": "localhost"})

Go Example

Install the required packages:

go get go.opentelemetry.io/otel
go get go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp
go get go.opentelemetry.io/otel/sdk/log

Configure the OpenTelemetry SDK:

package main

import (
  "context"
  "log"

  "go.opentelemetry.io/otel/exporters/otlp/otlplog/otlploghttp"
  "go.opentelemetry.io/otel/log/global"
  sdklog "go.opentelemetry.io/otel/sdk/log"
  "go.opentelemetry.io/otel/sdk/resource"
  semconv "go.opentelemetry.io/otel/semconv/v1.24.0"
)

func main() {
  ctx := context.Background()

  // Create the OTLP exporter
  // Self-hosted: your-server:8080 (with WithInsecure())
  // Cloud: api.logtide.dev (HTTPS by default)
  exporter, err := otlploghttp.New(ctx,
      otlploghttp.WithEndpoint("your-server:8080"),
      otlploghttp.WithURLPath("/v1/otlp/logs"),
      otlploghttp.WithInsecure(), // Remove for HTTPS
      otlploghttp.WithHeaders(map[string]string{
          "X-API-Key": "your-api-key-here",
      }),
  )
  if err != nil {
      log.Fatalf("failed to create exporter: %v", err)
  }

  // Create resource
  res, _ := resource.New(ctx,
      resource.WithAttributes(
          semconv.ServiceName("my-go-service"),
      ),
  )

  // Create logger provider
  provider := sdklog.NewLoggerProvider(
      sdklog.WithProcessor(sdklog.NewBatchProcessor(exporter)),
      sdklog.WithResource(res),
  )
  defer provider.Shutdown(ctx)

  global.SetLoggerProvider(provider)
}

OpenTelemetry Collector

You can use the OpenTelemetry Collector to aggregate logs from multiple services before forwarding them to LogTide.

# otel-collector-config.yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:
    timeout: 1s
    send_batch_size: 100

exporters:
  otlphttp/logtide:
    # Self-hosted: http://your-server:8080
    # Cloud: https://api.logtide.dev
    endpoint: http://your-server:8080
    headers:
      X-API-Key: your-api-key-here
    tls:
      insecure: true  # Set to false for HTTPS

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/logtide]

Docker Compose configuration:

version: '3.8'

services:
  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    command: ['--config=/etc/otel-collector-config.yaml']
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - '4317:4317'  # OTLP gRPC
      - '4318:4318'  # OTLP HTTP

Fluent Bit Integration

Fluent Bit can forward logs to LogTide using the OpenTelemetry output plugin. Both JSON and Protobuf formats are supported:

[SERVICE]
  Flush        1
  Log_Level    info

[INPUT]
  Name         tail
  Path         /var/log/app/*.log
  Tag          app.*

[OUTPUT]
  Name         opentelemetry
  Match        *
  # Self-hosted (HTTP): Host your-server, Port 8080, Tls Off
  # Cloud (HTTPS): Host api.logtide.dev, Port 443, Tls On
  Host         your-server
  Port         8080
  Uri          /v1/otlp/logs
  Log_response_payload True
  Tls          Off
  Header       X-API-Key your-api-key-here

Note: LogTide supports both Protobuf (default) and JSON encoding. To use JSON instead, add Logs_encoding json to the OUTPUT section.

Trace Correlation

When sending logs with trace context, LogTide automatically extracts and indexes trace_id and span_id fields. This enables:

  • Trace-to-logs correlation: click on a trace ID to see all related logs
  • Distributed tracing: follow requests across multiple services
  • Context filtering: search logs by trace ID or span ID

import { trace, context } from '@opentelemetry/api';
import { logs, SeverityNumber } from '@opentelemetry/api-logs';

const tracer = trace.getTracer('my-tracer');
const logger = logs.getLogger('my-logger');

// Create a span
const span = tracer.startSpan('process-request');

// Log within the span context - trace_id is auto-propagated
context.with(trace.setSpan(context.active(), span), () => {
  logger.emit({
    severityNumber: SeverityNumber.INFO,
    body: 'Processing user request',
    attributes: { 'request.id': 'req-123' },
  });
});

span.end();

Troubleshooting

Response Codes

Code Meaning
200 Success
400 Invalid request format
401 Missing or invalid API key
429 Rate limit exceeded
500 Server error

Common Issues

Logs not appearing
  • Check that your API key is valid and has ingestion permissions
  • Verify you're sending to /v1/otlp/logs
  • Both application/json and application/x-protobuf are supported
  • Check rate limits (default: 200 req/min per API key)

Empty logs appearing
  • Ensure your log records have a body field with content; body.stringValue is used as the message.

Service name showing as "unknown"
  • Set the service.name resource attribute in your SDK configuration.

Enable Debug Logging

Enable debug logging to troubleshoot ingestion issues:

# LogTide Backend - Enable OTLP debug logging
# Shows raw protobuf structure for debugging parsing issues
OTLP_DEBUG=true

# OpenTelemetry SDK - Node.js
export OTEL_LOG_LEVEL=debug

# OpenTelemetry SDK - Python
import logging
logging.basicConfig(level=logging.DEBUG)

When OTLP_DEBUG=true is set on the LogTide backend, it will log the raw protobuf structure of incoming log records, which helps diagnose parsing or field mapping issues.

Migration from Custom SDKs

If you're currently using LogTide's custom SDKs, you can migrate to OpenTelemetry for standardized instrumentation:

Custom SDK:

import { LogTideClient } from '@logtide-dev/sdk-node';

const client = new LogTideClient({
  apiKey: 'your-key'
});

client.info('api', 'User logged in', {
  userId: '123'
});

OpenTelemetry:

import { logs, SeverityNumber } from '@opentelemetry/api-logs';

const logger = logs.getLogger('my-logger');

logger.emit({
  severityNumber: SeverityNumber.INFO,
  body: 'User logged in',
  attributes: { 'user.id': '123' },
});
Benefits of OpenTelemetry
  • Vendor-neutral: Switch backends without code changes
  • Auto-instrumentation: Automatic logging for popular frameworks
  • Trace correlation: Built-in distributed tracing support
  • Large ecosystem: Extensive integrations and community support