LogTide

Python SDK

Official Python SDK for LogTide with full type-hint support, automatic batching, retry logic, and middleware for Flask, Django, and FastAPI.

Installation

pip install logtide-sdk

# or with poetry
poetry add logtide-sdk

Quick Start

from logtide_sdk import LogTideClient

client = LogTideClient(
  api_url="http://localhost:8080",
  api_key="lp_your_api_key_here"
)

# Send logs
client.info("api-gateway", "Server started", {"port": 3000})
client.error("database", "Connection failed", Exception("Timeout"))

# Graceful shutdown
import atexit
atexit.register(client.close)

Features

  • ✅ Automatic batching with configurable size and interval
  • ✅ Retry logic with exponential backoff
  • ✅ Circuit breaker pattern for fault tolerance
  • ✅ Max buffer size with drop policy to prevent memory leaks
  • ✅ Query API for searching and filtering logs
  • ✅ Live tail with Server-Sent Events (SSE)
  • ✅ Trace ID context for distributed tracing
  • ✅ Global metadata added to all logs
  • ✅ Structured error serialization
  • ✅ Internal metrics (logs sent, errors, latency)
  • ✅ Flask, Django & FastAPI middleware for auto-logging HTTP requests
  • ✅ Full Python 3.8+ support with type hints
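
The retry schedule implied by the defaults can be sketched as follows. This is an illustrative model of exponential backoff, not the SDK's internal code; it assumes the delay simply doubles after each failed attempt, starting from retry_delay.

```python
from typing import List

def backoff_delays(max_retries: int = 3, retry_delay: float = 1.0) -> List[float]:
    """Delay before each retry attempt, doubling every time (exponential backoff)."""
    return [retry_delay * (2 ** attempt) for attempt in range(max_retries)]

# With the documented defaults (max_retries=3, retry_delay=1.0),
# the client would wait roughly this long between attempts:
print(backoff_delays())  # [1.0, 2.0, 4.0]
```

Once failures exceed circuit_breaker_threshold, the circuit breaker skips sends entirely until circuit_breaker_timeout elapses, rather than retrying forever.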

Configuration

client = LogTideClient(
  # Required
  api_url="http://localhost:8080",
  api_key="lp_your_api_key",
  
  # Optional - Performance
  batch_size=100,              # Max logs per batch (default: 100)
  batch_interval=5.0,          # Flush interval in seconds (default: 5.0)
  max_buffer_size=10000,       # Max logs in buffer (default: 10000)
  
  # Optional - Reliability
  max_retries=3,               # Max retry attempts (default: 3)
  retry_delay=1.0,             # Initial retry delay in seconds (default: 1.0)
  circuit_breaker_threshold=5, # Failures before circuit opens (default: 5)
  circuit_breaker_timeout=60,  # Circuit reset timeout in seconds (default: 60)
  
  # Optional - Metadata
  global_metadata={            # Added to all logs
      "environment": "production",
      "version": "1.0.0"
  },
  
  # Optional - Debug
  debug=False                  # Enable debug logging (default: False)
)
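
In practice you rarely want credentials hardcoded. A common pattern is to build the constructor arguments from environment variables; the variable names below (LOGTIDE_API_URL, LOGTIDE_API_KEY, etc.) are a suggested convention, not something the SDK defines.

```python
import os

def logtide_config_from_env() -> dict:
    """Build keyword arguments for LogTideClient from the environment.

    Variable names are a suggested convention, not SDK-defined.
    """
    return {
        "api_url": os.environ.get("LOGTIDE_API_URL", "http://localhost:8080"),
        "api_key": os.environ["LOGTIDE_API_KEY"],  # required; raises KeyError if unset
        "batch_size": int(os.environ.get("LOGTIDE_BATCH_SIZE", "100")),
        "debug": os.environ.get("LOGTIDE_DEBUG", "").lower() in ("1", "true"),
    }

# client = LogTideClient(**logtide_config_from_env())
```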

Logging Methods

Basic Logging

# Log levels: debug, info, warn, error, critical
client.debug("service-name", "Debug message", {"detail": "value"})
client.info("api-gateway", "Request received", {"method": "GET", "path": "/users"})
client.warn("cache", "Cache miss", {"key": "user:123"})
client.error("database", "Query failed", {"query": "SELECT *"})
client.critical("system", "Out of memory", {"used": "95%"})
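
As the Quick Start shows, client.error() also accepts an exception object directly, which the SDK serializes into structured fields. A minimal sketch of what such serialization could look like (illustrative only, not the SDK's implementation):

```python
import traceback

def serialize_exception(exc: BaseException) -> dict:
    """Turn an exception into a JSON-friendly dict: type, message, traceback lines."""
    return {
        "type": type(exc).__name__,
        "message": str(exc),
        "traceback": traceback.format_exception(type(exc), exc, exc.__traceback__),
    }

try:
    1 / 0
except ZeroDivisionError as exc:
    payload = serialize_exception(exc)
    # client.error("database", "Query failed", payload)
```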

Context Manager for Trace IDs

# Scoped trace ID
with client.trace_context("550e8400-e29b-41d4-a716-446655440000"):
  client.info("api", "Processing request")
  client.info("db", "Query executed")
  # All logs in this block have the same trace_id
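
Conceptually, a scoped trace ID like this can be built on contextvars, which propagates correctly across threads and asyncio tasks. This is an illustrative sketch of the pattern, not the SDK's source:

```python
import contextvars
import uuid
from contextlib import contextmanager

# Holds the trace ID for the current execution context
# (safe across threads and asyncio tasks, unlike a plain global).
_trace_id = contextvars.ContextVar("trace_id", default=None)

@contextmanager
def trace_context(trace_id=None):
    """Bind a trace ID for the duration of the with-block; generate one if omitted."""
    token = _trace_id.set(trace_id or str(uuid.uuid4()))
    try:
        yield _trace_id.get()
    finally:
        _trace_id.reset(token)  # restore whatever was set before

def current_trace_id():
    return _trace_id.get()

with trace_context("550e8400-e29b-41d4-a716-446655440000") as tid:
    # every log call in this block can read the bound trace ID
    assert current_trace_id() == tid
assert current_trace_id() is None  # restored on exit
```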

Middleware Integration

Flask Middleware

from flask import Flask
from logtide_sdk.middleware import FlaskLogTideMiddleware

app = Flask(__name__)

# Add LogTide middleware
app.wsgi_app = FlaskLogTideMiddleware(
  app.wsgi_app,
  client=client,
  include_headers=True,
  include_response_time=True
)

@app.route("/users")
def get_users():
  return {"users": []}

# Logs automatically sent for each request
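
Under the hood, a WSGI middleware like this wraps the app and logs one entry per request. The sketch below shows the general shape (class and parameter names here are illustrative, not the SDK's actual implementation):

```python
import time

class RequestLoggingMiddleware:
    """Minimal WSGI wrapper: logs method, path, status, and duration per request."""

    def __init__(self, wsgi_app, client, service="flask-app"):
        self.wsgi_app = wsgi_app
        self.client = client
        self.service = service

    def __call__(self, environ, start_response):
        start = time.perf_counter()
        captured = {}

        def capturing_start_response(status, headers, exc_info=None):
            captured["status"] = status  # intercept the status line
            return start_response(status, headers, exc_info)

        response = self.wsgi_app(environ, capturing_start_response)
        self.client.info(self.service, "request handled", {
            "method": environ.get("REQUEST_METHOD"),
            "path": environ.get("PATH_INFO"),
            "status": captured.get("status"),
            "duration_ms": round((time.perf_counter() - start) * 1000, 2),
        })
        return response
```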

Django Middleware

# settings.py
MIDDLEWARE = [
  # ... other middleware
  'logtide_sdk.middleware.DjangoLogTideMiddleware',
]

# Configure client globally
from logtide_sdk import LogTideClient

LOGTIDE_CLIENT = LogTideClient(
  api_url="http://localhost:8080",
  api_key="lp_your_api_key"
)

FastAPI Middleware

from fastapi import FastAPI
from logtide_sdk.middleware import FastAPILogTideMiddleware

app = FastAPI()

# Add LogTide middleware
app.add_middleware(
  FastAPILogTideMiddleware,
  client=client,
  include_headers=True,
  include_response_time=True
)

@app.get("/users")
async def get_users():
  return {"users": []}

# Logs automatically sent for each request

Query API

# Search logs
result = client.query(
  service="api-gateway",
  level="error",
  from_time="2025-01-15T00:00:00Z",
  to_time="2025-01-15T23:59:59Z",
  q="timeout",  # Full-text search
  limit=100
)

print(f"Found {result['total']} error logs")
for log in result['logs']:
  print(f"[{log['time']}] {log['message']}")
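
Since from_time and to_time are ISO 8601 UTC strings, a small stdlib helper can build a rolling window instead of hardcoding timestamps. This helper is a suggestion, not part of the SDK:

```python
from datetime import datetime, timedelta, timezone

def last_hours_window(hours: int = 24):
    """Return (from_time, to_time) as ISO 8601 UTC strings for the last N hours."""
    now = datetime.now(timezone.utc)
    fmt = "%Y-%m-%dT%H:%M:%SZ"
    return (now - timedelta(hours=hours)).strftime(fmt), now.strftime(fmt)

from_time, to_time = last_hours_window(24)
# result = client.query(service="api-gateway", level="error",
#                       from_time=from_time, to_time=to_time, limit=100)
```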

Best Practices

1. Always Close on Shutdown
   Use atexit.register(client.close) or call client.close() in shutdown handlers to flush remaining logs.

2. Use Global Metadata
   Add environment, version, and other common fields as global metadata instead of repeating them in every log.

3. Use Context Managers for Tracing
   Leverage with client.trace_context() to automatically add trace IDs to related logs.