Vector Integration
Forward logs from Vector, Datadog's open-source observability pipeline, to LogTide. Works with any Vector source — syslog, Docker, files, Kafka, and more.
Tested with Vector 0.53.0
This guide has been tested with Vector 0.53.0 on both Linux and Docker deployments.
Vector is a great alternative when omhttp isn't available
in your rsyslog installation (common on Debian 13+).
Overview
Vector is a high-performance observability data pipeline that can collect, transform, and route logs to LogTide. It supports 100+ sources and can replace Fluent Bit, Logstash, or Fluentd in your stack.
Architecture
Vector sits between your log sources and LogTide, transforming logs into the expected format:
LogTide provides two ingestion endpoints. Use `/api/v1/ingest/single` for Vector,
as it handles automatic field normalization and doesn't require wrapping logs in a `{"logs": [...]}` object.
| Endpoint | Best For | Auto-normalize |
|---|---|---|
| `/api/v1/ingest/single` | Vector, Fluent Bit, collectors | Yes — auto-generates `time`, normalizes `level` |
| `/api/v1/ingest` | SDKs, custom integrations | Partial — accepts arrays, but strict on standard format |
Quick Start
Minimal Vector config to forward demo logs to LogTide. Replace `<YOUR_API_KEY>` with
your project API key and `<LOGTIDE_URL>` with your instance URL.
vector.yaml

```yaml
sources:
  demo:
    type: demo_logs
    format: syslog
    count: 5

transforms:
  format_logs:
    type: remap
    inputs:
      - demo
    source: |
      .message = string!(.message)
      .level = downcase(string(.severity) ?? "info")
      .service = string(.appname) ?? string(.hostname) ?? "unknown"
      del(.severity)
      del(.appname)
      del(.hostname)
      del(.facility)
      del(.procid)
      del(.source_type)
      del(.timestamp)
      del(.version)
      del(.msgid)
      del(.host)

sinks:
  logtide:
    type: http
    inputs:
      - format_logs
    uri: https://<LOGTIDE_URL>/api/v1/ingest/single
    encoding:
      codec: json
    framing:
      method: newline_delimited
    request:
      headers:
        X-API-Key: "<YOUR_API_KEY>"
        Content-Type: "application/x-ndjson"
```

Test it
Run `vector -c vector.yaml` — you should see
"Healthcheck passed" and no errors. The demo source will send 5 test logs and exit.
Key configuration details
| Setting | Value | Why |
|---|---|---|
| `codec` | `json` | Each event encoded as JSON. Note: `ndjson` is not a valid Vector codec. |
| `framing.method` | `newline_delimited` | Sends multiple events as newline-separated JSON (NDJSON) in a single HTTP request. |
| `Content-Type` | `application/x-ndjson` | Tells LogTide to parse the body as newline-delimited JSON. |
| endpoint | `/api/v1/ingest/single` | Accepts individual log objects without `{"logs":[...]}` wrapper. Auto-generates `time` if missing. |
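With `codec: json` plus `newline_delimited` framing, the request body is one JSON object per line with no surrounding array or wrapper. A quick way to see the shape on the wire (field values here are made up):

```shell
# Emit two events in the NDJSON shape Vector's HTTP sink produces:
# one JSON object per line, no array, no {"logs": ...} wrapper.
printf '%s\n' \
  '{"message":"request served","service":"api","level":"info"}' \
  '{"message":"disk almost full","service":"node","level":"warn"}'
```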
Syslog Forwarding
The most common use case: aggregate syslogs on a central rsyslog host, then forward
to LogTide via Vector. This is especially useful when omhttp is not available
in your rsyslog package (common on Debian 13+).
vector.yaml — Syslog to LogTide

```yaml
sources:
  rsyslog:
    type: syslog
    address: 0.0.0.0:514
    mode: tcp

transforms:
  format_logs:
    type: remap
    inputs:
      - rsyslog
    source: |
      # Extract and normalize fields
      .message = string!(.message)
      .level = downcase(string(.severity) ?? "info")
      .service = string(.appname) ?? string(.hostname) ?? "syslog"

      # Preserve useful metadata
      .metadata = {}
      .metadata.hostname = string(.hostname) ?? null
      .metadata.facility = string(.facility) ?? null
      .metadata.procid = string(.procid) ?? null

      # Remove raw syslog fields (already extracted above)
      del(.severity)
      del(.appname)
      del(.hostname)
      del(.facility)
      del(.procid)
      del(.source_type)
      del(.timestamp)
      del(.version)
      del(.msgid)
      del(.host)

sinks:
  logtide:
    type: http
    inputs:
      - format_logs
    uri: https://<LOGTIDE_URL>/api/v1/ingest/single
    encoding:
      codec: json
    framing:
      method: newline_delimited
    request:
      headers:
        X-API-Key: "<YOUR_API_KEY>"
        Content-Type: "application/x-ndjson"
```
Common Pitfall: Input Name Mismatch
The `inputs` field in transforms and sinks must match
the exact name of your source. If your source is named `rsyslog`,
use `inputs: [rsyslog]` — not `inputs: [syslog]`.
A mismatch causes Vector to immediately exit with "All sources have finished".
Configure rsyslog to forward to Vector
On your rsyslog aggregation host, forward all logs to Vector via TCP:
```
# /etc/rsyslog.d/99-forward-to-vector.conf
# Forward all logs to Vector on localhost:514 via TCP
*.* @@127.0.0.1:514
```

Use `@@` for TCP (recommended) or `@` for UDP.
Then restart rsyslog: `sudo systemctl restart rsyslog`
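On busier aggregation hosts, the legacy `@@` syntax can be swapped for an explicit `omfwd` action with a queue, so rsyslog buffers locally while Vector restarts. A sketch — the queue sizes are illustrative, not recommendations:

```
# /etc/rsyslog.d/99-forward-to-vector.conf — buffered variant
*.* action(type="omfwd" target="127.0.0.1" port="514" protocol="tcp"
           queue.type="LinkedList" queue.size="10000"
           action.resumeRetryCount="-1")
```

`action.resumeRetryCount="-1"` retries forever instead of dropping messages when the TCP connection is down.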
Docker Logs
Collect logs from Docker containers using Vector's docker_logs source:
```yaml
sources:
  docker:
    type: docker_logs
    docker_host: unix:///var/run/docker.sock

transforms:
  format_docker:
    type: remap
    inputs:
      - docker
    source: |
      .service = string(.container_name) ?? "unknown"
      .level = downcase(string(.level) ?? "info")
      .message = string!(.message)
      .metadata = {}
      .metadata.container_id = string(.container_id) ?? null
      .metadata.container_name = string(.container_name) ?? null
      .metadata.image = string(.image) ?? null
      del(.container_id)
      del(.container_name)
      del(.container_created_at)
      del(.image)
      del(.label)
      del(.source_type)
      del(.stream)
      del(.timestamp)
      del(.host)

sinks:
  logtide:
    type: http
    inputs:
      - format_docker
    uri: https://<LOGTIDE_URL>/api/v1/ingest/single
    encoding:
      codec: json
    framing:
      method: newline_delimited
    request:
      headers:
        X-API-Key: "<YOUR_API_KEY>"
        Content-Type: "application/x-ndjson"
```
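The `docker_logs` source needs read access to the Docker socket. If you run Vector itself as a container, a minimal compose service might look like this — the image tag and mount paths are assumptions to adapt, not requirements:

```yaml
services:
  vector:
    image: timberio/vector:0.53.0-debian
    volumes:
      # Vector config from the host
      - ./vector.yaml:/etc/vector/vector.yaml:ro
      # Docker socket so docker_logs can read container logs
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: unless-stopped
```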
File-based Logs
Tail log files and forward them to LogTide:
```yaml
sources:
  app_logs:
    type: file
    include:
      - /var/log/myapp/*.log
    read_from: beginning

transforms:
  format_app:
    type: remap
    inputs:
      - app_logs
    source: |
      .service = "myapp"
      .level = "info"
      .message = string!(.message)

      # Parse JSON logs if applicable
      parsed, err = parse_json(.message)
      if err == null {
        .level = downcase(string(parsed.level) ?? "info")
        .service = string(parsed.service) ?? .service
        .message = string(parsed.message) ?? string(parsed.msg) ?? .message
        .metadata = parsed
      }

      del(.source_type)
      del(.timestamp)
      del(.host)
      del(.file)

sinks:
  logtide:
    type: http
    inputs:
      - format_app
    uri: https://<LOGTIDE_URL>/api/v1/ingest/single
    encoding:
      codec: json
    framing:
      method: newline_delimited
    request:
      headers:
        X-API-Key: "<YOUR_API_KEY>"
        Content-Type: "application/x-ndjson"
```
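The transform above degrades gracefully: a plain-text line keeps the defaults, while a JSON line gets its fields lifted out. A sample input file you could point the `file` source at (written to `/tmp` here; adjust the `include` glob to match):

```shell
# One plain-text line (stays level "info", service "myapp") and one
# JSON line (becomes level "error", service "billing" after the remap).
printf '%s\n' \
  'plain text startup line' \
  '{"level":"ERROR","service":"billing","message":"charge failed"}' \
  > /tmp/sample-myapp.log
cat /tmp/sample-myapp.log
```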
Data Mapping
LogTide's /api/v1/ingest/single endpoint auto-normalizes incoming fields.
Here's how Vector fields map to LogTide:
| LogTide Field | Required | Auto-generated | Notes |
|---|---|---|---|
| `message` | Yes | No | Falls back to `log` field if `message` is missing |
| `service` | Yes | Yes | Falls back to `container_name`, K8s metadata, or `"unknown"` |
| `level` | Yes | Yes | Auto-normalized from syslog priority, numeric (Pino), or string. Defaults to `info` |
| `time` | No | Yes | Auto-set to current time if missing. Accepts ISO 8601 or epoch seconds (`date` field) |
| `metadata` | No | No | JSON object with any extra fields (hostname, facility, K8s labels, etc.) |
| `trace_id` | No | No | For distributed tracing correlation |
Level Normalization
LogTide accepts 5 log levels. Incoming levels are automatically mapped:
| LogTide Level | Accepted Inputs |
|---|---|
| `debug` | `debug`, `trace`, `verbose` |
| `info` | `info`, `notice`, `30` |
| `warn` | `warn`, `warning`, `40` |
| `error` | `error`, `err`, `50` |
| `critical` | `critical`, `fatal`, `emerg`, `60` |
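The mapping can be sketched as a small shell function — an illustration of the table, not LogTide's actual implementation:

```shell
# Map an incoming level token to one of LogTide's five levels.
# Unknown or missing values fall back to "info".
normalize_level() {
  case "$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')" in
    debug|trace|verbose)       echo debug ;;
    info|notice|30)            echo info ;;
    warn|warning|40)           echo warn ;;
    error|err|50)              echo error ;;
    critical|fatal|emerg|60)   echo critical ;;
    *)                         echo info ;;
  esac
}

normalize_level WARNING   # prints: warn
normalize_level 50        # prints: error
```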
Batch Configuration
Vector batches multiple events into a single HTTP request by default.
You can tune this for your throughput needs:
```yaml
sinks:
  logtide:
    type: http
    inputs:
      - format_logs
    uri: https://<LOGTIDE_URL>/api/v1/ingest/single
    encoding:
      codec: json
    framing:
      method: newline_delimited
    batch:
      max_bytes: 1049000   # ~1MB per request
      max_events: 1000     # up to 1000 logs per request
      timeout_secs: 1      # flush at least every second
    request:
      concurrency: 10        # up to 10 parallel requests
      rate_limit_num: 200    # max 200 requests per second
      headers:
        X-API-Key: "<YOUR_API_KEY>"
        Content-Type: "application/x-ndjson"
```
Self-hosted URL
For LogTide Cloud, use `https://api.logtide.dev/api/v1/ingest/single`.
For self-hosted instances, use `http://<YOUR_HOST>:8080/api/v1/ingest/single`.
Troubleshooting
"All sources have finished" — Vector exits immediately
The `inputs` field in your transform or sink doesn't match
your source name. Check for typos:
```yaml
sources:
  rsyslog:            # ← source named "rsyslog"
    type: syslog

transforms:
  format_logs:
    inputs:
      - rsyslog       # ← must match exactly (not "syslog" or "demo_syslog")
```
"Http status: 400 Bad Request"
The payload format doesn't match what the endpoint expects. Most common causes:
- Using `/api/v1/ingest` with raw events — this endpoint expects `{"logs": [...]}`. Use `/api/v1/ingest/single` instead.
- Missing `message` field — every log must have a `message`.
- Wrapping with `. = { "logs": [ . ] }` in VRL — Vector's HTTP sink batches events into a JSON array, turning your payload into `[{"logs":[...]}, {"logs":[...]}]`, which doesn't match the expected format. Use `/api/v1/ingest/single` without wrapping.
"Http status: 404 Not Found"
Wrong URL path. Make sure you include the /api prefix:
- Correct: `https://api.logtide.dev/api/v1/ingest/single`
- Wrong: `https://api.logtide.dev/v1/ingest/single`
"unknown variant `ndjson`"
Vector doesn't have an `ndjson` codec. Use `json` with
`framing.method: newline_delimited` to achieve the same result.
"Http status: 401 Unauthorized"
API key is missing or invalid. Verify:
- The `X-API-Key` header is set in `request.headers`
- The key starts with `lp_`
- The key hasn't been revoked in your project settings
- You're using a `write` or `full` API key (not read-only)
Debug with a file sink
Add a file sink alongside your LogTide sink to inspect the transformed data:
```yaml
sinks:
  # Debug: write transformed logs to a file
  debug_file:
    type: file
    inputs:
      - format_logs
    path: /tmp/vector-debug.json
    encoding:
      codec: json
      json:
        pretty: true

  # Production: send to LogTide
  logtide:
    type: http
    inputs:
      - format_logs
    # ...
```