n8n
Complete observability for n8n workflow automation
Set up end-to-end observability for n8n workflow automation with Parseable using OpenTelemetry.
Overview
Integrate n8n with Parseable to:
- Workflow Tracing - Track every workflow and node execution
- Performance Metrics - Monitor durations, error rates, and token usage
- Structured Logging - Capture logs with Winston + OpenTelemetry
- AI Agent Debugging - Debug non-deterministic agentic workflows
- Unified Observability - Single OTLP pipeline for logs, traces, and metrics
Architecture
┌───────────┐ ┌───────────────────────┐ ┌──────────────────────┐
│ n8n │───▶│ OpenTelemetry (OTLP) │───▶│ Parseable (Datasets) │
│ │ │ - Traces │ │ - otel-traces │
│ Workflows │ │ - Metrics │ │ - otel-metrics │
│ Nodes │ │ - Logs (Winston) │ │ - otel-logs │
└───────────┘ └───────────────────────┘ └──────────────────────┘
Prerequisites
- Docker and Docker Compose
- n8n instance
- Parseable instance
Quick Start
Clone the observability repository and start the stack:
git clone https://github.com/parseablehq/n8n-observability.git
cd n8n-observability
cp .env.example .env
# Edit .env with your passwords
docker-compose up -d
Access the applications:
- n8n: http://localhost:5678 (admin/admin)
- Parseable: http://localhost:8000 (admin/admin)
Docker Compose Setup
Environment Variables
Create a .env file:
# Parseable credentials
PARSEABLE_USERNAME=admin
PARSEABLE_PASSWORD=admin
# n8n authentication
N8N_BASIC_AUTH_PASSWORD=admin
# Database password
POSTGRES_PASSWORD=admin
# Optional: Custom domain/protocol
N8N_HOST=localhost
N8N_PROTOCOL=http
WEBHOOK_URL=http://localhost:5678/
Docker Compose Configuration
version: '3.8'
# Abbreviated; the cloned repository contains the full configuration
services:
  parseable:
    image: parseable/parseable:latest
    # Parseable requires a storage mode; local-store keeps data on the container filesystem
    command: ["parseable", "local-store"]
    ports:
      - "8000:8000"
    environment:
      - P_USERNAME=${PARSEABLE_USERNAME}
      - P_PASSWORD=${PARSEABLE_PASSWORD}
  n8n:
    build:
      context: .
      dockerfile: Dockerfile.working
    # Name matches the docker logs command used in the verification step below
    container_name: n8n-comprehensive
    environment:
      - PARSEABLE_URL=http://parseable:8000
      - OTEL_SERVICE_NAME=n8n-comprehensive
      - N8N_WINSTON_LOGGING=true
    ports:
      - "5678:5678"
    depends_on:
      - parseable
      - postgres
  postgres:
    image: postgres:15-alpine
    environment:
      - POSTGRES_DB=n8n
      - POSTGRES_USER=n8n
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
Verification
Check that telemetry is flowing:
# Check service status
docker-compose ps
# View n8n logs for OpenTelemetry initialization
docker logs n8n-comprehensive | grep "OpenTelemetry"
# Verify Parseable streams exist
curl -s http://localhost:8000/api/v1/logstream \
  -H "Authorization: Basic $(echo -n 'admin:admin' | base64)"
You should see two streams: otel-traces and otel-metrics.
Querying n8n Data
Workflow Execution Metrics
SELECT
metric_name,
workflow_name,
workflow_id,
data_point_value,
time_unix_nano
FROM "otel-metrics"
WHERE metric_name LIKE '%workflow%'
ORDER BY time_unix_nano DESC
LIMIT 20
Node Performance Metrics
SELECT
metric_name,
node_type,
node_name,
workflow_name,
data_point_value,
time_unix_nano
FROM "otel-metrics"
WHERE metric_name LIKE '%node%'
AND node_type IS NOT NULL
ORDER BY time_unix_nano DESC
LIMIT 15
HTTP Request Success Rate
SELECT
"http.response.status_code",
COUNT(*) as request_count,
COUNT(*) * 100.0 / SUM(COUNT(*)) OVER() as percentage
FROM "otel-metrics"
WHERE metric_name LIKE '%http%'
AND "http.response.status_code" IS NOT NULL
AND p_timestamp > NOW() - INTERVAL '24 hours'
GROUP BY "http.response.status_code"
ORDER BY request_count DESC
Slowest HTTP Requests
SELECT
span_name,
"http.method",
"http.route",
EXTRACT(EPOCH FROM (span_end_time_unix_nano - span_start_time_unix_nano)) * 1000 as duration_ms,
"http.response.status_code",
span_status_description
FROM "otel-traces"
WHERE span_name LIKE 'GET %' OR span_name LIKE 'POST %'
ORDER BY duration_ms DESC
LIMIT 10
Setting Up Alerts
HTTP Error Rate Alert
Monitor when n8n experiences high error rates; a query sketch of this condition follows the list:
- Dataset: otel-metrics
- Monitor: http.response.status_code by COUNT
- Filter: http.response.status_code >= 400
- Threshold: Trigger when error count > 900 in 5 minutes
- Evaluation: Every 5 minutes
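A minimal SQL sketch of the same condition, reusing the metric_name, http.response.status_code, and p_timestamp fields from the queries above; run it in the SQL editor to tune the threshold before saving the alert:
SELECT COUNT(*) AS error_count
FROM "otel-metrics"
WHERE metric_name LIKE '%http%'
  AND "http.response.status_code" >= 400
  AND p_timestamp > NOW() - INTERVAL '5 minutes'
If the count regularly approaches the threshold during normal traffic, raise the threshold or shorten the evaluation window.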
High Response Time Alert
Get notified when API response times exceed thresholds; see the query sketch after this list:
- Dataset: otel-metrics
- Monitor: data_point_value (for duration metrics)
- Filter: metric_name LIKE '%http%duration%'
- Threshold: Trigger when average response time > 2000 ms
- Group by: http.route to identify specific endpoints
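A rough SQL equivalent, assuming the duration metric reports milliseconds in data_point_value (adjust the LIKE pattern to whichever duration metric your n8n build emits):
SELECT
  "http.route",
  AVG(data_point_value) AS avg_duration_ms
FROM "otel-metrics"
WHERE metric_name LIKE '%http%duration%'
  AND p_timestamp > NOW() - INTERVAL '5 minutes'
GROUP BY "http.route"
HAVING AVG(data_point_value) > 2000
ORDER BY avg_duration_ms DESC
Any routes returned here are the endpoints that would trip the alert.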
Alert Delivery Options
Parseable supports multiple notification channels:
- Webhook - Send alerts to Slack, Discord, or custom endpoints
- Email - Direct email notifications
- PagerDuty - Integration for critical production alerts
Best Practices
- Layered Alerting - Set up multiple severity levels (warning, critical)
- Context-Rich Notifications - Include workflow IDs and execution details
- Alert Fatigue Prevention - Use appropriate thresholds to avoid noise
- Escalation Policies - Define clear escalation paths
- Regular Review - Periodically adjust alert thresholds based on patterns
What You Get
- Structured, correlated logs from n8n using Winston + OpenTelemetry
- Distributed traces for every workflow and node to pinpoint slow or failing steps
- Metrics for durations, error rates, and token usage
- Single OTLP pipeline into Parseable for unified storage and analysis
- Ready-to-use SQL queries, dashboards, and alerts
Next Steps
- Set up alerts for workflow failures
- Create dashboards for n8n monitoring
- Configure OpenTelemetry for other services