# Grafana + Loki Setup - Complete ✅
This document confirms the successful setup and integration of Grafana + Loki for local log aggregation.
## ✅ Setup Verification

**Status**: All components are operational and verified.
| Component | Status | Details |
|---|---|---|
| Loki Container | ✅ Running | Port 3100, filesystem storage at /loki |
| Grafana Container | ✅ Running | Port 3200, anonymous auth enabled |
| Loki Health | ✅ Verified | http://localhost:3100/ready returns “ready” |
| Grafana Datasource | ✅ Configured | Loki datasource pre-configured and set as default |
| Logger Integration | ✅ Working | Custom Loki transport sending logs successfully |
| Log Flow | ✅ Confirmed | Logs appearing in Loki and queryable via Grafana |
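The checks in the table above can be re-run from a small script. Here is a sketch using Node 18+'s built-in `fetch`; `/ready` and `/api/health` are Loki's and Grafana's standard health endpoints, and listing datasources assumes the anonymous Admin role configured below permits it:

```typescript
// Sketch: re-run the health checks from the table above.
// Ports match this setup (Loki :3100, Grafana :3200).

export function hasDefaultLoki(
  sources: Array<{ name: string; isDefault: boolean }>,
): boolean {
  return sources.some((s) => s.name === "Loki" && s.isDefault);
}

async function checkStack(): Promise<void> {
  // Loki readiness: returns the plain text "ready" when healthy
  const loki = await fetch("http://localhost:3100/ready");
  console.log("Loki ready:", loki.ok, (await loki.text()).trim());

  // Grafana health: returns JSON like {"database": "ok", ...}
  const grafana = await fetch("http://localhost:3200/api/health");
  console.log("Grafana healthy:", grafana.ok);

  // Anonymous auth is enabled, so provisioned datasources are listable
  const ds = await fetch("http://localhost:3200/api/datasources");
  const sources = (await ds.json()) as Array<{ name: string; isDefault: boolean }>;
  console.log("Loki is default datasource:", hasDefaultLoki(sources));
}

checkStack().catch((err) => console.error("Stack check failed:", err));
```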
## Architecture Overview

```
API Server (@leadmetrics/api)
  │
  ├─► Console (JSON when Loki enabled)
  ├─► File (logs/api.log, disabled when Loki enabled)
  └─► Custom Loki Transport (packages/logger/src/loki-stream.ts)
        │
        ├─ Batches: 100 logs or 5 seconds
        ├─ Format: converts to nanosecond timestamps
        └─ HTTP POST to http://localhost:3100/loki/api/v1/push
              │
              ▼
        ┌──────────┐
        │   Loki   │  Stores logs with labels {service, env}
        │  :3100   │  Filesystem storage: docker volume loki_data
        └────┬─────┘
             │  LogQL Query API
             ▼
        ┌──────────┐
        │ Grafana  │  Web UI for querying and visualization
        │  :3200   │  Datasource: http://loki:3100
        └──────────┘
             │
             ▼
        Browser: http://localhost:3200/explore
```

## Quick Start
### 1. Start Services
```powershell
# Start Loki and Grafana
docker compose up -d loki grafana

# Verify services are running
docker ps | findstr "loki\|grafana"
```

### 2. Start API with Loki Logging
```powershell
cd apps/api

# Environment variables (or use .env file)
$env:LOKI_ENDPOINT="http://localhost:3100"
$env:LOG_LEVEL="info"
$env:LOGGER_SERVICE="api"

pnpm dev
```

### 3. Generate Test Logs
```powershell
# Generate some API requests
for ($i=1; $i -le 10; $i++) {
  curl http://localhost:3003/health | Out-Null
  Start-Sleep -Milliseconds 500
}
```

### 4. View Logs in Grafana
- Open http://localhost:3200/explore in your browser
- Ensure “Loki” is selected as the datasource (top left)
- Enter query: `{service="api"}`
- Click "Run query" or press Shift+Enter
- Logs should appear within 5-10 seconds
## Custom Loki Transport Implementation
### Why Custom Transport?
The popular `pino-loki` npm package has compatibility issues with Loki 3.0.0, causing JSON unmarshaling errors:

```
Got error when trying to send log to loki
loghttp.PushRequest.Streams: []loghttp.LogProtoStream: unmarshalerDecoder:
Value looks like Number/Boolean/None, but can't find its end
```

### Solution: packages/logger/src/loki-stream.ts
We implemented a custom writable stream that:
- Receives raw JSON logs from Pino's multistream
- Batches logs until 100 entries are buffered or 5 seconds have elapsed
- Converts timestamps from ISO 8601 to nanoseconds since epoch
- Formats the payload according to Loki's push API spec:

  ```json
  {
    "streams": [
      {
        "stream": { "service": "api", "env": "development" },
        "values": [["1650000000000000000", "{\"level\":30,...}"], ...]
      }
    ]
  }
  ```

- Sends an HTTP POST to `{LOKI_ENDPOINT}/loki/api/v1/push`
- Handles errors with console logging for debugging
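The timestamp conversion and payload shape described above can be sketched as follows; the names are illustrative, not the actual `loki-stream.ts` code:

```typescript
// Sketch of the conversion described above (illustrative names only).

type Labels = Record<string, string>;

/** ISO 8601 timestamp → nanoseconds since epoch, as the string Loki expects. */
export function toNanoTimestamp(iso: string): string {
  const ms = new Date(iso).getTime();          // millisecond precision
  return (BigInt(ms) * 1_000_000n).toString(); // pad out to nanoseconds
}

/** Wrap raw JSON log lines in the Loki push API payload shape. */
export function buildPushPayload(labels: Labels, lines: string[]) {
  return {
    streams: [
      {
        stream: labels, // e.g. { service: "api", env: "development" }
        values: lines.map((line): [string, string] => {
          const { time } = JSON.parse(line) as { time: string };
          return [toNanoTimestamp(time), line];
        }),
      },
    ],
  };
}
```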
### Key Features
- **Automatic batching**: Reduces HTTP overhead
- **Timestamp precision**: Loki requires nanosecond timestamps
- **Label support**: Tags logs with `service` and `env` for filtering
- **Graceful flush**: Ensures buffered logs are sent when the stream closes
- **No external dependencies**: Uses only Node.js `stream` and `fetch`
## Configuration Files
### docker-compose.yml
Added Loki and Grafana services:
```yaml
services:
  loki:
    image: grafana/loki:3.0.0
    ports:
      - "3100:3100"
    volumes:
      - ./loki-config.yml:/etc/loki/local-config.yaml
      - loki_data:/loki
    command: -config.file=/etc/loki/local-config.yaml

  grafana:
    image: grafana/grafana:10.4.0
    ports:
      - "3200:3000"
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
    volumes:
      - ./grafana-datasources.yml:/etc/grafana/provisioning/datasources/datasources.yml

volumes:
  loki_data:
```

### loki-config.yml
Minimal configuration for local development:
```yaml
auth_enabled: false

server:
  http_listen_port: 3100

common:
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: inmemory
  replication_factor: 1
  path_prefix: /loki

schema_config:
  configs:
    - from: 2020-10-24
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

storage_config:
  filesystem:
    directory: /loki/chunks
  tsdb_shipper:
    active_index_directory: /loki/tsdb-index
    cache_location: /loki/tsdb-cache

limits_config:
  max_query_length: 0h
  max_query_lookback: 0
```

### grafana-datasources.yml
Pre-configures Loki datasource:
```yaml
apiVersion: 1

datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    isDefault: true
```

### .env (apps/api)
Environment variables for local development:
```bash
# Observability
LOKI_ENDPOINT=http://localhost:3100
LOG_LEVEL=info
LOGGER_SERVICE=api
```

## Code Changes
### 1. packages/logger/src/loki-stream.ts (NEW)
Custom Loki transport implementation - see file for full code.
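As a rough illustration of what that file does, here is a minimal, self-contained sketch of a batching Loki transport. This is not the project's actual code; in particular, the injectable `send` option is an assumption added here so the batching logic can be exercised without a live Loki:

```typescript
import { Writable } from "node:stream";

// Illustrative sketch only; the real implementation lives in
// packages/logger/src/loki-stream.ts and may differ in detail.

interface LokiStreamOptions {
  host: string;
  labels: Record<string, string>;
  interval?: number;  // flush every N ms (default 5000)
  batchSize?: number; // ...or as soon as N entries are buffered (default 100)
  send?: (payload: unknown) => Promise<void>; // test seam (assumption)
}

export function createLokiStream(opts: LokiStreamOptions): Writable {
  const { interval = 5000, batchSize = 100 } = opts;
  let buffer: Array<[string, string]> = [];

  const send =
    opts.send ??
    (async (payload: unknown) => {
      const res = await fetch(`${opts.host}/loki/api/v1/push`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(payload),
      });
      if (!res.ok) console.error(`Loki push failed (${res.status})`);
    });

  const flush = () => {
    if (buffer.length === 0) return;
    const values = buffer;
    buffer = [];
    send({ streams: [{ stream: opts.labels, values }] }).catch((err) =>
      console.error("Failed to send logs to Loki", err),
    );
  };

  const timer = setInterval(flush, interval);
  timer.unref?.(); // don't keep the process alive just to flush

  return new Writable({
    write(chunk, _enc, cb) {
      const line = chunk.toString();
      // Pino emits epoch-ms numbers by default; ISO strings also work here
      const t = JSON.parse(line).time;
      const ms = typeof t === "number" ? t : Date.parse(t) || Date.now();
      buffer.push([(BigInt(ms) * 1_000_000n).toString(), line]);
      if (buffer.length >= batchSize) flush();
      cb();
    },
    final(cb) {
      clearInterval(timer); // graceful flush when the stream closes
      flush();
      cb();
    },
  });
}
```

Injecting a fake `send` makes the batch threshold and graceful flush easy to unit-test.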
### 2. packages/logger/src/logger.ts
```typescript
import { createLokiStream } from "./loki-stream";

export function createLogger(cfg: LoggerConfig): Logger {
  // ...
  // Disable pretty printing when Loki is enabled (format conflict)
  const pretty = cfg.lokiEndpoint ? false : (cfg.pretty ?? isDev);
  // ...
  // Disable file transport when Loki is enabled (simplifies multistream)
  const fileTarget = cfg.lokiEndpoint ? null : /* ... */;

  // Custom Loki transport
  if (cfg.lokiEndpoint) {
    streams.push({
      level,
      stream: createLokiStream({
        host: cfg.lokiEndpoint,
        labels: {
          service: cfg.service,
          env,
        },
        interval: 5000,
        batchSize: 100,
      }),
    });
  }
  // ...
}
```

### 3. apps/api/src/index.ts
```typescript
import { createLogger } from "@leadmetrics/logger";

// Create custom logger with Loki support
const pinoLogger = createLogger({
  service: process.env.LOGGER_SERVICE ?? "api",
  lokiEndpoint: process.env.LOKI_ENDPOINT,
  level: (process.env.LOG_LEVEL ?? "info") as any,
});

// Use custom logger instead of Fastify's default
const fastify = Fastify({
  logger: pinoLogger,
  // ... other options
});
```

## Testing
### Manual Testing
```powershell
# 1. Generate logs
for ($i=1; $i -le 5; $i++) {
  curl http://localhost:3003/health | Out-Null
  Start-Sleep -Milliseconds 600
}

# 2. Wait for batch flush
Start-Sleep -Seconds 10

# 3. Query Loki API directly
curl "http://localhost:3100/loki/api/v1/query_range?query={service=\"api\"}&limit=10" | ConvertFrom-Json

# 4. Check Grafana UI
# Open http://localhost:3200/explore
# Query: {service="api"}
```

### Verified Results
✅ SUCCESS! Logs are flowing to Loki!
✅ Found 1 log stream(s)
✅ First stream has 16 log entries
Sample log entry:
```json
{
  "level": 30,
  "time": "2026-04-26T06:51:26.326Z",
  "service": "api",
  "env": "development",
  "reqId": "req-6",
  "msg": "request completed",
  "responseTime": 2.647899866104126
}
```

## LogQL Query Examples
### Basic Queries
```logql
# All logs from the API service
{service="api"}

# Errors only (level 50 = error in Pino)
{service="api"} |= "\"level\":50"

# Specific request ID
{service="api"} |= "req-123"

# HTTP requests only
{service="api"} |= "incoming request"

# Completed requests with response time > 100ms
{service="api"} |= "request completed" |= "responseTime" | json | responseTime > 100
```

### Time-Based Queries
```logql
# Last 5 minutes
{service="api"} [5m]

# Last hour
{service="api"} [1h]

# Specific time range (use the Grafana UI time picker)
{service="api"}
```

### Aggregations
```logql
# Per-second rate of incoming requests, over a 1-minute window
rate({service="api"} |= "incoming request" [1m])

# Average response time
avg_over_time(
  {service="api"} |= "request completed"
    | json
    | responseTime != ""
    | unwrap responseTime [5m]
)
```

## Troubleshooting
### Loki Container Restarts
**Symptom**: Container exits with permission errors on `/tmp/loki`

**Solution**: Use volume mount paths at `/loki` instead of `/tmp/loki`:

- Update `loki-config.yml` to use `/loki` for all storage paths
- Volume mapping: `loki_data:/loki` in `docker-compose.yml`
### No Logs in Loki
**Check that the API is using the custom logger:**
```typescript
// apps/api/src/app.ts should have logger: false
const fastify = Fastify({ logger: false });

// apps/api/src/index.ts should have the custom logger
const pinoLogger = createLogger({ lokiEndpoint: process.env.LOKI_ENDPOINT });
const fastify = Fastify({ logger: pinoLogger });
```

**Verify environment variables:**
```powershell
# In apps/api terminal
echo $env:LOKI_ENDPOINT  # Should be http://localhost:3100
echo $env:LOG_LEVEL      # Should be info or debug
```

**Check for transport errors:**
Look for console errors in the API terminal:

- "Failed to send logs to Loki" - the HTTP transport is failing
- "Loki push failed (400)" - the JSON payload is malformed
**Query Loki directly:**
```powershell
# Check if any logs exist
curl "http://localhost:3100/loki/api/v1/labels"

# Query for API logs
curl "http://localhost:3100/loki/api/v1/query_range?query={service=\"api\"}&limit=10"
```

### Grafana Shows "No Data"
1. **Verify the datasource**: Go to Configuration → Data Sources → Loki
   - URL should be `http://loki:3100` (the container name, not localhost)
   - Click "Save & Test" - it should show a green checkmark
2. **Check the time range**: Ensure Grafana's time picker covers when the logs were generated
   - The default is "Last 15 minutes"
   - Try "Last 1 hour" or "Last 6 hours"
3. **Try a broader query**: `{service=~".+"}` shows logs from all services
4. **Verify Loki is reachable from Grafana**:

   ```powershell
   docker exec grafana curl http://loki:3100/ready  # Should return "ready"
   ```
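When Grafana still shows nothing, querying Loki's `query_range` endpoint from a script takes the UI out of the equation. A sketch using Node 18+ `fetch`; the `buildQueryUrl` helper is illustrative, not part of the project:

```typescript
// Sketch: query Loki's HTTP API directly, bypassing Grafana.

export function buildQueryUrl(base: string, logql: string, limit = 10): string {
  // URL-encode the LogQL expression for the query string
  const params = new URLSearchParams({ query: logql, limit: String(limit) });
  return `${base}/loki/api/v1/query_range?${params}`;
}

async function main(): Promise<void> {
  const res = await fetch(buildQueryUrl("http://localhost:3100", '{service="api"}'));
  // A successful response has shape {status, data: {resultType, result: [...]}}
  const body: any = await res.json();
  const streams = body.data?.result ?? [];
  console.log(`Found ${streams.length} stream(s)`);
}

main().catch(console.error);
```

Zero streams here means the transport is not pushing at all, which points back at the "No Logs in Loki" checks rather than at Grafana.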
### Console Shows JSON Instead of Pretty Logs
This is **expected behavior** when `LOKI_ENDPOINT` is set.

**Why?** The custom Loki transport requires raw JSON input. Pretty-printed logs from `pino-pretty` cannot be parsed as JSON, causing the stream to fail.
**Options:**

- View formatted logs in Grafana instead
- Temporarily unset `LOKI_ENDPOINT` for local debugging
- Add a separate pretty stdout stream (may cause format conflicts)
## Performance Considerations
### Batching
- **Interval**: 5 seconds (configurable)
- **Batch size**: 100 logs (configurable)
- **Trade-off**: shorter intervals deliver logs closer to real time but add HTTP overhead
### Storage
- **Filesystem**: Docker volume `loki_data` stores all log data
- **Retention**: No automatic cleanup configured (production should set limits)
- **Growth**: Approximately 1-5 KB per log entry (JSON + metadata)
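The growth figure lends itself to a quick back-of-envelope estimate; the traffic numbers below are made up for illustration:

```typescript
// Rough daily storage estimate from the per-entry figures above.
export function dailyStorageMB(logsPerSecond: number, kbPerLog: number): number {
  const SECONDS_PER_DAY = 86_400;
  return (logsPerSecond * kbPerLog * SECONDS_PER_DAY) / 1024;
}

// e.g. a steady 10 logs/s at ~3 KB each is roughly 2.5 GB per day
console.log(`${dailyStorageMB(10, 3).toFixed(0)} MB/day`);
```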
### Resource Usage
- Loki: ~50-100 MB RAM idle, 200-500 MB under load
- Grafana: ~150-300 MB RAM
- Network: Minimal (<1 MB/min for typical API traffic)
## Production Considerations
This setup is for local development only. For production:
- Enable authentication in both Loki and Grafana
- Configure retention policies in Loki (`limits_config`)
- Use object storage (S3, GCS) instead of the local filesystem
- Enable TLS for HTTP endpoints
- Set up alerting in Grafana for critical errors
- Scale Loki horizontally with microservices mode
- Configure log sampling for high-volume services
- Set up backups for Loki data and Grafana dashboards
## References
- Grafana Loki Documentation
- Loki HTTP API
- LogQL Query Language
- Pino Logger
- `setup-loki.ps1` - Quick setup script
## Summary
✅ Loki 3.0.0 and Grafana 10.4.0 running in Docker
✅ Custom Loki transport implemented to work around pino-loki issues
✅ Logs flowing from API → Loki → Grafana successfully
✅ LogQL queries working in Grafana Explore interface
✅ Batching configured for optimal performance (5s / 100 logs)
✅ Complete documentation and troubleshooting guide provided
The observability stack is fully operational and ready for local development use.