Get Lumina running locally with Docker Compose.
## Prerequisites

Required:

- Docker 20.10 or higher
- Docker Compose 2.0 or higher
- 4 GB RAM minimum
- Available ports: 3000, 8081, 9411
Verify installation:

```bash
docker --version          # Should show 20.10 or higher
docker compose version    # Should show 2.0 or higher
```
Optional:

- Anthropic API key (for replay with Claude models)
- OpenAI API key (for replay with GPT models)

API keys are only required for the replay feature. Core functionality (tracing, cost tracking, alerting) works without API keys.
## Installation

### Step 1: Clone the Repository

```bash
git clone https://github.com/use-lumina/Lumina.git
cd Lumina/infra/docker
```
### Step 2: Start Services

```bash
docker compose up -d
```

First-time startup takes 2-3 minutes for image pulls and database initialization.

Add `-d` to run in detached mode (background). Omit `-d` to view logs in real time.
### Step 3: Verify Services

Check that all services are running:

```bash
docker compose ps
```

Expected output:

```
NAME                STATUS
lumina-postgres     Up (healthy)
lumina-redis        Up (healthy)
lumina-nats         Up (healthy)
lumina-ingestion    Up (healthy)
lumina-api          Up (healthy)
lumina-dashboard    Up (healthy)
```

All services should show `Up (healthy)` status.
### Step 4: Access the Dashboard

Open your browser and navigate to:

```
http://localhost:3000
```

You should see the Lumina dashboard with an empty traces page.
## Send Your First Trace

Now let's send a test trace to verify everything works.
### Option 1: Using the SDK (Recommended)

Install the SDK:

```bash
npm install @uselumina/sdk
```

Create a test file, `test-trace.ts`:
```typescript
import { initLumina } from '@uselumina/sdk';
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

const lumina = initLumina({
  endpoint: 'http://localhost:9411/v1/traces',
  service_name: 'quickstart-test',
});

async function main() {
  console.log('Sending test trace...');

  const response = await lumina.traceLLM(
    async () =>
      anthropic.messages.create({
        model: 'claude-sonnet-4-5',
        max_tokens: 100,
        messages: [
          { role: 'user', content: 'Say hello in one sentence.' },
        ],
      }),
    {
      name: 'hello-claude',
      system: 'anthropic',
      prompt: 'Say hello in one sentence.',
    }
  );

  console.log('✓ Trace sent successfully!');
  console.log('→ View at: http://localhost:3000/traces');
}

main();
```
Run it:

```bash
bun run test-trace.ts
```

Expected output:

```
Sending test trace...
✓ Trace sent successfully!
→ View at: http://localhost:3000/traces
```
Refresh the dashboard. You should see your trace with:

- Service name: `quickstart-test`
- Endpoint: `hello-claude`
- Model: `claude-sonnet-4-5`
- Automatic cost calculation
- Token counts (prompt, completion)
- Latency measurement
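The automatic cost calculation boils down to multiplying the reported token counts by per-model rates. As a rough sketch of that arithmetic (the `PRICING` table below uses illustrative placeholder rates, not Lumina's actual pricing data):

```typescript
// Illustrative per-million-token rates (placeholders, not Lumina's real pricing table).
const PRICING: Record<string, { inputPerM: number; outputPerM: number }> = {
  'claude-sonnet-4-5': { inputPerM: 3.0, outputPerM: 15.0 },
};

// Derive a trace's USD cost from its model name and token counts.
function costUsd(model: string, promptTokens: number, completionTokens: number): number {
  const rate = PRICING[model];
  if (!rate) return 0; // unknown model: no cost attributed
  return (promptTokens * rate.inputPerM + completionTokens * rate.outputPerM) / 1_000_000;
}

console.log(costUsd('claude-sonnet-4-5', 10, 20)); // 0.00033
```

With 10 prompt tokens and 20 completion tokens, the completion side dominates the cost, which is typical for generation-heavy workloads.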
### Option 2: Using cURL

Send a trace directly via HTTP:

```bash
curl -X POST http://localhost:9411/v1/traces \
  -H "Content-Type: application/json" \
  -d '{
    "resourceSpans": [{
      "resource": {
        "attributes": [{
          "key": "service.name",
          "value": { "stringValue": "curl-test" }
        }]
      },
      "scopeSpans": [{
        "spans": [{
          "traceId": "00000000000000000000000000000001",
          "spanId": "0000000000000001",
          "name": "test-span",
          "startTimeUnixNano": "'$(date +%s)000000000'",
          "endTimeUnixNano": "'$(date +%s)500000000'",
          "attributes": [
            { "key": "model", "value": { "stringValue": "claude-sonnet-4-5" } },
            { "key": "prompt_tokens", "value": { "intValue": "10" } },
            { "key": "completion_tokens", "value": { "intValue": "20" } },
            { "key": "cost_usd", "value": { "doubleValue": 0.0015 } }
          ]
        }]
      }]
    }]
  }'
```

Self-hosted Lumina runs without authentication by default. No API key is required.
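The same POST can be made from Node 18+ or Bun using the built-in `fetch`. This is a minimal sketch that mirrors the cURL payload above; the hard-coded span IDs and the `fetch-test` service name are illustrative, and the stack must be running for `sendTrace` to succeed:

```typescript
// Build a minimal OTLP/JSON trace payload, mirroring the cURL example above.
function buildTracePayload(serviceName: string, spanName: string) {
  const startNs = BigInt(Date.now()) * 1_000_000n; // ms → ns
  const endNs = startNs + 500_000_000n;            // +500 ms duration
  return {
    resourceSpans: [{
      resource: {
        attributes: [{ key: 'service.name', value: { stringValue: serviceName } }],
      },
      scopeSpans: [{
        spans: [{
          traceId: '00000000000000000000000000000002', // hard-coded for illustration
          spanId: '0000000000000002',
          name: spanName,
          startTimeUnixNano: startNs.toString(), // OTLP/JSON encodes uint64 as strings
          endTimeUnixNano: endNs.toString(),
          attributes: [
            { key: 'model', value: { stringValue: 'claude-sonnet-4-5' } },
            { key: 'prompt_tokens', value: { intValue: '10' } },
            { key: 'completion_tokens', value: { intValue: '20' } },
          ],
        }],
      }],
    }],
  };
}

// Send it to the local ingestion endpoint (requires the stack to be running).
async function sendTrace() {
  const res = await fetch('http://localhost:9411/v1/traces', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(buildTracePayload('fetch-test', 'test-span')),
  });
  console.log(res.status);
}
```

Unlike the shell version, the timestamps here carry millisecond precision from `Date.now()` rather than whole seconds from `date +%s`.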
## View Your Trace

1. Open http://localhost:3000/traces
2. You should see your test trace
3. Click on the trace to view details:
   - Full prompt and response
   - Token breakdown
   - Cost calculation
   - Latency timeline
   - Custom metadata
## Optional Configuration

### Add LLM API Keys

To use the replay feature, add API keys:

```bash
# Stop services
docker compose down

# Create the .env file (this overwrites an existing one)
cat > .env <<EOF
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
EOF

# Restart with the new config
docker compose --env-file .env up -d
```
### Customize Ports

Edit `infra/docker/.env`:

```bash
DASHBOARD_PORT=3000   # Change to avoid conflicts
API_PORT=8081
INGESTION_PORT=9411
```

Restart services:

```bash
docker compose down
docker compose up -d
```
### Configure Retention

Set the trace retention period:

```bash
# .env
TRACE_RETENTION_DAYS=7    # Delete traces older than 7 days
DAILY_TRACE_LIMIT=50000   # Max traces per day
```
## Troubleshooting

### Port Already in Use

Error: `bind: address already in use`

Solution: Change the ports in `.env` or stop the conflicting service:

```bash
# Find the process using the port
lsof -i :3000

# Kill the process (replace <PID>)
kill -9 <PID>
```
### Services Not Starting

Error: Container exits immediately

Solution 1: Check Docker resources

In Docker Desktop → Settings → Resources:

- Memory: set to at least 4 GB
- Disk: ensure 10 GB+ available

Solution 2: View the logs

```bash
docker compose logs <service-name>

# Examples:
docker compose logs ingestion
docker compose logs postgres
```
### Database Connection Failed

Error: `connection refused` or `could not connect to server`

Solution: Wait for PostgreSQL to be fully ready:

```bash
# Check PostgreSQL logs
docker compose logs postgres | grep "ready to accept connections"

# Restart the ingestion service
docker compose restart ingestion
```
### Dashboard Shows "Failed to Fetch"

Error: Dashboard loads but shows an error when fetching data

Solution 1: Verify the API is running

```bash
curl http://localhost:8081/health
```

Expected response:

```json
{"status":"ok","service":"lumina-api"}
```

Solution 2: Check the API logs

```bash
docker compose logs api
```
### Traces Not Appearing

Issue: Trace sent but not visible in the dashboard

Solution 1: Check the ingestion logs

```bash
docker compose logs ingestion | grep ERROR
```

Solution 2: Verify the endpoint

Ensure the SDK points to the correct endpoint:

```typescript
const lumina = initLumina({
  endpoint: 'http://localhost:9411/v1/traces', // Correct
  // NOT: http://localhost:8081 (that's the query API)
});
```

Solution 3: Check the trace format

Lumina expects OTLP format. Verify that your trace matches the OpenTelemetry spec.
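When a hand-built payload silently disappears, a structural spot-check can save a trip through the full spec. This sketch validates only a handful of commonly broken OTLP/JSON fields (ID lengths, decimal-string timestamps), not the whole schema:

```typescript
// Spot-check the OTLP/JSON span fields that most often break hand-built payloads.
function checkSpan(span: any): string[] {
  const problems: string[] = [];
  if (!/^[0-9a-f]{32}$/.test(span.traceId ?? '')) {
    problems.push('traceId must be 32 lowercase hex chars');
  }
  if (!/^[0-9a-f]{16}$/.test(span.spanId ?? '')) {
    problems.push('spanId must be 16 lowercase hex chars');
  }
  if (typeof span.name !== 'string' || span.name.length === 0) {
    problems.push('name must be a non-empty string');
  }
  // OTLP/JSON encodes uint64 nanosecond timestamps as decimal strings.
  if (!/^[0-9]+$/.test(span.startTimeUnixNano ?? '')) {
    problems.push('startTimeUnixNano must be a decimal-string timestamp');
  }
  if (!/^[0-9]+$/.test(span.endTimeUnixNano ?? '')) {
    problems.push('endTimeUnixNano must be a decimal-string timestamp');
  }
  return problems;
}

console.log(checkSpan({
  traceId: '00000000000000000000000000000001',
  spanId: '0000000000000001',
  name: 'test-span',
  startTimeUnixNano: '1700000000000000000',
  endTimeUnixNano: '1700000000500000000',
})); // []
```

An empty array means the checked fields look well-formed; any remaining rejections would come from parts of the spec this sketch does not cover.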
## Advanced Options

### Run in Development Mode

For active development with hot reload:

```bash
# Stop Docker services
docker compose down

# Start infrastructure only (Postgres, Redis, NATS)
docker compose up -d postgres redis nats

# Run each service locally with hot reload (use a separate terminal for each)
cd ../../services/ingestion && bun run dev
cd ../../services/api && bun run dev
cd ../../apps/dashboard && bun run dev
```
### View Real-Time Logs

```bash
# All services
docker compose logs -f

# Specific service
docker compose logs -f ingestion
```
### Reset the Database

```bash
# Stop services and remove volumes (deletes all data)
docker compose down -v

# Restart fresh
docker compose up -d
```

This permanently deletes all traces. Back up your data before running it.
## Stopping Lumina

```bash
# Stop services (keeps data)
docker compose down

# Stop and remove all data
docker compose down -v
```

Data is stored in Docker volumes:

- `postgres-data`: all traces and metadata
- `redis-data`: cached semantic scores
- `nats-data`: message queue state
## What's Next?

You now have Lumina running locally with your first trace. Next steps:

- Instrument your application: add tracing to your production app
- Learn multi-span tracing: track complex workflows
- Configure alerts: get notified of issues
- Deploy to production: Kubernetes deployment

## Need Help?