# Lumina Quickstart Guide
Get Lumina running locally in 5 minutes with Docker Compose.
## What is Lumina?
Lumina is an open-source, OpenTelemetry-native observability platform for AI systems that provides:
- ✅ Real-time trace ingestion - Track every LLM call
- ✅ Cost & quality monitoring - Get alerted on spikes and drops
- ✅ Replay testing - Re-run production traces safely
- ✅ Semantic diff - Understand response changes
- ✅ All features included - Free forever, self-hosted
## Self-Hosted Limits
The free self-hosted tier includes:
- 50,000 traces per day - Resets daily at midnight UTC
- 7-day retention - Traces older than 7 days are automatically deleted
- All features - Alerts, replay testing, semantic scoring, and more
For unlimited traces and longer retention, consider our managed cloud offering.
## Prerequisites
Before you start, ensure you have:
- Docker & Docker Compose installed (Get Docker)
- 4GB RAM minimum
- Ports available: 3000, 5432, 6379, 4222, 8080, 8081, 8082
Check Docker is installed:

```bash
docker --version
docker-compose --version
```
### Optional: API Keys for Replay Feature
To use the replay feature with real LLM calls, you’ll need API keys:
- Anthropic API key - For Claude models (Get from console.anthropic.com)
- OpenAI API key - For GPT models (Get from platform.openai.com)
Note: These API keys are only required for the replay feature. All other features (trace ingestion, alerts, cost monitoring) work without API keys.
## Installation

### Step 1: Clone the Repository

```bash
git clone https://github.com/use-lumina/Lumina.git
cd Lumina
```
### Step 2: Configure Environment

```bash
# Copy the example environment file
cp .env.docker.example .env.docker

# Edit configuration
nano .env.docker  # or use your preferred editor
```

Required configuration:

```bash
# Generate with: openssl rand -base64 32
JWT_SECRET=your-generated-secret-here
```

To generate a JWT secret:

```bash
openssl rand -base64 32
```
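If openssl isn't handy, the same 32-byte base64 secret can be produced with Node's built-in `crypto` module (a sketch; any cryptographically random 256-bit value works):

```typescript
// Generate a 256-bit base64 secret, equivalent to `openssl rand -base64 32`.
import { randomBytes } from "crypto";

export function generateJwtSecret(): string {
  // 32 random bytes encode to a 44-character base64 string
  return randomBytes(32).toString("base64");
}

console.log(`JWT_SECRET=${generateJwtSecret()}`);
```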
#### Optional: LLM API Keys (for Replay Feature)

Add these to enable the replay feature with real LLM calls:

```bash
# For Claude models (Anthropic)
ANTHROPIC_API_KEY=sk-ant-api03-your-key-here

# For GPT models (OpenAI)
OPENAI_API_KEY=sk-your-key-here
```

Without these keys, the replay feature will run in simulation mode (for testing without API costs).
#### Authentication Mode

```bash
# Self-hosted (default) - No user authentication required
AUTH_REQUIRED=false

# Managed cloud - User authentication required
AUTH_REQUIRED=true
```

Note: Self-hosted defaults to `AUTH_REQUIRED=false`, meaning no user authentication is required. All traces use `customerId='default'`.
### Step 3: Start Lumina

```bash
cd infra/docker
docker-compose --env-file ../../.env.docker up -d
```
This will:
- Pull required Docker images (PostgreSQL, Redis, NATS)
- Build Lumina services (ingestion, API, replay, dashboard)
- Run database migrations automatically
- Start all services in the background
First-time setup takes 2-5 minutes depending on your internet speed.
### Step 4: Verify Services

Check all services are running:

```bash
docker-compose ps
```

You should see all services with status `Up (healthy)`:

```
NAME                STATUS
lumina-postgres     Up (healthy)
lumina-redis        Up (healthy)
lumina-nats         Up (healthy)
lumina-ingestion    Up (healthy)
lumina-api          Up (healthy)
lumina-replay       Up (healthy)
lumina-dashboard    Up (healthy)
```

Check service logs:

```bash
docker-compose logs -f
```

Look for:

```
ingestion_1  | ✅ Database initialized successfully
ingestion_1  | ✅ NATS initialized successfully
ingestion_1  | ✅ Redis cache initialized successfully
dashboard_1  | ✓ Ready in 3.2s
```
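The manual checks above can also be scripted as a readiness probe. The sketch below (TypeScript, Node 18+ for built-in `fetch`) polls a health URL until it responds; the `/health` endpoint on port 8081 is taken from the troubleshooting section later in this guide, and the probe is injectable so the retry logic can be exercised without a running stack:

```typescript
// Poll a URL until it reports healthy or the timeout elapses.
// `probe` is injectable so the retry logic is testable offline.
export async function waitForHealthy(
  url: string,
  probe: (u: string) => Promise<boolean>,
  timeoutMs = 120_000,
  intervalMs = 2_000,
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await probe(url)) return true;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  return false;
}

// Real probe: healthy when the endpoint answers with HTTP 2xx.
export const httpProbe = async (u: string): Promise<boolean> => {
  try {
    return (await fetch(u)).ok;
  } catch {
    return false; // connection refused while services are still starting
  }
};

// Usage:
// const ok = await waitForHealthy("http://localhost:8081/health", httpProbe);
```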
### Step 5: Access the Dashboard
Open your browser and navigate to:
http://localhost:3000
You should see the Lumina dashboard! 🎉
## Send Your First Trace
Now let’s send a test trace to see Lumina in action.
### Option 1: Using the SDK (Recommended)

Install the SDK:

```bash
npm install @lumina/sdk
# or
bun add @lumina/sdk
```
Send a trace:

```typescript
import { Lumina } from '@lumina/sdk';

// Initialize Lumina client
const lumina = new Lumina({
  apiKey: 'test-key',
  endpoint: 'http://localhost:8080/v1/traces',
  environment: 'live',
  // Note: For self-hosted, API key is optional (auth disabled by default)
});

// Track an LLM call
await lumina.traceLLM({
  provider: 'openai',
  model: 'gpt-4',
  prompt: 'What is the capital of France?',
  response: 'The capital of France is Paris.',
  promptTokens: 10,
  completionTokens: 8,
  totalTokens: 18,
  latencyMs: 1234,
  costUsd: 0.0018,
  metadata: {
    userId: 'user-123',
    sessionId: 'session-456',
  },
});

console.log('✅ Trace sent to Lumina!');
```
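In a real application you won't hard-code token counts and latency. One way to derive them is to wrap the provider call, as in this sketch; `callModel` and `record` are hypothetical stand-ins for your provider client and `lumina.traceLLM`, and the span fields follow the example above:

```typescript
// Wrap a provider call so latency and token totals are measured, not hard-coded.
type LLMResult = { text: string; promptTokens: number; completionTokens: number };

export async function tracedCall(
  prompt: string,
  callModel: (p: string) => Promise<LLMResult>,
  record: (span: Record<string, unknown>) => Promise<void>,
): Promise<LLMResult> {
  const start = Date.now();
  const result = await callModel(prompt);
  await record({
    provider: 'openai', // illustrative; set to your actual provider
    model: 'gpt-4',
    prompt,
    response: result.text,
    promptTokens: result.promptTokens,
    completionTokens: result.completionTokens,
    totalTokens: result.promptTokens + result.completionTokens,
    latencyMs: Date.now() - start,
  });
  return result;
}
```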
### Option 2: Using cURL

```bash
curl -X POST http://localhost:8080/v1/traces \
  -H "Content-Type: application/json" \
  -d '{
    "trace_id": "trace-001",
    "span_id": "span-001",
    "timestamp": "'$(date -u +"%Y-%m-%dT%H:%M:%SZ")'",
    "service_name": "quickstart-test",
    "endpoint": "/api/chat",
    "provider": "openai",
    "model": "gpt-4",
    "prompt": "What is the capital of France?",
    "response": "The capital of France is Paris.",
    "prompt_tokens": 10,
    "completion_tokens": 8,
    "total_tokens": 18,
    "latency_ms": 1234,
    "cost_usd": 0.0018,
    "status": "success",
    "environment": "live"
  }'
```
Note: For self-hosted, authentication is disabled by default. No API key needed!
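If you'd rather script the raw HTTP call than shell out to cURL, the same request can be sketched with `fetch` (Node 18+); the payload fields mirror the cURL example above, and the fetch function is injectable so the helper can be tested without a running server:

```typescript
// POST a trace payload to the ingestion endpoint; returns true on HTTP 2xx.
// No API key is needed when AUTH_REQUIRED=false (the self-hosted default).
export async function postTrace(
  endpoint: string,
  trace: Record<string, unknown>,
  fetchFn: typeof fetch = fetch, // injectable for testing
): Promise<boolean> {
  const res = await fetchFn(endpoint, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(trace),
  });
  return res.ok;
}

// Example (requires Lumina running locally):
// await postTrace("http://localhost:8080/v1/traces", {
//   trace_id: "trace-001",
//   span_id: "span-001",
//   timestamp: new Date().toISOString(),
//   service_name: "quickstart-test",
//   status: "success",
//   environment: "live",
// });
```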
### Step 6: View Your Trace

1. Go to http://localhost:3000
2. Click on **Traces** in the sidebar
3. You should see your test trace appear!
## Next Steps

### 1. Instrument Your Application
See our integration guides:
### 2. Set Up Alerts
Cost spikes and quality drops are automatically detected! View them at:
http://localhost:3000/alerts
### 3. Try the Replay Feature

Replay lets you re-run production traces to test changes:

```typescript
// Capture a baseline
await lumina.createReplaySet({
  name: 'Production baseline',
  description: 'Captured before prompt change',
  sampleSize: 100,
});

// After making changes, replay the traces
await lumina.replayTraces({
  replaySetId: 'replay-set-id',
  // Lumina automatically compares old vs new responses
});
```

View replay results at: http://localhost:3000/replay
### 4. Explore the API

Key endpoints:

- `GET /traces` - List traces
- `GET /traces/:id` - Get trace details
- `GET /alerts` - List alerts
- `GET /cost` - Cost analytics
- `POST /replay` - Create replay sets
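A small typed helper for the read endpoints might look like the sketch below (it assumes the API from this guide serves JSON at http://localhost:8081; the response shapes are not documented here, so they're left generic):

```typescript
// Minimal GET helper for the Lumina API; throws on non-2xx responses.
export async function getJson<T>(
  baseUrl: string,
  path: string,
  fetchFn: typeof fetch = fetch, // injectable for testing
): Promise<T> {
  const res = await fetchFn(`${baseUrl}${path}`);
  if (!res.ok) throw new Error(`GET ${path} failed: ${res.status}`);
  return res.json() as Promise<T>;
}

// Example (requires Lumina running locally):
// const traces = await getJson("http://localhost:8081", "/traces");
// const costs = await getJson("http://localhost:8081", "/cost");
```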
## Common Issues

### Port Already in Use

If you see “port is already allocated”:

```bash
# Check what's using the port
lsof -i :3000  # or :5432, :8080, etc.

# Stop the conflicting service or change Lumina's ports in .env.docker
```
### Services Not Starting
Check Docker resources:
- Docker Desktop → Settings → Resources
- Memory: Set to at least 4GB
- Disk: Ensure 10GB+ available
### Database Connection Errors

Wait for PostgreSQL to be fully ready:

```bash
docker-compose logs postgres | grep "ready to accept connections"
```

If migrations fail:

```bash
# Restart the ingestion service
docker-compose restart ingestion

# Check logs
docker-compose logs ingestion
```
### Dashboard Shows “Failed to Fetch”

Check the API is running:

```bash
curl http://localhost:8081/health
# Should return: {"status":"ok","service":"lumina-api"}
```

Verify `NEXT_PUBLIC_API_URL` in `.env.docker`:

```bash
NEXT_PUBLIC_API_URL=http://localhost:8081
```
### Anthropic API Errors

If you see “Anthropic API key not set”:

- Check your `.env.docker` has the API key
- Restart services:

```bash
docker-compose down
docker-compose up -d
```
## Stopping Lumina

```bash
# Stop all services (keeps data)
docker-compose down

# Stop and remove all data
docker-compose down -v
```
## Data Persistence

Your data is stored in Docker volumes:

- `postgres-data`: All traces, alerts, baselines
- `redis-data`: Cached semantic scores
- `nats-data`: Message queue state
### Backup Your Data

```bash
# Create a backup
docker run --rm \
  -v docker_postgres-data:/data \
  -v $(pwd):/backup \
  alpine tar czf /backup/lumina-backup-$(date +%Y%m%d).tar.gz -C /data .
```
## What’s Next?
- 📖 Architecture Overview - Understand how Lumina works
- 🔌 Integration Guides - Connect your LLM applications
- 🚨 Alert Configuration - Configure cost and quality alerts
- 🔁 Replay Guide - Test changes safely
- ❓ FAQ - Common questions answered
- 🔧 Troubleshooting - Fix common issues
## Need Help?

**Free Forever • All Features Included**

Self-hosted Lumina includes all features with 50k traces/day and 7-day retention for $0. Need more? Upgrade to our managed cloud for unlimited traces and retention. Check out our pricing page.