Install the Lumina SDK for Python applications.
Prerequisites
Python 3.9+
Verify your installation:
python --version # Should show 3.9 or higher
Install SDK
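Install the Lumina SDK from PyPI (package name `lumina-sdk`, the same name used in the Troubleshooting section):

```shell
pip install lumina-sdk
```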
Install LLM Provider SDK
Install your LLM provider’s official SDK:
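For the examples on this page, that is the `anthropic` or `openai` package:

```shell
pip install anthropic   # used in the Quick Start example
pip install openai      # used in the framework examples
```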
Quick Start
Create a simple trace:
import os

from lumina import init_lumina
import anthropic

# Initialize Lumina
lumina = init_lumina({
    "endpoint": "http://localhost:9411/v1/traces",
    "service_name": "my-app",
})

# Initialize LLM provider
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

# Trace an LLM call
response = lumina.trace_llm(
    lambda: client.messages.create(
        model="claude-sonnet-4-5",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello!"}],
    ),
    name="chat-completion",
    system="anthropic",
    prompt="Hello!",
)

print("Trace sent! View at http://localhost:3000/traces")
Run it:
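Assuming you saved the snippet as `quickstart.py` (any filename works):

```shell
python quickstart.py
```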
Check your dashboard at http://localhost:3000/traces to see the trace.
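Note that `trace_llm` receives the LLM call wrapped in a zero-argument lambda rather than an already-computed response. This lets the tracer control when the call executes, so it can record timing and metadata around it. A minimal stand-in illustrating the pattern (a hypothetical sketch, not the real SDK):

```python
import time

def trace_llm(fn, name, **attrs):
    """Toy stand-in for the SDK: run the deferred call inside a timed span."""
    start = time.perf_counter()
    try:
        return fn()  # the wrapped LLM call executes here, inside the span
    finally:
        elapsed = time.perf_counter() - start
        print(f"span={name} duration={elapsed:.4f}s attrs={attrs}")

result = trace_llm(lambda: "hello-response", name="chat-completion", system="demo")
```

If the call were passed eagerly (without the lambda), it would complete before the tracer ever saw it, and the span could not measure its latency.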
Framework Integration
FastAPI
# main.py
import os

from fastapi import FastAPI
from lumina import init_lumina
import openai

app = FastAPI()
client = openai.OpenAI(api_key=os.environ["OPENAI_API_KEY"])
lumina = init_lumina({
    "endpoint": os.environ.get("LUMINA_ENDPOINT", "http://localhost:9411/v1/traces"),
    "service_name": "fastapi-app",
})

@app.post("/api/chat")
async def chat(message: str):
    response = lumina.trace_llm(
        lambda: client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": message}],
        ),
        name="chat-completion",
        system="openai",
        prompt=message,
    )
    return {"content": response.choices[0].message.content}
Flask
# app.py
import os

from flask import Flask, request, jsonify
from lumina import init_lumina
import openai

app = Flask(__name__)
client = openai.OpenAI(api_key=os.environ["OPENAI_API_KEY"])
lumina = init_lumina({
    "endpoint": os.environ.get("LUMINA_ENDPOINT", "http://localhost:9411/v1/traces"),
    "service_name": "flask-app",
})

@app.route("/api/chat", methods=["POST"])
def chat():
    message = request.json["message"]
    response = lumina.trace_llm(
        lambda: client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": message}],
        ),
        name="chat-completion",
        system="openai",
        prompt=message,
    )
    return jsonify({"content": response.choices[0].message.content})
Django
# views.py
import os
import json

from django.http import JsonResponse
from django.views.decorators.http import require_POST
from lumina import init_lumina
import openai

client = openai.OpenAI(api_key=os.environ["OPENAI_API_KEY"])
lumina = init_lumina({
    "endpoint": os.environ.get("LUMINA_ENDPOINT", "http://localhost:9411/v1/traces"),
    "service_name": "django-app",
})

@require_POST
def chat(request):
    message = json.loads(request.body)["message"]
    response = lumina.trace_llm(
        lambda: client.chat.completions.create(
            model="gpt-4",
            messages=[{"role": "user", "content": message}],
        ),
        name="chat-completion",
        system="openai",
        prompt=message,
    )
    return JsonResponse({"content": response.choices[0].message.content})
Environment Variables
Set up environment variables for configuration:
# .env
LUMINA_ENDPOINT=http://localhost:9411/v1/traces
LUMINA_SERVICE_NAME=my-app
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
Load in your application:
import os

from dotenv import load_dotenv
from lumina import init_lumina

load_dotenv()

lumina = init_lumina({
    "endpoint": os.environ["LUMINA_ENDPOINT"],
    "service_name": os.environ["LUMINA_SERVICE_NAME"],
})
Install python-dotenv if needed: pip install python-dotenv
Verification
Verify the SDK is working:
1. Send a test trace:
# test_lumina.py
import asyncio

from lumina import init_lumina

lumina = init_lumina({
    "endpoint": "http://localhost:9411/v1/traces",
    "service_name": "test",
})

def run_test(span):
    span.set_attribute("test", "value")

lumina.trace("test-trace", run_test)

# Flush buffered spans before the script exits
asyncio.run(lumina.flush())
print("✓ Test trace sent")
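Run the script:

```shell
python test_lumina.py
```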
2. Check dashboard:
Open http://localhost:3000/traces and verify the trace appears.
Troubleshooting
Module not found
Error: ModuleNotFoundError: No module named 'lumina'
Solution: Verify the SDK is installed in your active environment:
pip show lumina-sdk    # confirm the package is present in this environment
pip install lumina-sdk # install it if missing
Connection refused
Error: Failed to export spans ... Connection refused
Solution: Verify Lumina is running:
curl http://localhost:9411/health
Start Lumina if not running:
cd Lumina/infra/docker
docker compose up -d
Traces not appearing
Traces are batched and exported in the background. Call flush() before your process exits to ensure all spans are sent:
import asyncio
asyncio.run(lumina.flush())
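For long-running services, one option is to register the flush as an exit hook so it runs even when shutdown paths vary. A hypothetical helper (not part of the SDK), assuming `flush()` is an async coroutine as shown above:

```python
import asyncio
import atexit

def flush_on_exit(tracer):
    """Register an atexit hook that flushes any buffered spans."""
    def _flush():
        asyncio.run(tracer.flush())
    atexit.register(_flush)
    return _flush  # returned so it can also be invoked manually
```

Call `flush_on_exit(lumina)` once at startup, right after `init_lumina`.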
Next Steps
Configuration: Configure the SDK for your environment
Basic Usage: Learn basic tracing patterns
Advanced Usage: Multi-span tracing and async patterns
API Reference: Complete SDK API documentation