Track individual LLM calls with automatic attribute extraction.

Basic Usage

const response = await lumina.traceLLM(
  () =>
    openai.chat.completions.create({
      model: 'gpt-4',
      messages: [{ role: 'user', content: 'Hello!' }],
    }),
  {
    name: 'chat-completion',
    system: 'openai',
    prompt: 'Hello!',
  }
);
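To make the wrapper pattern concrete, here is a minimal self-contained sketch of what a traceLLM-style wrapper does: run the provided function, record timing and status, and pass the result through unchanged. The traceLLMSketch name, TraceOptions type, and stubbed LLM call are illustrative assumptions, not Lumina's actual implementation.

```typescript
type TraceOptions = { name: string; system?: string; prompt?: string };

// Hypothetical sketch: time the wrapped call, log status, return its result.
async function traceLLMSketch<T>(
  fn: () => Promise<T>,
  opts: TraceOptions
): Promise<T> {
  const start = Date.now();
  try {
    const result = await fn();
    console.log(`[${opts.name}] ok in ${Date.now() - start}ms`);
    return result;
  } catch (err) {
    console.log(`[${opts.name}] error in ${Date.now() - start}ms`);
    throw err; // errors propagate to the caller after being recorded
  }
}

// Usage with a stubbed LLM call in place of a real SDK request:
const reply = await traceLLMSketch(
  async () => ({ text: 'Hi there!' }),
  { name: 'chat-completion', system: 'openai', prompt: 'Hello!' }
);
console.log(reply.text); // → Hi there!
```

Because the wrapper returns the wrapped function's value, the response can be used exactly as if the call had been made directly.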

Automatic Attributes

For each traced call, Lumina automatically extracts:
  • Model name
  • Token counts (prompt, completion)
  • Latency (milliseconds)
  • Cost (USD)
  • Status (success/error)
  • Response text
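Taken together, the extracted attributes for one call might look like the record below. The interface and field names here are a hypothetical illustration of that shape, not Lumina's actual schema.

```typescript
// Illustrative shape of the attributes extracted per traced call.
interface LLMCallAttributes {
  model: string;                  // model name, e.g. 'gpt-4'
  promptTokens: number;           // token count of the prompt
  completionTokens: number;       // token count of the completion
  latencyMs: number;              // latency in milliseconds
  costUsd: number;                // estimated cost in USD
  status: 'success' | 'error';    // outcome of the call
  responseText: string;           // text of the response
}

// Example values for a single successful call:
const example: LLMCallAttributes = {
  model: 'gpt-4',
  promptTokens: 9,
  completionTokens: 12,
  latencyMs: 840,
  costUsd: 0.00099,
  status: 'success',
  responseText: 'Hello! How can I help?',
};
```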

Custom Metadata

Attach your own attributes through the metadata option:
await lumina.traceLLM(
  () => llm.generate(prompt),
  {
    name: 'chat',
    metadata: {
      userId: 'user-123',
      sessionId: 'session-456',
      feature: 'customer-support',
    },
  }
);

Next Steps

Multi-Span Tracing

Track complex workflows with hierarchical spans