Lumina automatically calculates costs for LLM calls.

Automatic Calculation

Costs are calculated based on:
  1. Model name
  2. Token counts
  3. Provider pricing
No configuration required for supported models.
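As a rough sketch of how those three inputs combine, a per-call cost can be derived by looking up the model's per-token rates and multiplying by the token counts. The pricing table, rates, and function names below are illustrative assumptions, not Lumina's actual internals, and real provider pricing changes over time.

```typescript
type Pricing = { prompt: number; completion: number }; // USD per token

// Hypothetical rate table; values are examples only.
const PRICING: Record<string, Pricing> = {
  "gpt-4": { prompt: 0.00003, completion: 0.00006 },
  "claude-3-5-sonnet": { prompt: 0.000003, completion: 0.000015 },
};

function estimateCost(
  model: string,
  promptTokens: number,
  completionTokens: number
): number | undefined {
  const rates = PRICING[model];
  if (!rates) return undefined; // unsupported model: no auto-calculation
  return promptTokens * rates.prompt + completionTokens * rates.completion;
}
```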

Supported Providers

Provider     Models                         Auto-Calculation
OpenAI       GPT-4, GPT-3.5, Turbo          Yes
Anthropic    Claude 3.x, 3.5, Sonnet 4.5    Yes

Cost Analytics

View costs in the dashboard:
  • Per service
  • Per model
  • Per user
  • Over time

Cost Alerts

Set up automatic alerts:
{
  service: "chat-api",
  costThreshold: 2.0,  // 200% increase
  window: "1h"
}
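One plausible reading of this config, sketched below under that assumption, is that the alert fires when cost in the current window has increased by at least `costThreshold` (here 2.0, i.e. 200%) over a baseline window. The `CostAlert` type and `shouldAlert` helper are hypothetical, not part of Lumina's API.

```typescript
interface CostAlert {
  service: string;
  costThreshold: number; // relative increase that triggers the alert
  window: string;        // e.g. "1h"
}

// Assumed semantics: compare the current window's cost against a
// baseline window and fire once the relative increase reaches the threshold.
function shouldAlert(
  alert: CostAlert,
  baselineCost: number,
  currentCost: number
): boolean {
  if (baselineCost === 0) return currentCost > 0; // any spend from zero is a spike
  const increase = (currentCost - baselineCost) / baselineCost;
  return increase >= alert.costThreshold;
}
```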

Manual Cost Specification

For unsupported models, pass per-token rates via metadata:
await lumina.traceLLM(
  () => customLLM.generate(prompt),
  {
    metadata: {
      cost_per_prompt_token: 0.000003,     // USD per prompt token
      cost_per_completion_token: 0.000015, // USD per completion token
    },
  }
);
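For illustration, the manual rates above would combine with the call's token counts like this; `TokenUsage` and `computeManualCost` are hypothetical names, not part of Lumina's API.

```typescript
interface TokenUsage {
  promptTokens: number;
  completionTokens: number;
}

// Multiply each token count by its manually supplied per-token rate (USD).
function computeManualCost(
  usage: TokenUsage,
  costPerPromptToken: number,
  costPerCompletionToken: number
): number {
  return (
    usage.promptTokens * costPerPromptToken +
    usage.completionTokens * costPerCompletionToken
  );
}
```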