Complete reference for @uselumina/sdk.

initLumina(config)

Initialize the Lumina client.

**Parameters:**

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `config.endpoint` | `string` | Yes | OTLP endpoint URL |
| `config.service_name` | `string` | Yes | Service identifier |
| `config.enabled` | `boolean` | No | Enable tracing (default: `true`) |
| `config.resource` | `object` | No | Resource attributes |
**Returns:** a `Lumina` instance.

**Example:**

```javascript
const lumina = initLumina({
  endpoint: 'http://localhost:4318/v1/traces', // default OTLP/HTTP traces endpoint
  service_name: 'my-service',
});
```
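The optional fields default as described in the table above. A minimal sketch of how those defaults might be applied (the `normalizeConfig` helper is hypothetical, not part of the SDK):

```javascript
// Hypothetical helper illustrating the documented defaults:
// `enabled` falls back to true, `resource` to an empty attribute set,
// and the two required fields are checked up front.
function normalizeConfig(config) {
  if (!config.endpoint) throw new Error('config.endpoint is required');
  if (!config.service_name) throw new Error('config.service_name is required');
  return {
    endpoint: config.endpoint,
    service_name: config.service_name,
    enabled: config.enabled ?? true,
    resource: config.resource ?? {},
  };
}

const normalized = normalizeConfig({
  endpoint: 'http://localhost:4318/v1/traces',
  service_name: 'my-service',
});
```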

lumina.traceLLM(fn, options)

Trace an LLM call with automatic attribute extraction.

**Parameters:**

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `fn` | `() => Promise<T>` | Yes | Async function returning the LLM response |
| `options.name` | `string` | No | Span name |
| `options.system` | `string` | No | Provider (`openai`, `anthropic`) |
| `options.prompt` | `string` | No | Input prompt |
| `options.metadata` | `object` | No | Custom attributes |
**Returns:** `Promise<T>`

**Example:**

```javascript
await lumina.traceLLM(
  () => openai.chat.completions.create({ model: 'gpt-4', messages }),
  { name: 'chat', system: 'openai', prompt: userMessage }
);
```
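A wrapper like `traceLLM` typically follows a wrap-and-record pattern: start a span, await the function, record the outcome, and finish the span whether or not the call throws. A self-contained sketch of that pattern (illustrative only, not the SDK's actual implementation; `exported` stands in for a real exporter):

```javascript
// Illustrative wrap-and-record pattern; `exported` collects finished
// spans in place of a real span exporter.
const exported = [];

async function traceAsync(name, fn, attributes = {}) {
  const span = { name, attributes: { ...attributes }, status: 'OK', durationMs: 0 };
  const start = Date.now();
  try {
    return await fn(); // the wrapped call's result passes straight through
  } catch (err) {
    span.status = 'ERROR'; // failures are recorded, then rethrown
    throw err;
  } finally {
    span.durationMs = Date.now() - start;
    exported.push(span); // a real SDK would hand the span to an exporter here
  }
}
```

Because the span is finished in `finally`, timing and status are captured even when the wrapped call rejects.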

lumina.trace(name, fn)

Create a parent span for hierarchical tracing.

**Parameters:**

| Parameter | Type | Required | Description |
| --- | --- | --- | --- |
| `name` | `string` | Yes | Span name |
| `fn` | `(span: Span) => Promise<T>` | Yes | Async function to trace |
**Returns:** `Promise<T>`

**Example:**

```javascript
await lumina.trace('rag_pipeline', async (span) => {
  span.setAttribute('query', query);
  // Your code here
});
```
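Spans started inside the callback nest under the parent. A self-contained sketch of that parent/child bookkeeping, using a hypothetical in-memory tracer (illustrative only, not the SDK's implementation):

```javascript
// Hypothetical in-memory tracer showing how nested trace() calls
// record their parent. `finished` collects spans as they complete.
const finished = [];
let current = null;

async function trace(name, fn) {
  const span = {
    name,
    parent: current ? current.name : null, // link to the enclosing span
    attributes: {},
    setAttribute(key, value) { this.attributes[key] = value; },
  };
  const previous = current;
  current = span; // spans started inside fn() see this span as parent
  try {
    return await fn(span);
  } finally {
    current = previous; // restore the enclosing context
    finished.push(span);
  }
}
```

Child spans finish before their parent, so an exporter receives them innermost-first.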

lumina.flush()

Flush all buffered spans immediately.

**Returns:** `Promise<void>`

lumina.shutdown()

Flush any remaining spans and shut down the SDK.

**Returns:** `Promise<void>`
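In a long-running service it is common to call `shutdown()` on process termination so buffered spans are not lost. A sketch of that pattern (the `lumina` object below is a stand-in that mocks the two methods, so the snippet is self-contained):

```javascript
// Stand-in for a real Lumina instance; in an application this would be
// the object returned by initLumina().
const lumina = {
  shutDown: false,
  async flush() {},
  async shutdown() { this.shutDown = true; },
};

async function onTerminate() {
  // shutdown() flushes remaining spans, then stops the SDK.
  await lumina.shutdown();
}

process.on('SIGTERM', onTerminate);
process.on('SIGINT', onTerminate);
```

In short-lived environments (CLIs, serverless handlers), awaiting `flush()` before returning serves the same purpose without tearing the SDK down.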

Span API

`span.setAttribute(key, value)`

Add an attribute to the span.

```javascript
span.setAttribute('userId', 'user-123');
span.setAttribute('cost', 0.003);
```

`span.addEvent(name, attributes?)`

Add an event to the span.

```javascript
span.addEvent('processing_started');
span.addEvent('checkpoint', { progress: 50 });
```

`span.setStatus(status)`

Set the span status.

```javascript
span.setStatus({ code: 'OK' });
span.setStatus({ code: 'ERROR', message: 'Failed to process' });
```
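The three methods above map to plain operations on span state: attributes are a key/value map, events are an append-only list, and status is a single object. A minimal stand-in with those semantics (illustrative; the real `Span` comes from the SDK):

```javascript
// Minimal span stand-in implementing the three methods above.
function createSpan(name) {
  return {
    name,
    attributes: {},               // key/value pairs from setAttribute
    events: [],                   // appended in order by addEvent
    status: { code: 'UNSET' },    // replaced wholesale by setStatus
    setAttribute(key, value) { this.attributes[key] = value; },
    addEvent(eventName, attributes = {}) {
      this.events.push({ name: eventName, attributes, time: Date.now() });
    },
    setStatus(status) { this.status = status; },
  };
}

const span = createSpan('demo');
span.setAttribute('userId', 'user-123');
span.addEvent('checkpoint', { progress: 50 });
span.setStatus({ code: 'OK' });
```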