Install the Lumina SDK for TypeScript and JavaScript applications.

Prerequisites

Node.js 18+ or Bun 1.0+. Verify your installation:
node --version  # Should show v18 or higher
or
bun --version   # Should show 1.0 or higher

Install SDK

npm install @uselumina/sdk

Install LLM Provider SDK

Install your LLM provider’s official SDK. The Quick Start below uses Anthropic, and the framework examples use OpenAI:
npm install @anthropic-ai/sdk
or
npm install openai

Quick Start

Create a simple trace:
import { initLumina } from '@uselumina/sdk';
import Anthropic from '@anthropic-ai/sdk';

// Initialize Lumina
const lumina = initLumina({
  endpoint: 'http://localhost:9411/v1/traces',
  service_name: 'my-app',
});

// Initialize LLM provider
const anthropic = new Anthropic({
  apiKey: process.env.ANTHROPIC_API_KEY,
});

// Trace an LLM call
async function main() {
  const response = await lumina.traceLLM(
    async () =>
      anthropic.messages.create({
        model: 'claude-sonnet-4-5',
        max_tokens: 1024,
        messages: [{ role: 'user', content: 'Hello!' }],
      }),
    {
      name: 'chat-completion',
      system: 'anthropic',
      prompt: 'Hello!',
    }
  );

  console.log(response);
  console.log('Trace sent! View at http://localhost:3000/traces');
}

main();
Run it:
npx tsx app.ts
Check your dashboard at http://localhost:3000/traces to see the trace.
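Conceptually, traceLLM wraps an async call, times it, and records metadata whether the call succeeds or fails. The following is a simplified, self-contained sketch of that wrap-and-record pattern, not the SDK's actual implementation; the names SpanRecord and traceAsync are hypothetical:

```typescript
// Illustrative stand-in for a traceLLM-style wrapper (hypothetical names).
interface SpanRecord {
  name: string;
  attributes: Record<string, string>;
  durationMs: number;
  error?: string;
}

// Run an async operation, record a span into `sink`, and re-throw failures
// so the caller's error handling still works.
async function traceAsync<T>(
  fn: () => Promise<T>,
  meta: { name: string; attributes?: Record<string, string> },
  sink: SpanRecord[]
): Promise<T> {
  const start = Date.now();
  try {
    const result = await fn();
    sink.push({
      name: meta.name,
      attributes: meta.attributes ?? {},
      durationMs: Date.now() - start,
    });
    return result;
  } catch (err) {
    sink.push({
      name: meta.name,
      attributes: meta.attributes ?? {},
      durationMs: Date.now() - start,
      error: String(err),
    });
    throw err;
  }
}

// Usage: wrap any async call; the span is recorded even on failure.
const spans: SpanRecord[] = [];
const answer = await traceAsync(async () => 'Hello!', { name: 'demo' }, spans);
```

The key design point this illustrates is that the wrapper takes a thunk (`() => Promise<T>`) rather than a promise, so timing starts only when the wrapper invokes it.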

Framework Integration

Next.js

App Router (Recommended):
// app/api/chat/route.ts
import { initLumina } from '@uselumina/sdk';
import { OpenAI } from 'openai';

const openai = new OpenAI();
const lumina = initLumina({
  endpoint: process.env.LUMINA_ENDPOINT,
  service_name: 'nextjs-app',
});

export async function POST(req: Request) {
  const { message } = await req.json();

  const response = await lumina.traceLLM(
    () =>
      openai.chat.completions.create({
        model: 'gpt-4',
        messages: [{ role: 'user', content: message }],
      }),
    {
      name: 'chat-completion',
      system: 'openai',
      prompt: message,
    }
  );

  return Response.json(response);
}
Pages Router:
// pages/api/chat.ts
import type { NextApiRequest, NextApiResponse } from 'next';
import { initLumina } from '@uselumina/sdk';
import { OpenAI } from 'openai';

const openai = new OpenAI();
const lumina = initLumina({
  endpoint: process.env.LUMINA_ENDPOINT,
  service_name: 'nextjs-app',
});

export default async function handler(
  req: NextApiRequest,
  res: NextApiResponse
) {
  const { message } = req.body;

  const response = await lumina.traceLLM(
    () =>
      openai.chat.completions.create({
        model: 'gpt-4',
        messages: [{ role: 'user', content: message }],
      }),
    { name: 'chat', system: 'openai', prompt: message }
  );

  res.json(response);
}

Express

import express from 'express';
import { initLumina } from '@uselumina/sdk';
import { OpenAI } from 'openai';

const app = express();
const openai = new OpenAI();
const lumina = initLumina({
  endpoint: process.env.LUMINA_ENDPOINT,
  service_name: 'express-app',
});

app.post('/api/chat', async (req, res) => {
  const { message } = req.body;

  const response = await lumina.traceLLM(
    () =>
      openai.chat.completions.create({
        model: 'gpt-4',
        messages: [{ role: 'user', content: message }],
      }),
    { name: 'chat', system: 'openai', prompt: message }
  );

  res.json(response);
});

app.listen(3000);

Fastify

import Fastify from 'fastify';
import { initLumina } from '@uselumina/sdk';
import { OpenAI } from 'openai';

const fastify = Fastify();
const openai = new OpenAI();
const lumina = initLumina({
  endpoint: process.env.LUMINA_ENDPOINT,
  service_name: 'fastify-app',
});

fastify.post('/api/chat', async (request, reply) => {
  const { message } = request.body as { message: string };

  const response = await lumina.traceLLM(
    () =>
      openai.chat.completions.create({
        model: 'gpt-4',
        messages: [{ role: 'user', content: message }],
      }),
    { name: 'chat', system: 'openai', prompt: message }
  );

  return response;
});

fastify.listen({ port: 3000 });
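In long-running servers like the ones above, it is worth flushing buffered traces before the process exits. A minimal sketch of that pattern, assuming a client with a flush() method (the Flushable interface and registerShutdownFlush helper here are hypothetical, not part of @uselumina/sdk):

```typescript
// Hypothetical shutdown helper: drain buffered spans on SIGTERM/SIGINT.
interface Flushable {
  flush(): Promise<void>;
}

function registerShutdownFlush(client: Flushable): () => Promise<void> {
  const handler = async () => {
    // Drain any spans still buffered in memory before exiting.
    await client.flush();
  };
  process.once('SIGTERM', handler);
  process.once('SIGINT', handler);
  return handler; // also returned for manual invocation (e.g. serverless)
}

// Usage with a stub client standing in for the real tracing client:
let flushed = false;
const onShutdown = registerShutdownFlush({
  flush: async () => {
    flushed = true;
  },
});
await onShutdown();
```

Registering the handler once at startup, next to the client initialization, keeps shutdown behavior in one place.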

Environment Variables

Set up environment variables for configuration:
# .env
LUMINA_ENDPOINT=http://localhost:9411/v1/traces
SERVICE_NAME=my-app
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
Load in your application:
import { config } from 'dotenv';
config();

const lumina = initLumina({
  endpoint: process.env.LUMINA_ENDPOINT,
  service_name: process.env.SERVICE_NAME,
});
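If a required variable is unset, process.env returns undefined and initLumina receives it silently. A small guard catches this at startup instead; the requireEnv helper below is a hypothetical convenience, not part of @uselumina/sdk:

```typescript
// Hypothetical helper: fail fast when a required environment variable
// is missing, rather than passing undefined into configuration.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === '') {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: set a local-development default, then resolve configuration.
process.env.LUMINA_ENDPOINT ??= 'http://localhost:9411/v1/traces';
const endpoint = requireEnv('LUMINA_ENDPOINT');
```

Failing at startup with a named variable is much easier to diagnose than a connection error surfacing on the first traced request.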

TypeScript Support

The SDK is written in TypeScript with full type definitions. Recommended tsconfig.json:
{
  "compilerOptions": {
    "target": "ES2020",
    "module": "ESNext",
    "moduleResolution": "node",
    "esModuleInterop": true,
    "strict": true
  }
}
Type imports:
import type { Lumina, TraceLLMOptions, Span } from '@uselumina/sdk';

const options: TraceLLMOptions = {
  name: 'chat-completion',
  system: 'openai',
  prompt: 'Hello',
};

Verification

Verify the SDK is working:
1. Send a test trace. Save the following as verify.ts:
import { initLumina } from '@uselumina/sdk';

const lumina = initLumina({
  endpoint: 'http://localhost:9411/v1/traces',
  service_name: 'test',
});

await lumina.trace('test-trace', async (span) => {
  span.setAttribute('test', 'value');
});

await lumina.flush();
console.log('✓ Test trace sent');
Run it:
npx tsx verify.ts
2. Check dashboard: Open http://localhost:3000/traces and verify the trace appears.

Troubleshooting

Module not found

Error: Cannot find module '@uselumina/sdk'
Solution: Reinstall dependencies:
rm -rf node_modules package-lock.json
npm install

Connection refused

Error: ECONNREFUSED localhost:9411
Solution: Verify Lumina is running:
curl http://localhost:9411/health
Start Lumina if not running:
cd Lumina/infra/docker
docker compose up -d

TypeScript errors

Error: TS2307: Cannot find module '@uselumina/sdk'
Solution: Add types to tsconfig.json:
{
  "compilerOptions": {
    "types": ["@uselumina/sdk"]
  }
}

Next Steps