## Horizontal Scaling

Scale the ingestion deployment:

```shell
kubectl scale deployment lumina-ingestion --replicas=10 --namespace lumina
```

Scale the worker deployment:

```shell
kubectl scale deployment lumina-worker --replicas=20 --namespace lumina
```
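The replica counts above are examples. One way to pick a count is to derive it from the per-pod throughput figures in Capacity Planning below; a sketch, where the target throughput is a placeholder and the ~10K traces/sec per-pod figure comes from this page:

```shell
#!/bin/sh
# Sketch: derive an ingestion replica count from a target throughput.
TARGET_TPS=${1:-50000}   # desired ingest rate in traces/sec (placeholder)
PER_POD_TPS=10000        # ~10K traces/sec per ingestion pod (from Capacity Planning)

# Ceiling division so we never under-provision.
REPLICAS=$(( (TARGET_TPS + PER_POD_TPS - 1) / PER_POD_TPS ))

echo "kubectl scale deployment lumina-ingestion --replicas=${REPLICAS} --namespace lumina"
```

Running this with the default 50K traces/sec target prints a scale command for 5 replicas; leave headroom on top of the ceiling for traffic spikes.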
## Autoscaling

Configure a HorizontalPodAutoscaler:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: lumina-ingestion-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: lumina-ingestion
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```
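If ingestion traffic is spiky, the optional `behavior` field of the `autoscaling/v2` API can damp scale-down to avoid flapping. A sketch, added under `spec` of the HPA above (the 300-second window is an assumption to tune, not a Lumina recommendation):

```yaml
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300  # wait 5 min of sustained low load before scaling in
```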
## Database Scaling

Vertical scaling: increase PostgreSQL resources:

```yaml
postgresql:
  resources:
    requests:
      memory: "8Gi"
      cpu: "4"
```
Read replicas: add PostgreSQL read replicas and route read-only queries to them, reducing load on the primary.
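A minimal sketch of enabling read replicas, assuming a Bitnami-style PostgreSQL Helm chart (the `architecture` and `readReplicas` keys are that chart's conventions, not necessarily Lumina's; check your chart's values reference):

```yaml
postgresql:
  architecture: replication   # Bitnami-style key; enables a primary plus replicas
  readReplicas:
    replicaCount: 2           # assumed value; size to your read load
```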
## Capacity Planning

Ingestion throughput:

- 1 pod: ~10K traces/sec
- 10 pods: ~100K traces/sec

Worker throughput:

- 1 worker: ~1K traces/sec processed
- 20 workers: ~20K traces/sec

Storage:

- 1M traces: ~10GB
- 10M traces: ~100GB
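The storage figures above scale linearly, so a retention budget is simple arithmetic; a sketch using the ~10GB-per-1M-traces figure (the daily volume and retention window are placeholders):

```shell
#!/bin/sh
# Sketch: estimate trace storage from this page's ~10GB per 1M traces figure.
TRACES_PER_DAY=5000000   # placeholder daily volume (5M traces/day)
RETENTION_DAYS=30        # placeholder retention window
GB_PER_MILLION=10        # ~10GB per 1M traces (from the table above)

STORAGE_GB=$(( TRACES_PER_DAY / 1000000 * GB_PER_MILLION * RETENTION_DAYS ))

echo "${STORAGE_GB} GB"
```

With these placeholders the estimate is 1500 GB; add headroom for indexes and compaction overhead, which the per-trace figure may not include.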