Next.js and AI SDK with Langfuse (AI Observability)

A full‑stack boilerplate showcasing AI observability with Langfuse on top of the Next.js App Router. The chat UI is built with assistant‑ui, and shared components use shadcn/ui. AI SDK v5 orchestrates model calls to OpenAI. OpenTelemetry (OTel) is enabled end‑to‑end, and chat telemetry is batched and forwarded by a background worker to Langfuse for tracing, token/cost analytics, and prompt inspection.
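
Because a Next.js App Router project loads a root instrumentation.ts once at server startup, that is where OTel registration typically lives. A minimal sketch using the @vercel/otel helper; the service name is a placeholder, and the boilerplate's actual exporter/queue wiring may differ:

```ts
// instrumentation.ts — loaded once by Next.js when the server boots
import { registerOTel } from '@vercel/otel';

export function register() {
  // Registers a tracer provider so spans created around the request
  // and the model call are collected end-to-end.
  registerOTel({ serviceName: 'nextjs-ai-langfuse' }); // placeholder name
}
```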

Features

  • Next.js App Router for modern full‑stack development.
  • assistant‑ui for a polished chat experience; shadcn/ui for shared UI.
  • AI SDK v5 as the runtime for server‑side AI orchestration (see the route sketch after this list).
  • OpenAI as the LLM provider (configurable).
  • OpenTelemetry (OTel) enabled for traces/spans across request → model call → response.
  • Langfuse integration for:
    • Traces/spans of chat runs
    • Token and cost usage per request
    • Prompt/response capture and metadata
    • Latency metrics and run statuses
  • Background worker that processes queued telemetry and forwards it to Langfuse (decoupled from request path).
  • TypeScript end‑to‑end.
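
To make the wiring concrete, here is a minimal sketch of what the /api/v1/chat handler could look like with AI SDK v5; the model choice (gpt-4o) and functionId are assumptions, not fixed by the boilerplate:

```ts
// app/api/v1/chat/route.ts — illustrative sketch
import { openai } from '@ai-sdk/openai';
import { convertToModelMessages, streamText, type UIMessage } from 'ai';

export async function POST(req: Request) {
  const { messages }: { messages: UIMessage[] } = await req.json();

  const result = streamText({
    model: openai('gpt-4o'), // provider/model are configurable
    messages: convertToModelMessages(messages),
    // Emits OTel spans for this model call; the background worker
    // later forwards the telemetry to Langfuse.
    experimental_telemetry: { isEnabled: true, functionId: 'chat' },
  });

  // Streams the assistant response back to the assistant-ui chat.
  return result.toUIMessageStreamResponse();
}
```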

Architecture Overview

  1. UI: Chat built with assistant‑ui + shadcn/ui.
  2. API: App Route (/api/v1/chat) invokes AI SDK v5 → OpenAI.
  3. Telemetry: OTel spans created around request & model call; chat events are enqueued.
  4. Worker: Background worker consumes the queue and sends events/spans to Langfuse (see the sketch after this list).
  5. Observability: Inspect traces, tokens, cost, prompts, and durations in Langfuse.
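
A sketch of the worker's forwarding step using the langfuse JS SDK; the ChatEvent shape and its field names are hypothetical stand‑ins for the boilerplate's actual queue payload:

```ts
// worker/forward.ts — sketch; ChatEvent is a hypothetical queue payload
import { Langfuse } from 'langfuse';

interface ChatEvent {
  traceId: string;
  model: string;
  input: string;
  output: string;
  usage: { inputTokens: number; outputTokens: number };
  startTime: Date;
  endTime: Date;
}

// Reads LANGFUSE_SECRET_KEY / LANGFUSE_PUBLIC_KEY / LANGFUSE_BASEURL from env.
const langfuse = new Langfuse();

export async function forwardBatch(events: ChatEvent[]) {
  for (const event of events) {
    const trace = langfuse.trace({ id: event.traceId, name: 'chat' });
    trace.generation({
      name: 'chat-completion',
      model: event.model,
      input: event.input,
      output: event.output,
      usage: { input: event.usage.inputTokens, output: event.usage.outputTokens },
      startTime: event.startTime,
      endTime: event.endTime,
    });
  }
  await langfuse.flushAsync(); // batch-send, off the request path
}
```

Flushing here rather than in the route handler is what keeps observability traffic off the chat request path.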

Use Case

Ideal for teams building AI chat experiences that require production‑grade observability: full tracing, token/cost analytics, and prompt auditing. This boilerplate provides a clean, scalable baseline to monitor and optimize quality, latency, and spend across models and features. 🚀

Boilerplate details

  • Last update: 4 days ago
  • Boilerplate age: 4 days