Trace every AI conversation.

Open-source TypeScript. Every message, tool call, and response — captured automatically and streamed to Slack, Postgres, or anywhere.

$ npm install breadcrumb-chat

See what your AI is actually doing.

📡

Every conversation, traced

User messages, assistant responses, tool calls, and errors. Captured automatically with one wrapper function.

💬

Real-time in Slack

Watch AI conversations appear in your team's Slack channel as they happen. No dashboard to check.

📦

Zero config, open source

npm install, add your Slack token, done. MIT licensed TypeScript. No hosted service, no vendor lock-in.

Three lines to full observability.

Breadcrumb wraps your AI SDK calls and streams events to any sink.

1

Create a breadcrumb instance

Configure where traces go — Slack, Postgres, or your own custom sink.

const bc = createBreadcrumb({ sinks: [slackSink({ channel: "#ai-traces" })] });
2

Wrap your AI call

One function wraps streamText or generateText. No changes to your prompts or tools.

const traced = wrapStreamText(streamText, trace);
3

Traces appear automatically

Every user message, tool call, and assistant response flows to your sinks in real time.

trace.end(); // → Slack, Postgres, anywhere

Full example

A complete API route with the Vercel AI SDK.

app.ts
import { createBreadcrumb } from "breadcrumb-chat";
import { slackSink } from "breadcrumb-chat/sinks/slack";
import { wrapStreamText } from "breadcrumb-chat/adapters/ai-sdk";
import { streamText } from "ai";
import { openai } from "@ai-sdk/openai";

const bc = createBreadcrumb({
  sinks: [slackSink({ token: process.env.SLACK_TOKEN, channel: "#ai-traces" })],
});

export async function POST(req: Request) {
  const { messages } = await req.json();

  const trace = await bc.trace({ userId: "u_123" });
  const traced = wrapStreamText(streamText, trace);

  const result = await traced({
    model: openai("gpt-4o"),
    messages,
  });

  result.finishReason.then(() => trace.end());
  return result.toDataStreamResponse();
}

Send traces anywhere.

Sinks are pluggable destinations. Use the built-in ones or write your own.

#

Slack

Real-time

Threaded messages in any channel. See conversations as they happen.

slackSink({ token, channel: "#ai-traces" })
🗄

PostgreSQL

Persistent

Structured storage for traces. Query conversations with SQL.

postgresSink({ client: db })
🧪

Memory

Development

In-memory store for development and testing. No setup required.

memorySink()
🧩

Build your own

Extensible

Implement three async methods to send traces to any destination.

{ onTraceStart, onEvent, onTraceEnd }
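As a sketch of what such a sink might look like — the three method names come from the interface above, but the `trace` and `event` field shapes used here (`traceId`, `type`) are illustrative assumptions, not the library's published types:

```typescript
// Hypothetical trace/event shapes — the real types ship with breadcrumb-chat.
type TraceMeta = { traceId: string; userId?: string };
type TraceEvent = { type: string; data?: unknown };

// Collected log lines, one per lifecycle call.
const lines: string[] = [];

// A minimal custom sink: buffer a line for each lifecycle hook.
const logSink = {
  async onTraceStart(trace: TraceMeta) {
    lines.push(`start ${trace.traceId}`);
  },
  async onEvent(trace: TraceMeta, event: TraceEvent) {
    lines.push(`${trace.traceId} ${event.type}`);
  },
  async onTraceEnd(trace: TraceMeta) {
    lines.push(`end ${trace.traceId}`);
  },
};
```

Pass it like any built-in sink: `createBreadcrumb({ sinks: [logSink] })`.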

Everything your AI does.

Breadcrumb captures every event type in the conversation lifecycle.

👤 User Input The message your user sent
🤖 Assistant Response What the model replied
💭 Reasoning Chain-of-thought and reasoning tokens
🔧 Tool Call Function name and arguments
📎 Tool Result What the tool returned
🚨 Error Failures with context
🏷 Metadata Custom key-value pairs you attach
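In TypeScript terms, an event stream like this is naturally modeled as a discriminated union on `type`. The field names below are illustrative assumptions, not the library's published types:

```typescript
// Assumed (illustrative) shapes for the seven event types above.
type BreadcrumbEvent =
  | { type: "user_input"; text: string }
  | { type: "assistant_response"; text: string }
  | { type: "reasoning"; text: string }
  | { type: "tool_call"; name: string; args: unknown }
  | { type: "tool_result"; name: string; result: unknown }
  | { type: "error"; message: string }
  | { type: "metadata"; pairs: Record<string, string> };

// Narrowing on `type` gives typed access to each variant's fields.
function summarize(ev: BreadcrumbEvent): string {
  switch (ev.type) {
    case "tool_call":
      return `tool_call ${ev.name}`;
    case "error":
      return `error: ${ev.message}`;
    default:
      return ev.type;
  }
}
```

A sink's `onEvent` would then switch on the same discriminant to decide how to render or store each variant.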

Get started in 60 seconds.

Install
$ npm install breadcrumb-chat
Configure a sink
import { createBreadcrumb } from "breadcrumb-chat";
import { slackSink } from "breadcrumb-chat/sinks/slack";

const bc = createBreadcrumb({
  sinks: [slackSink({ token: process.env.SLACK_TOKEN, channel: "#ai-traces" })],
});
Wrap your AI call
const trace = await bc.trace({ userId: "u_123" });
const traced = wrapStreamText(streamText, trace);
const result = await traced({ model: openai("gpt-4o"), messages });