RTVI events
Full catalog of events emitted by a Breeze Buddy Daily session, plus client → server messages like `tts-speak`.
Breeze Buddy streams the entire session — transport state, user speech, bot speech, LLM tokens, function calls, pipeline metrics — to the browser over the Pipecat RTVI protocol. The backend uses Pipecat’s RTVIObserver + RTVIProcessor, so every standard RTVI event reaches the client, and Breeze Buddy adds four custom server messages on top.
Enabling events
RTVI events are gated by a single environment flag:
```bash
ENABLE_BREEZE_BUDDY_DAILY_EVENTS=true
```

Events dispatch automatically for every Daily session (both `DAILY` agent mode and `DAILY_STREAM` stream mode). Register listeners via the `callbacks` object on `PipecatClient` — no per-event opt-in is required.
```typescript
import { PipecatClient } from '@pipecat-ai/client-js';
import { DailyTransport } from '@pipecat-ai/daily-transport';

const client = new PipecatClient({
  transport: new DailyTransport(),
  callbacks: { onBotConnected: () => console.log('bot connected') }
});

await client.initDevices();
await client.startBotAndConnect({
  endpoint: `${baseUrl}/agent/voice/breeze-buddy/connect`,
  headers: new Headers({ Authorization: `Bearer ${apiToken}` }),
  requestData: { lead_id: leadId }
});
```

Event binding styles
The examples below use the `client.on(event, handler)` emitter style for brevity. In production code you’ll usually register all callbacks via the `callbacks: { onXxx: ... }` object on the constructor (see the Web SDK setup guide). Both styles work and can be mixed.
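For comparison, the constructor-callbacks style might look like this. A sketch only: the handler payload shapes shown are assumptions, and the object would be passed as `callbacks` when constructing `PipecatClient` as in the setup snippet above.

```typescript
// Constructor-style registration: one object, one onXxx entry per event.
// Payload shapes here are assumptions for illustration.
const callbacks = {
  onBotReady: () => console.log('pipeline ready'),
  onUserTranscript: (d: { text: string; final: boolean }) => {
    if (d.final) console.log('user said:', d.text);
  },
  onBotTtsText: (d: { text: string }) => console.log('bot:', d.text),
};
// new PipecatClient({ transport: new DailyTransport(), callbacks });
```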
Mode availability at a glance
Two execution modes serve a Daily session; not every event fires in both.
| Event family | Agent mode (DAILY / DAILY_TEST) | Stream mode (DAILY_STREAM) |
|---|---|---|
| Connection, transport | ✓ | ✓ |
| User speech + transcription | ✓ | ✓ |
| Bot TTS (`onBotTts*`, `onBotStartedSpeaking`) | ✓ | ✓ |
| Bot LLM (`onBotLlm*`) | ✓ | — (no LLM in stream mode) |
| Function calls (`onLLMFunctionCall*`) | ✓ | — |
| Custom server messages (`bot-ready`, `conversation-start`/`end`, `pipeline-error`) | ✓ | ✓ |
Connection & transport
| Event | When it fires | Payload |
|---|---|---|
| `onConnected` | Client connects to the Daily room. | — |
| `onDisconnected` | Transport tears down — clean disconnect or network drop. | — |
| `onTransportStateChanged` | Transport moves through `authenticating` → `connecting` → `connected` → `disconnected`. | `{ state: TransportState }` |
| `onError` | Any unrecoverable RTVI-level error. | `{ type: string, message: string }` |
| `onMessageError` | Server rejected a client message. | `{ type: string, data?: any }` |
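A common use of `onTransportStateChanged` is driving a connection badge. A minimal sketch, assuming the state strings from the table above; `setBadge` is a hypothetical UI helper.

```typescript
// Map transport states to a user-facing status label.
function statusLabel(state: string): string {
  switch (state) {
    case 'authenticating':
    case 'connecting':
      return 'Connecting…';
    case 'connected':
      return 'Live';
    case 'disconnected':
      return 'Offline';
    default:
      return state; // pass through any state not listed above
  }
}

// client.on('transportStateChanged', (d) => setBadge(statusLabel(d.state)));
```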
Participants
| Event | When it fires | Payload |
|---|---|---|
| `onParticipantJoined` | A participant (user or bot) joins the room. | `{ id, name, local }` |
| `onParticipantLeft` | A participant leaves the room. | `{ id, reason? }` |
Bot lifecycle
| Event | When it fires | Payload |
|---|---|---|
| `onBotStarted` | Backend bot subprocess launched. | — |
| `onBotConnected` | Bot has joined the Daily room. | `{ id }` |
| `onBotReady` | Pipeline is fully assembled and ready to process audio. This is the “start talking” signal. | `{ version, about }` |
| `onBotDisconnected` | Bot left the room (call ended, idle timeout, or crash). | `{ id, reason? }` |
What to gate on `onBotReady`
Wait for `onBotReady` before showing the “speak now” UI. Microphone capture starts on connect, but transcription won’t route to an active pipeline until the bot is ready.
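One way to implement that gate, derived from both connection and readiness state. A sketch only; `setPromptVisible` is a hypothetical UI helper.

```typescript
// Show the "speak now" prompt only when connected AND the pipeline is ready.
function canShowSpeakPrompt(connected: boolean, botReady: boolean): boolean {
  return connected && botReady;
}

let connected = false;
let ready = false;
// client.on('connected', () => { connected = true; setPromptVisible(canShowSpeakPrompt(connected, ready)); });
// client.on('botReady', () => { ready = true; setPromptVisible(canShowSpeakPrompt(connected, ready)); });
// client.on('disconnected', () => { connected = ready = false; setPromptVisible(false); });
```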
User speech
| Event | When it fires | Payload |
|---|---|---|
| `onUserStartedSpeaking` | VAD detects the user started speaking. | — |
| `onUserStoppedSpeaking` | VAD detects the user stopped. | — |
| `onUserTranscript` | STT emits an interim (partial) or final transcription. Check `data.final`. | `{ text: string, final: boolean, timestamp: number, user_id: string }` |
| `onUserMuteStarted` | User muted their microphone client-side. | — |
| `onUserMuteStopped` | User unmuted. | — |
```typescript
let interim = '';
client.on('userTranscript', (d) => {
  if (d.final) {
    appendToTranscript('user', d.text);
    interim = '';
  } else {
    interim = d.text;
    renderInterim(interim);
  }
});
```

Bot speech (TTS)
| Event | When it fires | Payload |
|---|---|---|
| `onBotStartedSpeaking` | Bot audio begins playing in the room. | — |
| `onBotStoppedSpeaking` | Bot audio ended. | — |
| `onBotTtsStarted` | TTS engine began synthesizing a new utterance. | — |
| `onBotTtsText` | TTS text chunk — what the bot is actually saying. Emitted as text streams through TTS. | `{ text: string }` |
| `onBotTtsStopped` | TTS finished synthesizing this utterance. | — |
| `onBotOutput` | Aggregated bot text for an entire utterance, with a `spoken` flag. | `{ text: string, spoken: boolean }` |
```typescript
client.on('botTtsText', (d) => appendToStream('bot', d.text));
client.on('botStoppedSpeaking', () => finalizeStream('bot'));
```

Bot LLM (agent mode only)
| Event | When it fires | Payload |
|---|---|---|
| `onBotLlmStarted` | LLM request sent. | — |
| `onBotLlmText` | LLM streamed a token. | `{ text: string }` |
| `onBotLlmStopped` | LLM finished this response. | — |
| `onBotLlmSearchResponse` | LLM tool-call response (grounded search). | `{ response: any }` |
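A typical pattern is buffering streamed tokens into one reply between `onBotLlmStarted` and `onBotLlmStopped`. A sketch; `showReply` is a hypothetical UI helper.

```typescript
// Accumulate streamed LLM tokens into a single response string.
class LlmBuffer {
  private text = '';
  start() { this.text = ''; }                   // reset on onBotLlmStarted
  append(token: string) { this.text += token; } // one call per onBotLlmText
  stop(): string { return this.text; }          // full reply on onBotLlmStopped
}

const llm = new LlmBuffer();
// client.on('botLlmStarted', () => llm.start());
// client.on('botLlmText', (d) => llm.append(d.text));
// client.on('botLlmStopped', () => showReply(llm.stop()));
```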
Function calls (agent mode only)
| Event | When it fires | Payload |
|---|---|---|
| `onLLMFunctionCallStarted` | LLM decided to call a template function. | `{ function_name: string }` |
| `onLLMFunctionCallInProgress` | Arguments being extracted. | `{ function_name: string, arguments: object }` |
| `onLLMFunctionCallStopped` | Function call complete; transition (if any) applied. | `{ function_name: string, arguments: object, transition_to: string \| null }` |
```typescript
client.on('llmFunctionCallStopped', (data) => {
  if (data.function_name === 'appointment_confirmed') {
    showConfirmation(data.arguments.date, data.arguments.time);
  }
});
```

Audio levels & metrics
| Event | When it fires | Payload |
|---|---|---|
| `onLocalAudioLevel` | Local mic volume sample. | `{ level: number }` (0–1) |
| `onRemoteAudioLevel` | Remote audio volume sample. | `{ participant_id, level }` |
| `onMetrics` | Pipeline-level metrics after each turn. | `{ stt_latency_ms, llm_ttfb_ms, tts_ttfb_ms, total_latency_ms, llm_tokens? }` |
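To smooth per-turn jitter, a rolling average over recent `total_latency_ms` samples works well. A sketch; the window size is arbitrary and `updateGauge` is a hypothetical UI helper.

```typescript
// Average the most recent N latency samples (N = windowSize).
function rollingAvg(samples: number[], windowSize = 10): number {
  const recent = samples.slice(-windowSize);
  return recent.reduce((a, b) => a + b, 0) / (recent.length || 1);
}

const latencies: number[] = [];
// client.on('metrics', (m) => {
//   latencies.push(m.total_latency_ms);
//   updateGauge(rollingAvg(latencies));
// });
```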
Device management
Standard Daily device events reach the client unchanged: `onAvailableMicsUpdated`, `onAvailableSpeakersUpdated`, `onAvailableCamsUpdated`, `onMicUpdated`, `onSpeakerUpdated`, `onCamUpdated`, `onDeviceError`, `onTrackStarted`, `onTrackStopped`. Use them to build device pickers and mic-level visualizers; see the Daily SDK docs for payloads.
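For a device picker, you typically preselect one mic from the `onAvailableMicsUpdated` list. A sketch under the assumption that entries are `MediaDeviceInfo`-like objects with `deviceId` and `label`; preferring the system `default` device is a convention, not something the event guarantees.

```typescript
// Pick the mic to preselect: the system default if present, else the first.
interface DeviceInfoLike { deviceId: string; label: string }

function pickDefaultMic(mics: DeviceInfoLike[]): DeviceInfoLike | undefined {
  return mics.find((m) => m.deviceId === 'default') ?? mics[0];
}

// client.on('availableMicsUpdated', (d) => renderMicPicker(d.mics, pickDefaultMic(d.mics)));
```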
Custom server messages
Breeze Buddy sends four application-level messages on top of RTVI. They arrive as `serverMessage` events with a `type` discriminator field.
| Message type | When it fires | Payload |
|---|---|---|
| `bot-ready` | Pipeline assembly finished and client-ready acknowledged. (Parallel to `onBotReady`; useful if you want to read extra metadata.) | `{ version?, about? }` |
| `conversation-start` | Client connected and the conversation context is active. | `{}` |
| `conversation-end` | Session concluded. | `{ reason: "client_disconnected" \| "idle_timeout" }` |
| `pipeline-error` | A pipeline processor failed irrecoverably. | `{ processor: string, error: string }` |
```typescript
client.on('serverMessage', (msg) => {
  switch (msg.type) {
    case 'conversation-end':
      toast(`Call ended: ${msg.data.reason}`);
      break;
    case 'pipeline-error':
      reportError(msg.data.processor, msg.data.error);
      break;
  }
});
```

Client-to-server messages
The client can push messages back to the bot via `client.sendClientMessage(type, data)`. Today one type is wired end-to-end: `tts-speak`, available in `DAILY_STREAM` mode.
`tts-speak` — client-driven bot utterance
In stream mode (no LLM, no template flow), the client decides what the bot says. Send a `tts-speak` message and the backend queues a `TTSSpeakFrame` into the pipeline; it plays through the normal TTS → transport path and fires the `onBotTts*` / `onBotStartedSpeaking` events.
```typescript
// Stream mode only — requires execution_mode: "DAILY_STREAM" on the lead
await client.sendClientMessage('tts-speak', {
  text: 'Hello! How can I help you today?'
});
```

| Field | Type | Required | Notes |
|---|---|---|---|
| `type` | string | Yes | Must be exactly `"tts-speak"`. |
| `data.text` | string | Yes | Utterance to speak. Empty or non-string values are silently ignored. |
Behaviour
- Mode gate: only handled when the lead’s `execution_mode` is `DAILY_STREAM`. Full agent modes do not expose this path.
- Queued, not interrupting: the utterance appends to the FIFO frame queue — if the bot is already speaking, the new text plays after the current utterance finishes. For barge-in-style interruption, have the user speak; VAD handles it.
- Max length: text over 2000 characters is silently truncated.
- Auth: no per-message auth — any client connected to the room can trigger `tts-speak`. Treat room access as the security boundary.
- Observability: each utterance fires the normal `onBotTtsStarted`, `onBotTtsText`, `onBotTtsStopped`, `onBotStartedSpeaking`, `onBotStoppedSpeaking`, and `onBotOutput` events — you can reflect spoken text in your UI from the same listeners you already have.
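Because the backend ignores empty or non-string text and silently truncates past 2000 characters, it can help to validate client-side so nothing disappears without a trace. A sketch encoding the limits stated above:

```typescript
// Mirror the backend's tts-speak rules client-side: reject text that would
// be silently ignored, and truncate explicitly rather than silently.
const TTS_MAX_CHARS = 2000;

function prepareTtsText(text: unknown): string | null {
  if (typeof text !== 'string' || text.trim() === '') return null; // backend would ignore this
  return text.length > TTS_MAX_CHARS ? text.slice(0, TTS_MAX_CHARS) : text;
}

// const safe = prepareTtsText(input);
// if (safe) await client.sendClientMessage('tts-speak', { text: safe });
// else console.warn('tts-speak skipped: empty or non-string text');
```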
When to reach for stream mode
Use `DAILY_STREAM` when your app already decides what to say (e.g. a custom agent, a scripted demo, a human-in-the-loop console) and you just need Breeze Buddy to run STT + TTS + transcription capture. For LLM-driven conversations, use `DAILY` with a template.
Error handling
Pipeline and transport errors arrive on two channels — choose based on which you want to surface.
| Source event | What triggers it | Recommended action |
|---|---|---|
| `onError` | RTVI-level error (usually transport). | Show retry UI; call `client.disconnect()` and reconnect. |
| `onMessageError` | Server rejected a client message. | Log; no user-facing action unless you sent the message. |
| `serverMessage` with `{ type: 'pipeline-error' }` | A processor inside the pipeline failed. | Report and reconnect. |
```typescript
client.on('error', (e) => {
  console.error('[rtvi]', e.type, e.message);
  if (e.type === 'connection-failed') showRetry();
});
client.on('serverMessage', (m) => {
  if (m.type === 'pipeline-error') {
    reportError(m.data.processor, m.data.error);
  }
});
```

Patterns
```typescript
type State = 'idle' | 'listening' | 'thinking' | 'speaking';
let state: State = 'idle';

client.on('userStartedSpeaking', () => (state = 'listening'));
client.on('userTranscript', (d) => { if (d.final) state = 'thinking'; });
client.on('botStartedSpeaking', () => (state = 'speaking'));
client.on('botStoppedSpeaking', () => (state = 'idle'));

client.on('metrics', (m) => trackLatency(m.total_latency_ms));
```

Clean up listeners
Always call `client.off(name, handler)` (or `client.disconnect()`, which tears down all listeners) on component unmount. Leaks here turn into phantom updates on stale components.
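One way to make that teardown hard to forget is a small registry that records every handler it attaches and detaches them all in one call. A sketch, assuming the client's `on`/`off` signatures match the emitter style used throughout this page:

```typescript
type AnyHandler = (...args: any[]) => void;

interface EmitterLike {
  on: (name: string, handler: AnyHandler) => void;
  off: (name: string, handler: AnyHandler) => void;
}

// Wrap the client so every registration is remembered; dispose() detaches all.
function makeListenerBag(client: EmitterLike) {
  const bag: Array<[string, AnyHandler]> = [];
  return {
    on(name: string, handler: AnyHandler) {
      client.on(name, handler);
      bag.push([name, handler]);
    },
    dispose() {
      for (const [name, handler] of bag) client.off(name, handler);
      bag.length = 0; // safe to call dispose() more than once
    },
  };
}

// const listeners = makeListenerBag(client);
// listeners.on('botReady', onReady);
// listeners.on('userTranscript', onTranscript);
// ...on unmount:
// listeners.dispose();
```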