# Sessions

BuddyClient, startSession, joinRoom, execution modes, and session controls.

Everything about creating and controlling a voice session.
## BuddyClient

Construct once per authenticated user. The client is long-lived — reuse it across multiple calls.

```ts
import { BuddyClient } from '@juspay/breeze-buddy-client-sdk';

const client = new BuddyClient({
  auth: { token },
  resellerId,
  merchantId,
  baseUrl: 'https://clairvoyance.breezelabs.app' // optional
});
```

### ClientOptions
| Property | Type | Required | Description |
|---|---|---|---|
| `auth` | `AuthConfig` | Yes | `{ token: string }` — short-lived JWT issued by your backend |
| `resellerId` | `string` | Yes | Must be one of the reseller IDs authorized in your JWT claims |
| `merchantId` | `string` | Yes | Must be one of the merchant IDs authorized in your JWT claims |
| `baseUrl` | `string` | No | API base URL. Defaults to `https://clairvoyance.breezelabs.app` |
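The "authorized in your JWT claims" requirement can be sketched as a guard run before constructing the client. The decoded-claims shape below (`reseller_ids` / `merchant_ids` as string arrays) is an assumption for illustration — your backend defines the real token format:

```ts
// Hypothetical decoded-claims shape; your backend defines the real one.
type BuddyClaims = { reseller_ids: string[]; merchant_ids: string[] };

// Fail fast before constructing BuddyClient: the chosen combo must be
// authorized by the token's claims, or the API will reject the session.
function assertAuthorized(
  claims: BuddyClaims,
  resellerId: string,
  merchantId: string
): void {
  if (!claims.reseller_ids.includes(resellerId)) {
    throw new Error(`resellerId ${resellerId} is not authorized by this token`);
  }
  if (!claims.merchant_ids.includes(merchantId)) {
    throw new Error(`merchantId ${merchantId} is not authorized by this token`);
  }
}
```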
### Why reseller/merchant IDs?

Think of `reseller_ids` and `merchant_ids` as authorization lists — which IDs your token is allowed to act as. A single token may authorize multiple combos, so the SDK requires you to pick one per client.

## client.startSession(options)
Creates a lead via the API, then auto-connects WebRTC.

```ts
const session = await client.startSession({
  templateId: 'f47ac10b-58cc-4372-a567-0e02b2c3d479',
  payload: { customer_name: 'John' },
  executionMode: 'production',
  on: {
    connected: () => showCallUI(),
    transcript: (e) => renderBubble(e)
  }
});
```

### StartSessionOptions
| Property | Type | Required | Description |
|---|---|---|---|
| `templateId` | `string` | Yes | Template UUID |
| `payload` | `Record<string, unknown>` | No | Template-specific payload |
| `executionMode` | `'production' \| 'test' \| 'stream'` | No | Defaults to `'production'` |
| `requestId` | `string` | No | Unique request ID for idempotency. Auto-generated if omitted |
| `on` | `Partial<SessionEventMap>` | No | Handlers registered before connect, so nothing is missed |
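Because `requestId` makes the call idempotent, a retry wrapper can safely re-attempt a failed start without creating duplicate leads. A sketch, where `startFn` stands in for `(requestId) => client.startSession({ templateId, requestId })` — the key point is generating the ID once and reusing it on every attempt:

```ts
// Any unique string works as a requestId (crypto.randomUUID() is a good
// choice where available); generate it ONCE so the API can deduplicate
// retries of the same logical request.
async function startWithRetry<T>(
  startFn: (requestId: string) => Promise<T>,
  attempts = 3
): Promise<T> {
  const requestId =
    `req-${Date.now().toString(36)}-${Math.random().toString(36).slice(2)}`;
  let lastError: unknown;
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return await startFn(requestId); // same requestId on every attempt
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```

Usage: `const session = await startWithRetry((requestId) => client.startSession({ templateId, requestId }));`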
## joinRoom(options)

Skips BuddyClient entirely and joins a Daily room directly. Zero API calls — no auth, no reseller/merchant IDs needed.

```ts
import { joinRoom } from '@juspay/breeze-buddy-client-sdk';

const session = await joinRoom({
  roomUrl: 'https://mydomain.daily.co/room-xyz',
  token: 'eyJ...',
  on: { connected: () => console.log('joined') }
});
```

### JoinRoomOptions
| Property | Type | Required | Description |
|---|---|---|---|
| `roomUrl` | `string` | Yes | Daily room URL |
| `token` | `string` | Yes | Daily meeting token |
| `on` | `Partial<SessionEventMap>` | No | Initial handlers |
## Execution modes

| Mode | Wire | Pipeline | Use for |
|---|---|---|---|
| `'production'` | `DAILY` | STT → LLM → TTS | Normal conversational flow (default) |
| `'test'` | `DAILY_TEST` | STT → LLM → TTS (sandbox) | Sandbox testing with no telephony side effects |
| `'stream'` | `DAILY_STREAM` | STT → TTS (no LLM) | Deterministic scripted output — compliance, IVR, handoff |
Stream mode is what makes `session.assistantSpeak(text)` bypass the LLM and speak text verbatim. See Making the assistant speak for details.
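A sketch of how stream mode might drive a scripted flow. `SpeakingSession` here is a minimal stand-in interface for the one session method used, not the SDK's full session type:

```ts
// Stand-in for the slice of the session API this sketch needs.
interface SpeakingSession {
  assistantSpeak(text: string): Promise<void>;
}

// In 'stream' mode there is no LLM in the pipeline, so each line is
// spoken verbatim by TTS, in order — useful for compliance scripts or IVR.
async function playScript(
  session: SpeakingSession,
  lines: string[]
): Promise<void> {
  for (const line of lines) {
    await session.assistantSpeak(line);
  }
}
```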
## Session controls

| Method | Description |
|---|---|
| `getState()` | Read-only snapshot of current state |
| `close()` | End the call, release audio, remove listeners, clear transcripts |
| `mute()` / `unmute()` | Mic on/off |
| `setMicEnabled(enabled)` | Explicitly set the mic state |
| `assistantSpeak(text)` | Send text to TTS. Returns `Promise<void>` — see Speaking |
| `sendMessage(msgType, data?)` | Low-level RTVI escape hatch for custom backend handlers |
| `on(event, handler)` / `off(…)` | Subscribe / unsubscribe from any session event |
| `[Symbol.asyncDispose]()` | Alias for `close()`. Enables `await using` on ES2024+ engines |
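On runtimes without `await using`, the same cleanup guarantee can be had with `try`/`finally`. A sketch over a minimal `Closeable` shape (the helper name and shape are illustrative, not part of the SDK):

```ts
interface Closeable {
  close(): void | Promise<void>;
}

// Run `use` against a freshly opened session and guarantee close()
// runs afterwards, even if `use` throws — mirroring what
// `await using session = ...` does on ES2024+ engines.
async function withSession<S extends Closeable, T>(
  open: () => Promise<S>,
  use: (session: S) => Promise<T>
): Promise<T> {
  const session = await open();
  try {
    return await use(session);
  } finally {
    await session.close(); // always release audio and listeners
  }
}
```

Usage: `await withSession(() => client.startSession({ templateId }), async (session) => { /* run the call */ });`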
## Convenience helpers

Shortcuts over the lower-level events. Each returns an `Unsubscribe` function.

| Method | Description |
|---|---|
| `onUserTranscript(handler)` | Only user transcripts. Wraps `session.on('transcript', …)` |
| `onAssistantTranscript(handler)` | Only assistant transcripts. Wraps `session.on('transcript', …)` |
| `onToolCall(handler)` | Only tool-call transcripts. Wraps `session.on('transcript', …)` |
| `onAssistantSpeaking(handler)` | TTS pipeline lifecycle — start, chunk(text), end. Fires in every mode, including stream |
| `onUserSpeaking(handler)` | User VAD lifecycle — start, end. Fires on voice activity (no text) |
**Mental model:** Transcript = text stream (from LLM / STT). Speaking = audio-activity stream (VAD + TTS lifecycle).
See Transcripts for transcript details and Speaking for the onAssistantSpeaking shape.
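The role filtering these helpers perform can be sketched as a small wrapper. The event shape (`role` / `text` fields) is an assumption for illustration, not the SDK's exact transcript type:

```ts
// Hypothetical transcript event shape for illustration only.
type TranscriptEvent = { role: 'user' | 'assistant' | 'tool'; text: string };
type TranscriptHandler = (e: TranscriptEvent) => void;
type Unsubscribe = () => void;

// What a helper like onUserTranscript conceptually does: subscribe to
// the raw 'transcript' stream and forward only events for one role.
function filterByRole(
  subscribe: (h: TranscriptHandler) => Unsubscribe,
  role: TranscriptEvent['role'],
  handler: TranscriptHandler
): Unsubscribe {
  return subscribe((e) => {
    if (e.role === role) handler(e);
  });
}
```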
## SessionState — what getState() returns

```ts
type SessionState = {
  status: ConnectionStatus;
  isMicEnabled: boolean;
  transcripts: TranscriptEntry[];
  assistantAudioTrack: MediaStreamTrack | null;
  userAudioTrack: MediaStreamTrack | null;
  error: string | null;
};
```

`ConnectionStatus` is `'idle' | 'connecting' | 'connected' | 'disconnecting' | 'disconnected' | 'error'`.
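Since `ConnectionStatus` is a closed union, a UI can map it exhaustively — a sketch, with illustrative label strings:

```ts
type ConnectionStatus =
  | 'idle' | 'connecting' | 'connected'
  | 'disconnecting' | 'disconnected' | 'error';

// Exhaustive mapping: the `never` check in the default branch makes
// TypeScript flag any status added to the union but missed here.
function statusLabel(status: ConnectionStatus): string {
  switch (status) {
    case 'idle': return 'Ready';
    case 'connecting': return 'Connecting';
    case 'connected': return 'In call';
    case 'disconnecting': return 'Ending';
    case 'disconnected': return 'Call ended';
    case 'error': return 'Something went wrong';
    default: {
      const unreachable: never = status;
      return unreachable;
    }
  }
}
```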
Drive your whole call UI from one event:

```ts
session.on('state-change', (status) => {
  if (status === 'connected') showCallUI();
  if (status === 'disconnected') showEndedScreen();
  if (status === 'error') showErrorScreen();
});
```