
useLayercodeAgent Hook

The useLayercodeAgent hook provides a simple way to connect your React app to a Layercode agent, handling audio streaming, playback, and real-time communication.
import { useEffect } from "react";
import { useLayercodeAgent } from "@layercode/react-sdk";

// Example component showing the full hook surface (the component name is illustrative)
export function VoiceAgent() {
  // Connect to a Layercode agent
  const {
    // Methods
    connect,
    disconnect,
    triggerUserTurnStarted,
    triggerUserTurnFinished,
    sendClientResponseText,
    sendClientResponseData,
    setAudioInput,

    // State
    status,
    audioInput,
    userAudioAmplitude,
    agentAudioAmplitude,
  } = useLayercodeAgent({
    agentId: "your-agent-id",
    authorizeSessionEndpoint: "/api/authorize",
    conversationId: "optional-conversation-id", // optional
    metadata: { userId: "user-123" }, // optional
    onAudioInputChanged: (enabled) => {
      console.log("Microphone enabled?", enabled);
    },
    onAgentSpeakingChange: (isSpeaking) => console.log("Agent speaking:", isSpeaking),
    onUserSpeakingChange: (isSpeaking) => console.log("User speaking:", isSpeaking),
    onConnect: ({ conversationId, config }) => {
      console.log("Connected to agent", conversationId);
      console.log("Agent config", config);
    },
    onDisconnect: () => console.log("Disconnected from agent"),
    onError: (error) => console.error("Agent error:", error),
    onDataMessage: (data) => console.log("Received data:", data),
  });

  useEffect(() => {
    connect();

    return () => {
      disconnect();
    };
  }, [connect, disconnect]);

  // ... render your voice agent UI here
  return null;
}
Call connect() to start the session; call disconnect() to cleanly close it when the component unmounts or you no longer need the agent connection.

Hook Options

agentId
string
required
The ID of your Layercode agent.
authorizeSessionEndpoint
string
required
Your backend endpoint that authorizes a client session and returns a client_session_key and conversation_id.
conversationId
string
The conversation ID to resume a previous conversation (optional).
metadata
object
Any metadata included here will be passed along to your backend with all Layercode webhooks for this session.
audioInput
boolean
Whether the browser microphone should be initialized immediately. Defaults to true. Set to false to keep the SDK in text-only mode until you explicitly enable audio input.
audioOutput
boolean
Whether agent audio should start playing immediately. Defaults to true. Set to false to suppress speaker playback until you explicitly enable it (the agent connection still runs in the background).
enableAmplitudeMonitoring
boolean
Whether microphone and speaker amplitude monitoring should run. Defaults to true. Disable this when you start with audioInput: false to avoid unnecessary audio processing. When disabled, amplitude readings remain 0.
onAudioInputChanged
function
Callback fired whenever the audio input state toggles. Receives a boolean (true when the mic is active).
onAudioOutputChanged
function
Callback fired whenever the audio output state toggles. Receives a boolean (true when the agent audio is audible).
onConnect
function
Callback when the connection is established. Receives { conversationId: string | null; config?: AgentConfig }. Use config to inspect the effective agent configuration returned from authorizeSessionEndpoint.
onDisconnect
function
Callback when the connection is closed.
onError
function
Callback when an error occurs. Receives an Error object.
onDataMessage
function
Callback for custom data messages from the server (see response.data events from your backend).
onUserSpeakingChange
function
Callback when VAD detects that the user started or stopped speaking. Receives a boolean.
onAgentSpeakingChange
function
Callback when the agent starts or stops speaking. Receives a boolean.
onMuteStateChange
function
Callback when mute()/unmute() are invoked. Receives a boolean for the new muted state.

Return Values

The useLayercodeAgent hook returns an object with the following properties:

State

status
string
The connection status. One of "initializing", "disconnected", "connecting", "connected", or "error". The hook begins in "initializing" before the client reports a lifecycle status.
userAudioAmplitude
number
Real-time amplitude of the user’s microphone input (0-1). Useful for animating UI when the user is speaking.
agentAudioAmplitude
number
Real-time amplitude of the agent’s audio output (0-1). Useful for animating UI when the agent is speaking (see the sketch after this list).
audioInput
boolean
Current audio input state. false means the SDK has not requested microphone access.
audioOutput
boolean
Current audio output state. false means agent audio playback is muted locally.
userSpeaking
boolean
Whether the user is currently detected as speaking by VAD.
agentSpeaking
boolean
Whether the agent is currently speaking (based on active audio playback).
isMuted
boolean
Whether the local microphone stream is muted.
conversationId
string
The conversation ID the hook is currently connected to, or null when none is active. Useful for resuming or persisting sessions.
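For example, the two amplitude values can drive a simple level meter. A minimal sketch (the component name and styling are illustrative; the amplitudes are passed in from the hook shown at the top of this page):

// Renders two bars that scale with the 0-1 amplitude readings from the hook.
export function LevelMeter({ userAudioAmplitude, agentAudioAmplitude }: { userAudioAmplitude: number; agentAudioAmplitude: number }) {
  return (
    <div>
      <div style={{ width: `${userAudioAmplitude * 100}%`, height: 8, background: "#4ade80" }} />
      <div style={{ width: `${agentAudioAmplitude * 100}%`, height: 8, background: "#60a5fa" }} />
    </div>
  );
}

// Usage, with values returned by useLayercodeAgent:
// <LevelMeter userAudioAmplitude={userAudioAmplitude} agentAudioAmplitude={agentAudioAmplitude} />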

Turn-taking (Push-to-Talk)

Layercode supports both automatic and push-to-talk turn-taking. Automatic versus push-to-talk behavior is configured on the backend via AgentConfig.transcription.trigger and AgentConfig.transcription.can_interrupt; push-to-talk skips the VAD model because you drive turn boundaries manually. For push-to-talk, use the following methods to signal when the user starts and stops speaking (a usage sketch follows the method list):
triggerUserTurnStarted
function
triggerUserTurnStarted(): void Signals that the user has started speaking (for push-to-talk mode). Interrupts any agent audio playback.
triggerUserTurnFinished
function
triggerUserTurnFinished(): void Signals that the user has finished speaking (for push-to-talk mode).
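A hold-to-talk control can map pointer events onto these methods. A minimal sketch (the component name is illustrative; pass in the two trigger functions returned by the hook):

// Hold-to-talk button: starts the user's turn on press and finishes it on release.
export function HoldToTalkButton({ onTurnStart, onTurnEnd }: { onTurnStart: () => void; onTurnEnd: () => void }) {
  return (
    <button
      onPointerDown={onTurnStart} // interrupts agent audio and opens the user's turn
      onPointerUp={onTurnEnd}
      onPointerLeave={onTurnEnd} // safety: end the turn if the pointer leaves mid-press
    >
      Hold to talk
    </button>
  );
}

// Usage:
// <HoldToTalkButton onTurnStart={triggerUserTurnStarted} onTurnEnd={triggerUserTurnFinished} />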

Audio Input Controls

setAudioInput
function
setAudioInput(next: boolean | ((prev: boolean) => boolean)): void Enables or disables microphone capture without recreating the client. Use this to defer the browser permission prompt until the user switches into voice mode.
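For example, assuming setAudioInput and audioInput come from the hook's return value, a voice opt-in toggle might look like this (markup is illustrative):

// Defer the browser microphone permission prompt until the user opts into voice.
<button onClick={() => setAudioInput((prev) => !prev)}>
  {audioInput ? "Disable microphone" : "Enable microphone"}
</button>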

Agent Audio Output Controls

setAudioOutput
function
setAudioOutput(next: boolean | ((prev: boolean) => boolean)): void Enables or disables local playback of agent audio without disconnecting from the session.
Need to hide agent audio temporarily (for example, when playing other media in your UI)? Toggle setAudioOutput(false) while the session stays active, and re-enable with setAudioOutput(true) once you want to hear the agent again. Combine this with the audioOutput state or the onAudioOutputChanged callback to drive your UI controls.
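For example, assuming setAudioOutput and audioOutput come from the hook's return value (markup is illustrative):

// Silence or restore agent playback locally; the session keeps running either way.
<button onClick={() => setAudioOutput((prev) => !prev)}>
  {audioOutput ? "Mute agent audio" : "Unmute agent audio"}
</button>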

Audio / Mic Controls

mute
function
mute(): void Stops sending microphone audio to the server while keeping the connection active.
unmute
function
unmute(): void Resumes sending microphone audio after a local mute.
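For example, assuming mute, unmute, and isMuted come from the hook's return value (markup is illustrative):

// Stop or resume sending microphone audio; the WebSocket connection stays open.
<button onClick={() => (isMuted ? unmute() : mute())}>
  {isMuted ? "Unmute mic" : "Mute mic"}
</button>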

Text messages

Use this method when the user submits a chat-style message instead of speaking.
sendClientResponseText
function
sendClientResponseText(text: string): void Sends a client.response.text to the server and interrupts any agent audio playback. The server emits user.transcript and manages turn boundaries; the client does not send trigger.turn.end here.
sendClientResponseData
function
sendClientResponseData(payload: Record<string, any>): void Sends a JSON-serializable payload to your agent backend without affecting the current turn. The data surfaces as a data webhook event. See docs page: Send JSON data from the client.
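For example, a chat input can forward submitted text to the agent. A minimal sketch (the component name is illustrative; pass in the send function returned by the hook):

import { useState } from "react";

// Chat input: sends typed text into the current session as a text turn.
export function ChatInput({ sendText }: { sendText: (text: string) => void }) {
  const [draft, setDraft] = useState("");

  return (
    <form
      onSubmit={(e) => {
        e.preventDefault();
        const text = draft.trim();
        if (!text) return;
        sendText(text); // interrupts any agent audio and submits the text turn
        setDraft("");
      }}
    >
      <input value={draft} onChange={(e) => setDraft(e.target.value)} placeholder="Type a message" />
      <button type="submit">Send</button>
    </form>
  );
}

// Usage:
// <ChatInput sendText={sendClientResponseText} />
// Elsewhere, structured data can be sent without affecting the current turn:
// sendClientResponseData({ screen: "checkout", step: 2 });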

Notes & Best Practices

  • The hook manages microphone access, audio streaming, and playback automatically.
  • Start text-first experiences with audioInput: false and call setAudioInput(true) when the user explicitly opts into voice. Pair this with enableAmplitudeMonitoring: false to skip microphone metering until voice is enabled (see the sketch after this list).
  • The metadata option allows you to set custom data which is then passed to your backend webhooks for this session (useful for user/session tracking).
  • The conversationId can be used to resume a previous conversation, or omitted to start a new one. The hook will report null until it connects.
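Building on the notes above, a text-first setup might defer the microphone and persist the conversation ID for later resumption. A sketch (the agent ID, endpoint, and storage key are placeholders):

import { useLayercodeAgent } from "@layercode/react-sdk";

// Text-first setup: no mic prompt or amplitude metering until the user opts into voice,
// and the conversation ID is stored so a later mount can resume the same conversation.
export function TextFirstAgent() {
  const { setAudioInput } = useLayercodeAgent({
    agentId: "your-agent-id",
    authorizeSessionEndpoint: "/api/authorize",
    audioInput: false, // defer the browser microphone permission prompt
    enableAmplitudeMonitoring: false, // skip mic/speaker metering while in text-only mode
    onConnect: ({ conversationId }) => {
      // Persist the ID; pass it back via the conversationId option next time to resume.
      if (conversationId) window.localStorage.setItem("conversationId", conversationId);
    },
  });

  return <button onClick={() => setAudioInput(true)}>Enable voice</button>;
}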

Authorizing Sessions

To connect a client (browser) to your Layercode voice agent, you must first authorize the session. The SDK automatically sends a POST request to the path passed in the authorizeSessionEndpoint option. In this endpoint, you call the Layercode REST API to generate a client_session_key and a conversation_id (if it’s a new conversation).
If your backend is on a different domain, set authorizeSessionEndpoint to the full URL (e.g., https://your-backend.com/api/authorize).
Why is this required? Your Layercode API key should never be exposed to the frontend. Instead, your backend acts as a secure proxy: it receives the frontend’s request, calls the Layercode authorization API using your secret API key, and returns the client_session_key to the frontend. This also lets you authenticate your user and set any additional metadata you want passed to your backend webhook.
How it works:
  1. Frontend: The SDK automatically sends a POST request to your authorizeSessionEndpoint with a request body.
  2. Your Backend: Your backend receives this request, then makes a POST request to the Layercode REST API /v1/agents/web/authorize_session endpoint, including your LAYERCODE_API_KEY as a Bearer token in the headers.
  3. Layercode: Layercode responds with a client_session_key (and a conversation_id), which your backend returns to the frontend.
  4. Frontend: The SDK uses the client_session_key to establish a secure WebSocket connection to Layercode.
Example backend authorization endpoint code:
import { NextResponse } from "next/server";

export const dynamic = "force-dynamic";

export const POST = async (request: Request) => {
  // Here you could do any user authorization checks you need for your app
  const endpoint = "https://api.layercode.com/v1/agents/web/authorize_session";
  const apiKey = process.env.LAYERCODE_API_KEY;
  if (!apiKey) {
    return NextResponse.json({ error: "LAYERCODE_API_KEY is not set." }, { status: 500 });
  }
  const requestBody = await request.json();
  if (!requestBody || !requestBody.agent_id) {
    return NextResponse.json({ error: "Missing agent_id in request body." }, { status: 400 });
  }
  try {
    const response = await fetch(endpoint, {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify(requestBody),
    });
    if (!response.ok) {
      const text = await response.text();
      throw new Error(text || response.statusText);
    }
    return NextResponse.json(await response.json());
  } catch (error: any) {
    console.error("Layercode authorize session response error:", error.message);
    return NextResponse.json({ error: error.message }, { status: 500 });
  }
};

Custom Authorization

useLayercodeAgent exposes the same authorizeSessionRequest option as the vanilla SDK. Provide this function to inject custom headers, cookies, or a different HTTP client when calling your backend.
import { useMemo } from "react";
import { useLayercodeAgent } from "@layercode/react-sdk";

export function AgentWidget() {
  const authorizeSessionRequest = useMemo(
    () => async ({ url, body }) => {
      const response = await fetch(url, {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          "X-Custom-Header": "my-header",
        },
        body: JSON.stringify(body),
      });

      if (!response.ok) {
        throw new Error(`Authorization failed: ${response.statusText}`);
      }

      return response;
    },
    [],
  );

  const { connect, disconnect, status } = useLayercodeAgent({
    agentId: "agent_123",
    authorizeSessionEndpoint: "https://example.com/authorize-session",
    authorizeSessionRequest,
  });

  // ... render your voice agent UI here
  return null;
}
If authorizeSessionRequest is not supplied, the hook defaults to a standard fetch call that POSTs the JSON body to authorizeSessionEndpoint using Content-Type: application/json.

Request payload

The SDK sends these fields in the authorization request body:
  • agent_id – ID of the agent to connect.
  • metadata – metadata supplied when creating the hook.
  • sdk_version – version string of the React SDK (for example, "2.2.1").
  • conversation_id – present only when reconnecting to an existing conversation.
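For example, a typical request body looks like this (values are placeholders; conversation_id appears only when resuming):

{
  "agent_id": "your-agent-id",
  "metadata": { "userId": "user-123" },
  "sdk_version": "2.2.1",
  "conversation_id": "conv_456"
}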

Response payload

Your endpoint must respond with JSON containing these fields:
response
{
  "client_session_key": "cs_abc123",
  "conversation_id": "conv_456"
}

Troubleshooting

AudioWorklet InvalidStateError on first connect

If the browser complains that AudioWorklet does not have a valid AudioWorkletGlobalScope, the initial connect() probably ran before the user interacted with the page. Browsers block new AudioContext instances (and therefore audioWorklet.addModule()) until they detect a click, tap, or key press. To resolve it:
  • Trigger the first connect() from a user gesture, e.g. a “Start voice agent” button that awaits connect() (see the sketch below).
  • Keep disconnect() inside an effect cleanup so teardown still runs automatically.
  • Expect the issue mostly during rapid development reloads or in React Strict Mode, where components mount twice. In production, users typically gesture before launching the agent, so reconnects after the initial gesture are safe.
Once the worklet loads successfully, you can auto-reconnect during the same session without another gesture. The requirement only applies to the very first call.
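For example, a start button that performs the first connect() from a click avoids the restriction entirely. A sketch (the component name and agent ID are placeholders):

import { useEffect } from "react";
import { useLayercodeAgent } from "@layercode/react-sdk";

// Gesture-gated connect: the first connect() runs from a click, while disconnect()
// still runs automatically in the effect cleanup when the component unmounts.
export function StartAgentButton() {
  const { connect, disconnect, status } = useLayercodeAgent({
    agentId: "your-agent-id",
    authorizeSessionEndpoint: "/api/authorize",
  });

  useEffect(() => () => disconnect(), [disconnect]);

  return (
    <button onClick={() => connect()} disabled={status === "connecting" || status === "connected"}>
      Start voice agent
    </button>
  );
}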