Layercode delivers conversation updates to your backend through HTTPS webhooks. Each time a user joins, speaks, or finishes a session, the voice pipeline posts JSON to the webhook URL configured on your agent. In reply, your backend can stream text back with Server-Sent Events (SSE), and Layercode will use a text-to-speech model to return voice to your user. In short: we tell your backend, in text, what the user said, and your backend tells Layercode, in text, what to speak back to the user.
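As a sketch of that exchange, the webhook body might look like the TypeScript type below. Only the `text` field and the event names come from this guide; everything else about the schema is an assumption — see the webhook event reference for the authoritative shape.

```typescript
// Sketch of the webhook payload this guide relies on. Only `text` and the
// event names below appear in this guide; other fields are assumptions —
// consult the webhook event reference for the authoritative schema.
type WebhookRequest = {
  type: 'message' | 'session.start' | 'session.end' | 'session.update';
  text?: string; // the user's transcribed speech, present on `message` events
};

// Narrow an unknown JSON body to the fields used in this guide.
function parseWebhook(body: unknown): WebhookRequest | null {
  if (typeof body !== 'object' || body === null) return null;
  const { type, text } = body as Record<string, unknown>;
  const kinds = ['message', 'session.start', 'session.end', 'session.update'];
  if (typeof type !== 'string' || !kinds.includes(type)) return null;
  return { type: type as WebhookRequest['type'], text: typeof text === 'string' ? text : undefined };
}
```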

Receiving requests from Layercode

In order to receive and process messages from your users, you need a backend endpoint that Layercode can communicate with. For example, in Next.js it might look something like this:
import { streamResponse } from '@layercode/node-server-sdk';

export const dynamic = 'force-dynamic';

export const POST = async (request: Request) => {
  const requestBody = await request.json();

  // Authorization goes here! (explained below)

  const { text: userText } = requestBody;
  console.log('user said:', userText);

  return streamResponse(requestBody, async ({ stream }) => {
    // This is where all your LLM work goes to generate the response
    const aiResponse = 'thank you for your message'; // this would be dynamic in your application
    await stream.ttsTextStream(aiResponse);
  });
};
Note: verifying that requests really come from Layercode is covered below.

Tell Layercode where your endpoint is

Now that you have an endpoint to receive messages from Layercode, you need to tell Layercode where to send your events. In the Layercode dashboard, create a new agent or open an existing one, go to manual setup, and enter the API endpoint that Layercode should send requests to.

Setting a webhook URL

Expose your local endpoint with a tunnel

If you’re developing locally, you will need to run a tunnel such as cloudflared or ngrok and paste the tunnel URL into the dashboard. Our tunnelling guide walks through the setup.
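Assuming your dev server listens on port 3000 (adjust the port to your setup), either of these commands opens a public tunnel whose URL you can paste into the dashboard:

```shell
# Quick tunnel with cloudflared — prints a public https URL on startup
cloudflared tunnel --url http://localhost:3000

# Or with ngrok
ngrok http 3000
```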

Verify incoming requests

You should make sure that only authorized requests reach this endpoint. For this, Layercode exposes a webhook secret, which you can find in the same place in the dashboard as the webhook URL. Save this secret alongside your other backend secrets and verify each incoming request before processing it:
import { streamResponse, verifySignature } from '@layercode/node-server-sdk';

export const dynamic = 'force-dynamic';

export const POST = async (request: Request) => {
  const requestBody = await request.json();

  // Verify this webhook request is from Layercode
  const signature = request.headers.get('layercode-signature') || '';
  const secret = process.env.LAYERCODE_WEBHOOK_SECRET || '';
  const isValid = verifySignature({
    payload: JSON.stringify(requestBody),
    signature,
    secret
  });
  if (!isValid) return new Response('Invalid layercode-signature', { status: 401 });

  const { text: userText } = requestBody;
  console.log('user said:', userText);

  return streamResponse(requestBody, async ({ stream }) => {
    // This is where all your LLM work goes to generate the response
    const aiResponse = 'thank you for your message'; // this would be dynamic in your application
    await stream.ttsTextStream(aiResponse);
  });
};
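Under the hood, webhook signature checks like this are typically an HMAC over the payload. The sketch below shows the general idea using Node's crypto module — it is not necessarily the exact Layercode scheme (the `layercode-signature` header may encode extra metadata such as a timestamp), so always use the SDK's `verifySignature` in real code.

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Generic HMAC-SHA256 signature check — a sketch of the idea behind
// webhook verification, NOT the exact Layercode scheme.
function isValidSignature(payload: string, signature: string, secret: string): boolean {
  const expected = createHmac('sha256', secret).update(payload).digest('hex');
  const a = Buffer.from(expected);
  const b = Buffer.from(signature);
  // timingSafeEqual throws on length mismatch, so compare lengths first
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Comparing with `timingSafeEqual` rather than `===` avoids leaking information about the expected signature through timing differences.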

Customize which events you receive

You can see details of the data Layercode sends to this endpoint in the Webhook event types reference. You can also toggle which events are delivered:
  • message – (required) Fired after speech-to-text transcription completes for the user’s turn.
  • session.start – Sent as soon as a session opens so you can greet the user proactively.
  • session.end – Delivered when a session closes, including timing metrics and the full transcript.
  • session.update – Sent asynchronously once a session recording finishes processing (requires session recording to be enabled for the org).
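Putting the event list above into code, a handler might dispatch on the event's type. The `type` values match the events listed above and `text` is taken from this guide; treat any other payload fields as assumptions and check the event reference for the real shapes.

```typescript
// Dispatch on the webhook event type (values from the list above).
function handleWebhook(event: { type: string; text?: string }): string {
  switch (event.type) {
    case 'session.start':
      return 'greet the user proactively';
    case 'message':
      return `respond to: ${event.text ?? ''}`;
    case 'session.end':
      return 'store metrics and the transcript';
    case 'session.update':
      return 'process the finished recording';
    default:
      return 'ignore unknown event types';
  }
}
```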

Respond to webhook events

It’s great to receive messages from users, but of course you want to reply too. Use the ttsTextStream method on Layercode’s stream object to reply:
await stream.ttsTextStream("this is my reply");