This guide shows you how to implement the Layercode Webhook SSE API in a Next.js backend using the Node.js Backend SDK. You'll learn how to set up a webhook endpoint that receives transcribed user messages from the Layercode voice pipeline and streams the agent's response back to Layercode, where it is turned into speech and spoken back to the user.

Example code: layercodedev/example-fullstack-nextjs

This backend example is part of a full-stack example that also includes a web voice agent React frontend. We recommend also reading the Next.js frontend guide to get the most out of this example.

Prerequisites

  • Node.js 18+
  • Next.js (App Router recommended)
  • A Layercode account and pipeline (sign up here)
  • (Optional) An API key for your LLM provider (we recommend Google Gemini)

Setup

Install dependencies:

npm install @layercode/node-server-sdk @ai-sdk/google

Edit your .env environment variables (an example file is shown after this list). You'll need to add:

  • GOOGLE_GENERATIVE_AI_API_KEY - Your Google AI API key
  • LAYERCODE_API_KEY - Your Layercode API key found in the Layercode dashboard settings
  • LAYERCODE_WEBHOOK_SECRET - Your Layercode pipeline's webhook secret, found in the Layercode dashboard (go to your pipeline, click Edit in the Your Backend box and copy the webhook secret shown)
  • NEXT_PUBLIC_LAYERCODE_PIPELINE_ID - The Layercode pipeline ID for your voice agent, found in the Layercode dashboard
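
For reference, a .env file for this example might look like this (the values are placeholders; substitute your own keys and pipeline ID):

.env
GOOGLE_GENERATIVE_AI_API_KEY=your-google-ai-api-key
LAYERCODE_API_KEY=your-layercode-api-key
LAYERCODE_WEBHOOK_SECRET=your-layercode-webhook-secret
NEXT_PUBLIC_LAYERCODE_PIPELINE_ID=your-pipeline-id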

Create Your Next.js API Route

Here’s a simplified example of the core functionality needed to implement the Layercode webhook endpoint. See the GitHub repo for the full example.

app/api/agent/route.ts
export const maxDuration = 300; // We set a generous Vercel max function duration to allow for long running agents
export const dynamic = 'force-dynamic';

import { createGoogleGenerativeAI } from '@ai-sdk/google';
import { streamText } from 'ai';
import { streamResponse, verifySignature } from '@layercode/node-server-sdk';

const google = createGoogleGenerativeAI({
  apiKey: process.env.GOOGLE_GENERATIVE_AI_API_KEY,
});

const SYSTEM_PROMPT =
  "You are a helpful conversation assistant. You should respond to the user's message in a conversational manner. Your output will be spoken by a TTS model. You should respond in a way that is easy for the TTS model to speak and sound natural.";

// POST request handler for Layercode incoming webhook, per turn of the conversation
export const POST = async (request: Request) => {
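  // Verify the signature header so we only process genuine Layercode webhooks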
  const requestBody = await request.json();
  const signature = request.headers.get('layercode-signature') || '';
  const secret = process.env.LAYERCODE_WEBHOOK_SECRET || '';
  const payload = JSON.stringify(requestBody);
  const isValid = verifySignature({ payload, signature, secret });

  if (!isValid) {
    console.error('Invalid signature', signature, secret, payload);
    return new Response('Unauthorized', { status: 401 });
  }

  // The user's transcribed message arrives in the webhook payload as `text`
  const { text } = requestBody;

  return streamResponse(requestBody, async ({ stream }) => {
    const { textStream } = streamText({
      model: google('gemini-2.0-flash-001'),
      system: SYSTEM_PROMPT,
      messages: [{ role: 'user', content: [{ type: 'text', text }] }],
      onFinish: async () => {
        stream.end(); // We must call stream.end() here to tell Layercode that the assistant's response has finished
      },
    });
    // Here we return the textStream chunks as SSE messages to Layercode, to be spoken to the user
    await stream.ttsTextStream(textStream);
  });
};

How It Works

  • /api/agent webhook endpoint: Receives POST requests from Layercode with the user’s transcribed message, session, and turn info. The webhook request is verified as coming from Layercode.
  • LLM call: Calls Google Gemini 2.0 Flash (gemini-2.0-flash-001) with the system prompt and the user's new transcribed message.
  • SSE streaming: As soon as the LLM starts generating a response, the backend streams the output back as SSE messages to Layercode, which converts it to speech and delivers it to the frontend for playback in realtime.
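
The exact webhook payload schema is defined by Layercode (check their webhook documentation), but a hypothetical sketch of the fields this guide relies on looks like the following. Only text is used directly in the route above; the other field names are assumptions to verify against the docs:

// Hypothetical sketch of the webhook payload fields referenced in this guide.
// Verify names and types against the Layercode webhook documentation.
type LayercodeWebhookPayload = {
  text: string; // the user's transcribed message (used in the route above)
  session_id: string; // assumed name for the session identifier
  turn_id: string; // assumed name for the turn identifier
};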

See the GitHub repo for the full example, which includes conversation history, a welcome message, and more; a sketch of one way to keep history is shown below.
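
To give a flavour of how conversation history could be added to the simplified route, here is a minimal sketch using an in-memory Map keyed by a session ID (the session_id field name is an assumption, and in-memory storage is only suitable for a local demo):

// history.ts (hypothetical helper, not part of the example repo)
import type { CoreMessage } from 'ai';

// In-memory store: fine for local development, but it is lost on restart
// and not shared across serverless instances. Use a database or KV store
// in production.
const sessions = new Map<string, CoreMessage[]>();

// Append the user's transcribed message and return the full history,
// which can be passed as `messages` to streamText
export function appendUserMessage(sessionId: string, text: string): CoreMessage[] {
  const history = sessions.get(sessionId) ?? [];
  history.push({ role: 'user', content: text });
  sessions.set(sessionId, history);
  return history;
}

// Record the assistant's finished response (e.g. from streamText's onFinish)
export function appendAssistantMessage(sessionId: string, text: string): void {
  sessions.get(sessionId)?.push({ role: 'assistant', content: text });
}

In the route, you would call appendUserMessage with the payload's session ID before calling streamText, and appendAssistantMessage inside onFinish with the completed response text.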

Running Your Backend

Start your Next.js app:

npm run dev

Configure the Layercode Webhook endpoint

In the Layercode dashboard, go to your pipeline settings. Under Your Backend, click Edit and set the URL of your webhook endpoint. For this example the route is /api/agent, so the full URL is https://<your-host>/api/agent.

If running this example locally, set up a tunnel (we recommend cloudflared, which is free for development) to your localhost so the Layercode webhook can reach your backend; see the example below. Follow our tunneling guide for details.
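
For instance, with cloudflared installed, you can open a quick tunnel to the Next.js dev server (assuming the default port 3000):

cloudflared tunnel --url http://localhost:3000

cloudflared prints a public trycloudflare.com URL; append /api/agent to it and use that as the webhook URL in the dashboard.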

Test Your Voice Agent

There are two ways to test your voice agent:

  1. Use the Layercode playground tab, found on your pipeline's page in the Layercode dashboard.
  2. Because this is a full-stack example, you can also visit http://localhost:3000 in your browser and speak to the voice agent using the included React frontend.