This guide shows you how to implement the Layercode Webhook SSE API in a Hono backend (Cloudflare Workers compatible). You’ll learn how to set up a webhook endpoint that receives transcribed messages from the Layercode voice pipeline and streams the agent’s responses back to the frontend, to be turned into speech and spoken back to the user. You can test your backend using the Layercode dashboard playground or by following the Build a Web Voice Agent guide.

Example code: layercodedev/example-backend-hono

Prerequisites

  • Node.js 18+
  • Hono (Cloudflare Workers compatible)
  • A Layercode account and pipeline (sign up here)
  • (Optional) An API key for your LLM provider (e.g., Google Gemini)

Setup

npm create hono@latest my-app
cd my-app
npm install hono @layercode/node-server-sdk

Edit your .dev.vars file (automatically read by wrangler) to add the following environment variables:

  • GOOGLE_GENERATIVE_AI_API_KEY - Your Google AI API key
  • LAYERCODE_WEBHOOK_SECRET - Your Layercode pipeline’s webhook secret, found in the Layercode dashboard (go to your pipeline, click Edit in the Your Backend Box and copy the webhook secret shown)
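Your .dev.vars file would then look something like this (the values below are placeholders; substitute your own keys):

```shell
GOOGLE_GENERATIVE_AI_API_KEY=your-google-ai-api-key
LAYERCODE_WEBHOOK_SECRET=your-layercode-webhook-secret
```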

Create Your Hono Handler

Here’s a simplified example of the core functionality needed to implement the Layercode webhook endpoint:

import type { Context } from "hono";
import { verifySignature, streamResponse } from "@layercode/node-server-sdk";
import { env } from "cloudflare:workers";

// In-memory session history (demo only; use a durable store in production)
const sessionMessages: Record<string, any[]> = {};

const WELCOME_MESSAGE = "Welcome to Layercode. How can I help you today?";

export const onRequestPost = async (c: Context) => {
  const secret = env.LAYERCODE_WEBHOOK_SECRET;
  if (!secret) {
    return c.json({ error: "LAYERCODE_WEBHOOK_SECRET is not set" }, 500);
  }

  // Verify the webhook signature against the raw request body
  const rawBody = await c.req.text();
  const signature = c.req.header("layercode-signature") || "";
  const isValid = verifySignature({ payload: rawBody, signature, secret });
  if (!isValid) {
    console.error("Invalid webhook signature");
    return c.json({ error: "Invalid signature" }, 401);
  }

  // Parse the already-read raw body rather than reading the request again
  const json = JSON.parse(rawBody);
  const { text, type, session_id } = json;

  const messages = sessionMessages[session_id] || [];
  sessionMessages[session_id] = messages;

  // Handle session start
  if (type === "session.start") {
    return streamResponse(json, async ({ stream }) => {
      stream.tts(WELCOME_MESSAGE);
      messages.push({
        role: "assistant",
        content: [{ type: "text", text: WELCOME_MESSAGE }],
      });
      stream.end();
    });
  }

  // Handle regular messages
  messages.push({ role: "user", content: [{ type: "text", text }] });
  return streamResponse(json, async ({ stream }) => {
    try {
      // Your agent logic here
      const response = "This is a sample response from your agent";
      stream.tts(response);

      // Save the response to session history
      messages.push({
        role: "assistant",
        content: [{ type: "text", text: response }],
      });
    } catch (err) {
      console.error("Error:", err);
    } finally {
      stream.end();
    }
  });
};
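Under the hood, webhook signature verification is typically an HMAC comparison: the sender computes an HMAC-SHA256 of the raw request body with the shared secret, and the receiver recomputes it and compares the two in constant time. The sketch below is a conceptual illustration only; the exact signature format Layercode uses may differ, so always rely on the SDK's verifySignature in real code:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Compute a hex HMAC-SHA256 signature over the raw payload
function signPayload(payload: string, secret: string): string {
  return createHmac("sha256", secret).update(payload).digest("hex");
}

// Recompute the signature and compare in constant time
function verifyPayload(payload: string, signature: string, secret: string): boolean {
  const expected = Buffer.from(signPayload(payload, secret), "hex");
  const received = Buffer.from(signature, "hex");
  if (expected.length !== received.length) return false;
  return timingSafeEqual(expected, received);
}
```

Comparing with timingSafeEqual rather than === avoids leaking information about how many leading characters of a forged signature were correct.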

How It Works

  • /agent endpoint: Receives POST requests from Layercode with the user’s transcribed message, session, and turn info. The webhook request is verified as coming from Layercode.
  • Session management: Keeps track of conversation history per session (in-memory for demo; use a store for production).
  • LLM call: Calls Google Gemini 2.0 Flash with the system prompt, message history, and the user’s new transcribed message.
  • SSE streaming: As soon as the LLM starts generating a response, the backend streams the output back as SSE messages to Layercode, which converts it to speech and delivers it to the frontend for playback in realtime.
  • /authorize endpoint: Your Layercode API key should never be exposed to the frontend. Instead, your backend acts as a secure proxy: it receives the frontend’s request, then calls the Layercode authorization API using your secret API key, and finally returns the client_session_key (and optionally a session_id) to the frontend. This key is required for the frontend to establish a secure WebSocket connection to Layercode.
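The authorization proxy described above can be sketched as a small helper plus a route. The endpoint URL, request body shape, and response fields here are assumptions based on the description, not the confirmed Layercode API, so check the Layercode docs for the exact contract; the fetch implementation is injectable so the helper can be exercised without network access:

```typescript
// Hypothetical helper: exchanges your secret API key for a client_session_key.
// The URL and body/response shapes are assumptions, not the confirmed API.
async function authorizeSession(
  apiKey: string,
  pipelineId: string,
  authorizeUrl = "https://api.layercode.com/v1/pipelines/authorize_session", // assumed URL
  fetchImpl: typeof fetch = fetch
): Promise<{ client_session_key: string; session_id?: string }> {
  const res = await fetchImpl(authorizeUrl, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ pipeline_id: pipelineId }),
  });
  if (!res.ok) throw new Error(`Authorization failed: ${res.status}`);
  return res.json();
}

// In your Hono app, the /authorize route would simply forward the result
// (LAYERCODE_API_KEY and LAYERCODE_PIPELINE_ID are hypothetical env vars):
//   app.post("/authorize", async (c) =>
//     c.json(await authorizeSession(env.LAYERCODE_API_KEY, env.LAYERCODE_PIPELINE_ID)));
```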

Running Your Backend

Start your Hono app (Cloudflare Workers):

npx wrangler dev

Configure the Layercode Webhook endpoint

In the Layercode dashboard, go to your pipeline settings. Under Your Backend, click Edit and set the URL of your webhook endpoint.

If running this example locally, set up a tunnel (we recommend cloudflared, which is free for development) to your localhost so the Layercode webhook can reach your backend. Follow our tunnelling guide.

Test Your Voice Agent

There are two ways to test your voice agent:

  1. Use the playground tab, found on your pipeline’s page in the Layercode dashboard.
  2. Follow one of our Frontend Guides to build a Web Voice Agent that uses this backend.