Learn how to create your first voice agent for real-time conversational AI. This guide walks you through logging in, creating an agent, and testing it in our playground.
You will need an OpenAI API key (and with small tweaks, you can use any other LLM provider).

Sign up, log in, and grab your keys

  1. Visit dash.layercode.com and log in or sign up if you haven’t already.
  2. Create a new agent or use the agent that is auto-created for you.
  3. Copy the agent ID for later.
  4. Click Connect your backend and copy the Webhook Secret and save it for later.
  5. In Account settings, copy your Layercode API Key and save it for later.

Choose your stack

Initialize a new Next.js project and install your dependencies. Pick your package manager:
npx create-next-app@latest my-app --yes
cd my-app
npm i @layercode/node-server-sdk @layercode/react-sdk ai @ai-sdk/openai
We use the Vercel AI SDK with the OpenAI provider as an example.
Create an env file called .env.local:
.env.local
NEXT_PUBLIC_LAYERCODE_AGENT_ID=
LAYERCODE_API_KEY=
LAYERCODE_WEBHOOK_SECRET=
OPENAI_API_KEY=
Fill in the agent ID, API key, and webhook secret you grabbed earlier. Also put in your OpenAI API key.
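To catch configuration mistakes early, you can fail fast at startup when a variable is missing. A minimal sketch (the `missingEnvVars` helper is our own, not part of any SDK):

```typescript
// Hypothetical helper: list any required env vars that are unset or blank.
// The names match those in .env.local above.
const REQUIRED_VARS = [
  'NEXT_PUBLIC_LAYERCODE_AGENT_ID',
  'LAYERCODE_API_KEY',
  'LAYERCODE_WEBHOOK_SECRET',
  'OPENAI_API_KEY',
] as const;

export function missingEnvVars(env: Record<string, string | undefined>): string[] {
  return REQUIRED_VARS.filter((name) => !env[name]?.trim());
}

// Example: throw early rather than failing mid-request.
const missing = missingEnvVars(process.env);
if (missing.length > 0) {
  console.warn(`Missing env vars: ${missing.join(', ')}`);
}
```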

App Router

Inside /app, create an api folder as well as two subfolders, authorize and agent:
mkdir -p app/api/authorize app/api/agent
Then create the files for the routes.
touch app/api/authorize/route.ts app/api/agent/route.ts
Inside app/api/authorize/route.ts paste:
app/api/authorize/route.ts
export const dynamic = 'force-dynamic';
import { NextResponse } from 'next/server';

// Returns a client_session_key so the browser can connect to the Layercode agent
export const POST = async (request: Request) => {
  const endpoint = 'https://api.layercode.com/v1/agents/authorize_session';
  const apiKey = process.env.LAYERCODE_API_KEY;
  if (!apiKey) throw new Error('LAYERCODE_API_KEY is not set.');

  const requestBody = await request.json();
  if (!requestBody?.agent_id) throw new Error('Missing agent_id in request body.');

  const response = await fetch(endpoint, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(requestBody),
  });

  if (!response.ok) {
    const text = await response.text();
    return NextResponse.json({ error: text || response.statusText }, { status: response.status });
  }
  return NextResponse.json(await response.json());
};
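If you want to sanity-check this route by hand, this sketch shows the shape of the request it forwards to Layercode. The helper name `buildAuthorizeRequest` is illustrative only, not an SDK function:

```typescript
// Illustrative only: build the same request the route above sends to
// Layercode's authorize_session endpoint.
export function buildAuthorizeRequest(agentId: string, apiKey: string) {
  return {
    url: 'https://api.layercode.com/v1/agents/authorize_session',
    method: 'POST' as const,
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    // The route requires agent_id in the JSON body.
    body: JSON.stringify({ agent_id: agentId }),
  };
}
```

The React SDK calls your /api/authorize route for you (via authorizeSessionEndpoint, shown later), so in practice you never build this request yourself.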
Inside app/api/agent/route.ts paste:
app/api/agent/route.ts
export const dynamic = 'force-dynamic';

import { createOpenAI } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { streamResponse, verifySignature } from '@layercode/node-server-sdk';

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY! });
const SYSTEM_PROMPT =
  'You are a helpful conversation assistant. Keep responses concise and natural for TTS.';

// Handles Layercode webhook per turn
export const POST = async (request: Request) => {
  const requestBody = await request.json();
  const signature = request.headers.get('layercode-signature') || '';
  const secret = process.env.LAYERCODE_WEBHOOK_SECRET || '';
  const isValid = verifySignature({ payload: JSON.stringify(requestBody), signature, secret });
  if (!isValid) return new Response('Unauthorized', { status: 401 });

  const userText = requestBody.text || '';

  return streamResponse(requestBody, async ({ stream }) => {
    const { textStream } = streamText({
      model: openai('gpt-4o-mini'),
      system: SYSTEM_PROMPT,
      messages: [{ role: 'user', content: [{ type: 'text', text: userText }] }],
      onFinish: async () => stream.end(),
    });

    await stream.ttsTextStream(textStream);
  });
};
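verifySignature is provided by @layercode/node-server-sdk, so you don't need to implement it. Conceptually, webhook signatures of this kind are an HMAC of the raw payload keyed by the shared secret; the sketch below shows that general idea only and is not Layercode's exact scheme (which may, for example, include timestamps or a different encoding):

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Illustrative only: a generic HMAC-SHA256 payload signature check.
// Real verification is done by verifySignature from @layercode/node-server-sdk.
export function hmacMatches(payload: string, signature: string, secret: string): boolean {
  const expected = createHmac('sha256', secret).update(payload).digest('hex');
  // Compare in constant time to avoid leaking information via timing.
  if (expected.length !== signature.length) return false;
  return timingSafeEqual(Buffer.from(expected), Buffer.from(signature));
}
```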

Frontend

Disable React Strict Mode for Development: React Strict Mode renders components twice in development, which causes the Layercode voice agent hook to initialize twice. This can create duplicate sessions (you may hear the agent speak twice). Temporarily disable Strict Mode in next.config.js while developing.
next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  reactStrictMode: false,
}

module.exports = nextConfig
Create a ui folder with the following components:
mkdir -p app/ui
touch app/ui/AudioVisualization.tsx app/ui/ConnectionStatusIndicator.tsx app/ui/VoiceAgent.tsx app/ui/MicrophoneIcon.tsx
A component for visualising the audio.
app/ui/AudioVisualization.tsx
export function AudioVisualization({ amplitude, height = 46 }: { amplitude: number; height?: number }) {
  const maxHeight = height;
  const minHeight = Math.floor(height / 6);
  const barWidth = Math.floor(minHeight);
  const multipliers = [0.2, 0.5, 1.0, 0.5, 0.2];
  const normalizedAmplitude = Math.min(Math.max(amplitude * 7, 0), 1);

  return (
    <div className="w-auto flex items-center gap-[2px]" style={{ height: `${height}px` }}>
      {multipliers.map((multiplier, index) => {
        const barHeight = minHeight + normalizedAmplitude * maxHeight * multiplier;
        return (
          <div
            key={index}
            className="flex flex-col items-center"
            style={{ height: `${barHeight}px`, width: `${barWidth}px` }}
          >
            <div
              className="bg-[#FF5B41] dark:bg-[#FF7B61] transition-all"
              style={{ width: '100%', height: `${barWidth}px`, borderTopLeftRadius: '9999px', borderTopRightRadius: '9999px' }}
            />
            <div
              className="bg-[#FF5B41] dark:bg-[#FF7B61] transition-all"
              style={{ width: '100%', height: `calc(100% - ${2 * barWidth}px)` }}
            />
            <div
              className="bg-[#FF5B41] dark:bg-[#FF7B61] transition-all"
              style={{ width: '100%', height: `${barWidth}px`, borderBottomLeftRadius: '9999px', borderBottomRightRadius: '9999px' }}
            />
          </div>
        );
      })}
    </div>
  );
}
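The bar sizing above is pure arithmetic, so it can be factored into a standalone function and sanity-checked. A sketch that mirrors the component's math (`barHeights` is our own name, not part of the component):

```typescript
// Mirrors AudioVisualization: clamp amplitude * 7 into [0, 1], then scale
// each bar between the minimum height and minHeight + maxHeight * multiplier.
export function barHeights(amplitude: number, height = 46): number[] {
  const maxHeight = height;
  const minHeight = Math.floor(height / 6);
  const multipliers = [0.2, 0.5, 1.0, 0.5, 0.2];
  const normalized = Math.min(Math.max(amplitude * 7, 0), 1);
  return multipliers.map((m) => minHeight + normalized * maxHeight * m);
}
```

At amplitude 0 every bar sits at the minimum height; at full amplitude the center bar (multiplier 1.0) reaches minHeight + height.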
A component for telling us whether we're connected.
app/ui/ConnectionStatusIndicator.tsx
export function ConnectionStatusIndicator({ status }: { status: string }) {
  return (
    <div className="justify-self-start flex items-center gap-2 bg-white dark:bg-gray-800 sm:px-3 p-1 rounded-full shadow-sm dark:shadow-gray-900/30">
      <div
        className={`w-3 h-3 rounded-full ${
          status === 'connected' ? 'bg-green-500' : status === 'connecting' ? 'bg-yellow-500' : 'bg-red-500'
        }`}
      />
      <span className="text-sm text-gray-700 dark:text-gray-300 hidden sm:block">
        {status === 'connected'
          ? 'Connected'
          : status === 'connecting'
          ? 'Connecting...'
          : status === 'error'
          ? 'Connection Error'
          : 'Disconnected'}
      </span>
    </div>
  );
}
A microphone icon
app/ui/MicrophoneIcon.tsx
export const MicrophoneIcon = () => (
  <svg
    style={{ color: '#FFFFFF' }}
    xmlns="http://www.w3.org/2000/svg"
    width="20"
    height="20"
    viewBox="0 0 24 24"
    fill="none"
    stroke="currentColor"
    strokeWidth="2"
    strokeLinecap="round"
    strokeLinejoin="round"
  >
    <path d="M12 2a3 3 0 0 0-3 3v7a3 3 0 0 0 6 0V5a3 3 0 0 0-3-3Z" />
    <path d="M19 10v2a7 7 0 0 1-14 0v-2" />
    <line x1="12" x2="12" y1="19" y2="22" />
  </svg>
);

And a main file where most of the logic lives.
app/ui/VoiceAgent.tsx
'use client';

import { useLayercodeAgent } from '@layercode/react-sdk';
import { AudioVisualization } from './AudioVisualization';
import { ConnectionStatusIndicator } from './ConnectionStatusIndicator';
import { MicrophoneIcon } from './MicrophoneIcon';

export default function VoiceAgent() {
  const { agentAudioAmplitude, status } = useLayercodeAgent({
    agentId: process.env.NEXT_PUBLIC_LAYERCODE_AGENT_ID!,
    authorizeSessionEndpoint: '/api/authorize',
    onDataMessage: (data) => {
      console.log('Received data msg', data);
    },
  });

  return (
    <div className="w-96 h-96 border border-white rounded-lg flex flex-col gap-20 items-center justify-center">
      <h1 className="text-gray-800 text-xl font-bold">Voice Agent Demo</h1>
      <AudioVisualization amplitude={agentAudioAmplitude} height={75} />
      <div className="flex flex-col gap-4 items-center justify-center">
        <div className="h-12 px-4 rounded-full flex items-center gap-2 justify-center select-none bg-[#FF5B41] text-white">
          <MicrophoneIcon />
        </div>
        <ConnectionStatusIndicator status={status} />
      </div>
    </div>
  );
}
Finally, update the page component to import VoiceAgent:
app/page.tsx
'use client';
import dynamic from 'next/dynamic';

const VoiceAgent = dynamic(() => import('./ui/VoiceAgent'), { ssr: false });

export default function Home() {
  return (
    <div className="w-full min-h-[80vh] flex items-center justify-center">
      <VoiceAgent />
    </div>
  );
}
That's all the setup done. Run your app locally with:
npm run dev
Right now your app will display an error. That's because we haven't told Layercode where to send the voice it transcribes. For now, make a note of the port number.

Set up a local tunnel and save it in the Layercode dashboard

Now you need to tunnel your endpoint to the web so that Layercode can reach it. We recommend ngrok (here is more information on tunnelling). First install ngrok: https://ngrok.com/downloads/ Then run:
ngrok http YOUR_APPS_PORT_NUMBER
ngrok gives you a URL (e.g., https://8bbf4104a752.ngrok-free.app). Append /api/agent to it, e.g. https://8bbf4104a752.ngrok-free.app/api/agent. Take that URL and go to your Layercode agent dashboard at https://dash.layercode.com/. Click Connect your backend, then paste that URL into Webhook URL and press Connect. If you haven't already, note down the webhook secret and put it into your .env.local as LAYERCODE_WEBHOOK_SECRET. Now refresh your app and test it out. It should work, but if you get stuck, please email us!
Deploying to production? Update the Webhook URL to your production domain. See Deploying your app.

Next Steps

Congratulations! Now we recommend playing around with the system prompt in app/api/agent/route.ts
app/api/agent/route.ts
const SYSTEM_PROMPT = 'You are a helpful conversation assistant. Keep responses concise and natural for TTS.';