Learn how to create your first voice agent for real-time conversational AI. This guide walks you through logging in, creating an agent, and testing it in our playground.
You will need an OpenAI API key (and with small tweaks, you can use any other LLM provider).

Sign up, log in, and grab your keys

  1. Visit dash.layercode.com
  2. Sign up or log in using email and password, then verify your email.
  3. Create a new agent (you can pick any of the optional setups).
  4. Copy the agent ID for later.
  5. Click Connect your backend and copy the Webhook Secret and save it for later.
  6. In Account settings, copy your Layercode API Key and save it for later.

Choose your stack

Initialize a new Next.js project and install the dependencies:
npx create-next-app@latest my-app --yes
cd my-app
npm i @layercode/node-server-sdk @layercode/react-sdk ai @ai-sdk/openai
We use the Vercel AI SDK with the OpenAI provider as an example.
Create an env file called .env.local:
[.env.local]
NEXT_PUBLIC_LAYERCODE_AGENT_ID=
LAYERCODE_API_KEY=
LAYERCODE_WEBHOOK_SECRET=
OPENAI_API_KEY=
Fill in the agent ID, API key, and webhook secret you saved earlier, along with your OpenAI API key.
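These four values are all the configuration the app needs. If you'd rather fail fast on a missing value than debug a cryptic 401 later, a tiny guard like this can help (a hypothetical helper, not part of any SDK):

```typescript
// Hypothetical helper (not part of the Layercode SDK): read a required
// environment variable or fail fast with a clear error message.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required environment variable: ${name}`);
  return value;
}
```

You could call `requireEnv('LAYERCODE_API_KEY')` (and the others) at the top of your route handlers to surface a misconfigured environment immediately.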

App Router

In your project root, create the route folders (App Router):
mkdir -p app/api/authorize app/api/agent
Create these files:
touch app/api/authorize/route.ts app/api/agent/route.ts
Inside app/api/authorize/route.ts paste:
export const dynamic = 'force-dynamic';
import { NextResponse } from 'next/server';

// Returns a client_session_key so the browser can connect to the Layercode agent
export const POST = async (request: Request) => {
  const endpoint = 'https://api.layercode.com/v1/agents/authorize_session';
  const apiKey = process.env.LAYERCODE_API_KEY;
  if (!apiKey) return NextResponse.json({ error: 'LAYERCODE_API_KEY is not set.' }, { status: 500 });

  const requestBody = await request.json();
  if (!requestBody?.agent_id) return NextResponse.json({ error: 'Missing agent_id in request body.' }, { status: 400 });

  const response = await fetch(endpoint, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(requestBody),
  });

  if (!response.ok) {
    const text = await response.text();
    return NextResponse.json({ error: text || response.statusText }, { status: response.status });
  }
  return NextResponse.json(await response.json());
};
Inside app/api/agent/route.ts paste:
export const dynamic = 'force-dynamic';

import { createOpenAI } from '@ai-sdk/openai';
import { streamText } from 'ai';
import { streamResponse, verifySignature } from '@layercode/node-server-sdk';

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY! });
const SYSTEM_PROMPT =
  'You are a helpful conversation assistant. Keep responses concise and natural for TTS.';

// Handles Layercode webhook per turn
export const POST = async (request: Request) => {
  const requestBody = await request.json();
  const signature = request.headers.get('layercode-signature') || '';
  const secret = process.env.LAYERCODE_WEBHOOK_SECRET || '';
  const isValid = verifySignature({ payload: JSON.stringify(requestBody), signature, secret });
  if (!isValid) return new Response('Unauthorized', { status: 401 });

  const userText = requestBody.text || '';

  return streamResponse(requestBody, async ({ stream }) => {
    const { textStream } = streamText({
      model: openai('gpt-4o-mini'),
      system: SYSTEM_PROMPT,
      messages: [{ role: 'user', content: [{ type: 'text', text: userText }] }],
      onFinish: async () => stream.end(),
    });

    await stream.ttsTextStream(textStream);
  });
};
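The SDK's verifySignature takes care of checking the webhook signature for you. Conceptually, this kind of check is usually an HMAC of the raw request body computed with the shared secret and compared in constant time. The sketch below illustrates that general idea; it is not Layercode's exact scheme, so always use the SDK function in real code:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Compute an HMAC-SHA256 hex digest of the payload using the shared secret.
function sign(payload: string, secret: string): string {
  return createHmac('sha256', secret).update(payload).digest('hex');
}

// Compare in constant time so the check doesn't leak information via timing.
function verify(payload: string, signature: string, secret: string): boolean {
  const expected = Buffer.from(sign(payload, secret));
  const received = Buffer.from(signature);
  return expected.length === received.length && timingSafeEqual(expected, received);
}
```

The key point: the signature binds the secret to the exact bytes of the payload, so a request with a tampered body or a missing secret fails verification.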

Frontend

Create a ui folder and the components:
mkdir -p app/ui
touch app/ui/AudioVisualization.tsx app/ui/ConnectionStatusIndicator.tsx app/ui/VoiceAgent.tsx app/ui/MicrophoneIcon.tsx
app/ui/AudioVisualization.tsx
export function AudioVisualization({ amplitude, height = 46 }: { amplitude: number; height?: number }) {
  const maxHeight = height;
  const minHeight = Math.floor(height / 6);
  const barWidth = Math.floor(minHeight);
  const multipliers = [0.2, 0.5, 1.0, 0.5, 0.2];
  const normalizedAmplitude = Math.min(Math.max(amplitude * 7, 0), 1);

  return (
    <div className="w-auto flex items-center gap-[2px]" style={{ height: `${height}px` }}>
      {multipliers.map((multiplier, index) => {
        const barHeight = minHeight + normalizedAmplitude * maxHeight * multiplier;
        return (
          <div
            key={index}
            className="flex flex-col items-center"
            style={{ height: `${barHeight}px`, width: `${barWidth}px` }}
          >
            <div
              className="bg-[#FF5B41] dark:bg-[#FF7B61] transition-all"
              style={{ width: '100%', height: `${barWidth}px`, borderTopLeftRadius: '9999px', borderTopRightRadius: '9999px' }}
            />
            <div
              className="bg-[#FF5B41] dark:bg-[#FF7B61] transition-all"
              style={{ width: '100%', height: `calc(100% - ${2 * barWidth}px)` }}
            />
            <div
              className="bg-[#FF5B41] dark:bg-[#FF7B61] transition-all"
              style={{ width: '100%', height: `${barWidth}px`, borderBottomLeftRadius: '9999px', borderBottomRightRadius: '9999px' }}
            />
          </div>
        );
      })}
    </div>
  );
}
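To see what this component is doing numerically: the raw amplitude is boosted by a factor of 7, clamped to [0, 1], and each bar gets a multiplier-weighted share of the full height on top of a fixed minimum. Here is the same math extracted as a standalone function (a hypothetical refactor, for illustration only):

```typescript
// Same math as AudioVisualization: boost and clamp the amplitude,
// then scale each bar by its multiplier on top of a minimum height.
function barHeights(amplitude: number, height = 46): number[] {
  const minHeight = Math.floor(height / 6);
  const multipliers = [0.2, 0.5, 1.0, 0.5, 0.2];
  const normalized = Math.min(Math.max(amplitude * 7, 0), 1);
  return multipliers.map((m) => minHeight + normalized * height * m);
}
```

At silence every bar sits at the minimum height; at full amplitude the center bar reaches minHeight plus the full component height.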
app/ui/ConnectionStatusIndicator.tsx
export function ConnectionStatusIndicator({ status }: { status: string }) {
  return (
    <div className="justify-self-start flex items-center gap-2 bg-white dark:bg-gray-800 sm:px-3 p-1 rounded-full shadow-sm dark:shadow-gray-900/30">
      <div
        className={`w-3 h-3 rounded-full ${
          status === 'connected' ? 'bg-green-500' : status === 'connecting' ? 'bg-yellow-500' : 'bg-red-500'
        }`}
      />
      <span className="text-sm text-gray-700 dark:text-gray-300 hidden sm:block">
        {status === 'connected'
          ? 'Connected'
          : status === 'connecting'
          ? 'Connecting...'
          : status === 'error'
          ? 'Connection Error'
          : 'Disconnected'}
      </span>
    </div>
  );
}
app/ui/MicrophoneIcon.tsx
export const MicrophoneIcon = () => (
  <svg
    style={{ color: '#FFFFFF' }}
    xmlns="http://www.w3.org/2000/svg"
    width="20"
    height="20"
    viewBox="0 0 24 24"
    fill="none"
    stroke="currentColor"
    strokeWidth="2"
    strokeLinecap="round"
    strokeLinejoin="round"
  >
    <path d="M12 2a3 3 0 0 0-3 3v7a3 3 0 0 0 6 0V5a3 3 0 0 0-3-3Z" />
    <path d="M19 10v2a7 7 0 0 1-14 0v-2" />
    <line x1="12" x2="12" y1="19" y2="22" />
  </svg>
);

app/ui/VoiceAgent.tsx
'use client';

import { useLayercodeAgent } from '@layercode/react-sdk';
import { AudioVisualization } from './AudioVisualization';
import { ConnectionStatusIndicator } from './ConnectionStatusIndicator';
import { MicrophoneIcon } from './MicrophoneIcon';

export default function VoiceAgent() {
  const { agentAudioAmplitude, status } = useLayercodeAgent({
    agentId: process.env.NEXT_PUBLIC_LAYERCODE_AGENT_ID!,
    authorizeSessionEndpoint: '/api/authorize',
    onDataMessage: (data) => {
      console.log('Received data msg', data);
    },
  });

  return (
    <div className="w-96 h-96 border border-white rounded-lg flex flex-col gap-20 items-center justify-center">
      <h1 className="text-gray-800 text-xl font-bold">Voice Agent Demo</h1>
      <AudioVisualization amplitude={agentAudioAmplitude} height={75} />
      <div className="flex flex-col gap-4 items-center justify-center">
        <div className="h-12 px-4 rounded-full flex items-center gap-2 justify-center select-none bg-[#FF5B41] text-white">
          <MicrophoneIcon />
        </div>
        <ConnectionStatusIndicator status={status} />
      </div>
    </div>
  );
}
app/page.tsx
'use client';
import VoiceAgent from './ui/VoiceAgent';

export default function Home() {
  return (
    <div className="w-full min-h-[80vh] flex items-center justify-center">
      <VoiceAgent />
    </div>
  );
}
That completes the setup. Run your app locally with npm run dev and make a note of the port number it prints (3000 by default).

Set up a local tunnel and save it in the Layercode dashboard

Now you need to expose your webhook so that Layercode can reach your endpoint. We recommend ngrok. Here is more information on tunnelling (link to tunnelling resource).

First, install ngrok: https://ngrok.com/downloads/

Then run:
ngrok http YOUR_APPS_PORT_NUMBER
ngrok gives you a URL (e.g., https://8bbf4104a752.ngrok-free.app).

  1. Append /api/agent to it, e.g. https://8bbf4104a752.ngrok-free.app/api/agent
  2. Take that URL and go to your Layercode agent dashboard at https://dash.layercode.com/
  3. Click Connect your backend, paste the URL into Webhook URL, and press Save.
  4. If you haven't already, note down the webhook secret and put it into your .env.local as LAYERCODE_WEBHOOK_SECRET.

Now refresh your app and test it out. It should work, but if you get stuck, please email us!
Deploying to production? Update the Webhook URL to your production domain. See Deploying your app.

Next Steps

Congratulations! We recommend reading our guide on building voice agents as well as our tips for deploying applications with Layercode.