Learn how to create your first voice agent for real-time conversational AI. This guide walks you through logging in, creating an agent, and wiring it up to your own backend so you can test it locally.
You will need an OpenAI API key (and with small tweaks, you can use any other LLM provider).

Sign up, log in, and grab your keys

  1. Visit dash.layercode.com and log in or sign up if you haven’t already.
  2. Create a new agent or use the agent that is auto-created for you.
  3. Copy the agent ID for later.
  4. Click Connect your backend and copy the Webhook Secret and save it for later.
  5. In Account settings, copy your Layercode API Key and save it for later.

Choose your stack

Looking for a full example? See the complete repo: layercodedev/example-fullstack.
Initialize a new Next.js project and install your dependencies. Pick your package manager:
npx create-next-app@latest my-app --yes
cd my-app
npm i @layercode/node-server-sdk @layercode/react-sdk ai @ai-sdk/openai
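If you use pnpm instead, the equivalent would be:
pnpm create next-app@latest my-app --yes
cd my-app
pnpm add @layercode/node-server-sdk @layercode/react-sdk ai @ai-sdk/openai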
We use the Vercel AI SDK with the OpenAI provider as an example.
Create an env file called .env.local:
.env.local
NEXT_PUBLIC_LAYERCODE_AGENT_ID=
LAYERCODE_API_KEY=
LAYERCODE_WEBHOOK_SECRET=
OPENAI_API_KEY=
Fill in the agent ID, API key, and webhook secret you grabbed earlier, plus your OpenAI API key.

Create a layercode-config.json file in the project root:
layercode-config.json
{
  "welcome_message":"Hey, congrats on setting up Layercode.",
  "prompt": "You are having a spoken conversation. Your responses will be read aloud by a text-to-speech system. Speak naturally, as if talking to a friend.\\n\\nStyle:\\n- Use simple words and short sentences.\\n- Never use bullet points, numbered lists, or formatted text.\\n- Avoid parentheses, brackets, or quotation marks in speech.\\n- If you must mention a special character, spell it out.\\n- Never include emojis.\\n- Default to 1–2 sentences; offer detail only on request.\\n\\nSound human:\\n- Start with brief acknowledgments (e.g., Got it, Okay).\\n- Use occasional informal markers (so, actually, oh) and mild hesitations (um, uh, hmm) when thinking.\\n- Small, natural repetitions are okay (I— I think; it's... it's fine).\\n\\nNumbers and formatting (speak, don't read symbols):\\n- Phone: (555) 123-4567 -> five five five, one two three, four five six seven\\n- Money: $19.99 -> nineteen dollars and ninety-nine cents\\n- Dates: 02/14/2025 -> February fourteenth, twenty twenty-five\\n- Times: 3:30 PM -> three thirty in the afternoon\\n- Email: john@company.com -> john at company dot com\\n- Units: 5GB -> five gigabytes\\n- Fractions: 2/3 -> two-thirds\\n- URLs: example.com/docs -> example dot com slash docs\\n- File paths: C:\\Users\\Documents -> C drive, users folder, documents folder\\n\\nInteraction:\\n- If you need time to process or call tools, first say: Let me check that for you..., then continue.\\n- If interrupted, stop and say: Oh, sorry— go ahead.\\n- Keep apologies to a minimum (max one per conversation). Prefer action: Let me fix that.\\n- Be concise; avoid encyclopedia-style monologues.\\n\\nPersonality:\\n- Friendly and efficient. Use casual positives: great, awesome, perfect!\\n- Keep energy via short exclamations, not long speeches."
}
You can tweak the prompt and welcome_message here to change how the agent greets and responds.
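For example, to change the greeting and shorten the instructions (the values below are just illustrations):
layercode-config.json
{
  "welcome_message": "Hi! Thanks for trying the demo. What can I do for you?",
  "prompt": "You are having a spoken conversation. Keep replies short, friendly, and easy to say aloud."
}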

App Router

Inside /app, create an api folder with two subfolders, authorize and agent:
mkdir -p app/api/authorize app/api/agent
Then create the files for the routes:
touch app/api/authorize/route.ts app/api/agent/route.ts
Inside app/api/authorize/route.ts paste:
app/api/authorize/route.ts
export const dynamic = 'force-dynamic';
import { NextResponse } from 'next/server';

// Returns a client_session_key so the browser can connect to the Layercode agent
export const POST = async (request: Request) => {
  const endpoint = 'https://api.layercode.com/v1/agents/web/authorize_session';
  const apiKey = process.env.LAYERCODE_API_KEY;
  if (!apiKey) throw new Error('LAYERCODE_API_KEY is not set.');

  const requestBody = await request.json();
  if (!requestBody?.agent_id) throw new Error('Missing agent_id in request body.');

  const response = await fetch(endpoint, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(requestBody),
  });

  if (!response.ok) {
    const text = await response.text();
    return NextResponse.json({ error: text || response.statusText }, { status: response.status });
  }
  return NextResponse.json(await response.json());
};
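Once your dev server is running, you can sanity-check this route with curl (assuming the default port 3000; substitute your real agent ID):
curl -X POST http://localhost:3000/api/authorize \
  -H "Content-Type: application/json" \
  -d '{"agent_id": "your-agent-id"}'
A successful response includes a client_session_key, which is what the browser SDK uses to connect.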
Inside app/api/agent/route.ts paste:
app/api/agent/route.ts
export const dynamic = 'force-dynamic';

import { createOpenAI } from '@ai-sdk/openai';
import { streamText, ModelMessage } from 'ai';
import { streamResponse, verifySignature } from '@layercode/node-server-sdk';
import config from '@/layercode-config.json';

const openai = createOpenAI({ apiKey: process.env.OPENAI_API_KEY! });
// In-memory conversation history per session. Fine for local development,
// but it resets on restart and won't be shared across serverless instances.
const sessionMessages = {} as Record<string, ModelMessage[]>;

const SYSTEM_PROMPT = config.prompt;
const WELCOME_MESSAGE = config.welcome_message;

export const POST = async (request: Request) => {
  // Verify the request is from Layercode
  const requestBody = await request.json();
  const signature = request.headers.get('layercode-signature') || '';
  const secret = process.env.LAYERCODE_WEBHOOK_SECRET || '';
  const isValid = verifySignature({ payload: JSON.stringify(requestBody), signature, secret });
  if (!isValid) return new Response('Unauthorized', { status: 401 });

  const { session_id, text, type } = requestBody;

  const messages = sessionMessages[session_id] || [];

  if (type === 'session.start') {
    return streamResponse(requestBody, async ({ stream }) => {
      stream.tts(WELCOME_MESSAGE);
      messages.push({ role: 'assistant', content: WELCOME_MESSAGE });
      sessionMessages[session_id] = messages;
      stream.end();
    });
  }

  if (type === 'session.update' || type === 'session.end') {
    return new Response('OK', { status: 200 });
  }

  messages.push({ role: 'user', content: text });

  return streamResponse(requestBody, async ({ stream }) => {
    const { textStream } = streamText({
      model: openai('gpt-4o-mini'),
      system: SYSTEM_PROMPT,
      messages,
      onFinish: async ({ text }) => {
        messages.push({ role: 'assistant', content: text });
        sessionMessages[session_id] = messages;
        stream.end();
      },
    });

    stream.data({ aiIsThinking: true });
    await stream.ttsTextStream(textStream);
  });
};
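As mentioned at the start, other LLM providers work with small tweaks, because the Vercel AI SDK uses interchangeable model providers. As a sketch, using Anthropic instead (assuming you install @ai-sdk/anthropic and set ANTHROPIC_API_KEY in .env.local; the model ID is illustrative):

// Swap the OpenAI provider for Anthropic; everything else stays the same
import { createAnthropic } from '@ai-sdk/anthropic';

const anthropic = createAnthropic({ apiKey: process.env.ANTHROPIC_API_KEY! });

// ...then in the streamText call, replace the model line:
// model: anthropic('claude-3-5-haiku-latest'),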

Frontend

Disable React Strict Mode for development: React Strict Mode renders components twice in development, which causes the Layercode voice agent hook to initialize twice. This can create duplicate sessions (you may hear the agent speak twice). Temporarily disable Strict Mode in next.config.ts while developing.
next.config.ts
import type { NextConfig } from "next";

const nextConfig: NextConfig = {
  reactStrictMode: false // to prevent double audio
};

export default nextConfig;
Create a ui folder with the following components:
mkdir -p app/ui
touch app/ui/AudioVisualization.tsx app/ui/ConnectionStatusIndicator.tsx app/ui/VoiceAgent.tsx app/ui/MicrophoneIcon.tsx app/ui/MicrophoneButton.tsx
A component for visualising the audio.
app/ui/AudioVisualization.tsx
export function AudioVisualization({ amplitude }: { amplitude: number }) {
  // Calculate the height of each bar based on amplitude
  const maxHeight = 36; // Maximum height in pixels (reduced to avoid overflow)
  const minHeight = 6; // Minimum height in pixels

  // Create multipliers for each bar to make middle bars taller
  const multipliers = [0.2, 0.5, 1.0, 0.5, 0.2];

  // Boost amplitude by 5 and ensure it's between 0 and 1 (less aggressive)
  const normalizedAmplitude = Math.min(Math.max(amplitude * 5, 0), 1);

  return (
    <div className="flex items-center gap-[2px] h-8 pl-2 overflow-hidden">
      {multipliers.map((multiplier, index) => {
        // Calculate height based on amplitude, multiplier and min/max constraints
        const height = minHeight + normalizedAmplitude * maxHeight * multiplier;

        return (
          <div
            key={index}
            className="bg-[#FF5B41] dark:bg-[#FF7B61] w-1.5 rounded-sm transition-all duration-75"
            style={{ height: `${height}px` }}
          />
        );
      })}
    </div>
  );
}
A component for telling us if we're connected or not.
app/ui/ConnectionStatusIndicator.tsx
export function ConnectionStatusIndicator({ status }: { status: string }) {
  return (
    <div className="justify-self-start flex items-center gap-2 bg-white dark:bg-gray-800 sm:px-3 p-1 rounded-full shadow-sm dark:shadow-gray-900/30">
      <div
        className={`w-3 h-3 rounded-full ${status === 'connected' ? 'bg-green-500' : status === 'connecting' ? 'bg-yellow-500' : 'bg-red-500'
          }`}
      />
      <span className="text-sm text-gray-700 dark:text-gray-300 hidden sm:block">
        {status === 'connected'
          ? 'Connected'
          : status === 'connecting'
            ? 'Connecting...'
            : status === 'error'
              ? 'Connection Error'
              : 'Disconnected'}
      </span>
    </div>
  );
}
A microphone icon.
app/ui/MicrophoneIcon.tsx
export const MicrophoneIcon = () => (
  <svg
    xmlns="http://www.w3.org/2000/svg"
    width="20"
    height="20"
    viewBox="0 0 24 24"
    fill="none"
    stroke="currentColor"
    strokeWidth="2"
    strokeLinecap="round"
    strokeLinejoin="round"
  >
    <path d="M12 2a3 3 0 0 0-3 3v7a3 3 0 0 0 6 0V5a3 3 0 0 0-3-3Z" />
    <path d="M19 10v2a7 7 0 0 1-14 0v-2" />
    <line x1="12" x2="12" y1="19" y2="22" />
  </svg>
);

A microphone button.
app/ui/MicrophoneButton.tsx
'use client';

import { MicrophoneIcon } from './MicrophoneIcon';

export function MicrophoneButton() {
  return (
    <div className="relative">
      <div className="flex items-center justify-center select-none text-gray-600 dark:text-gray-300">
        <MicrophoneIcon />
      </div>
    </div>
  );
}
And the main component, where most of the logic lives.
app/ui/VoiceAgent.tsx
'use client';

import { useLayercodeAgent } from '@layercode/react-sdk';
import { AudioVisualization } from './AudioVisualization';
import { ConnectionStatusIndicator } from './ConnectionStatusIndicator';
import { MicrophoneButton } from './MicrophoneButton';

export default function VoiceAgent() {
  const { userAudioAmplitude, agentAudioAmplitude, status } = useLayercodeAgent({
    agentId: process.env.NEXT_PUBLIC_LAYERCODE_AGENT_ID!,
    authorizeSessionEndpoint: '/api/authorize',
    onDataMessage: (data) => {
      console.log('Received data msg', data);
    },
  });

  return (
    <div className="fixed bottom-4 w-full px-8 grid grid-cols-3 items-center z-50">
      <ConnectionStatusIndicator status={status} />
      <div className="justify-self-center flex gap-4 items-center rounded-full border border-gray-100 dark:border-gray-900 py-2 pr-2 pl-3 bg-white dark:bg-gray-950 shadow-md dark:shadow-gray-900/30">
        <AudioVisualization amplitude={Math.max(userAudioAmplitude, agentAudioAmplitude)} />
        <MicrophoneButton />
      </div>
    </div>
  );
}
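The backend's stream.data({ aiIsThinking: true }) call arrives in onDataMessage, so you can surface a "thinking" state in the UI. A minimal sketch inside VoiceAgent, assuming the payload is the same object passed to stream.data on the backend:

import { useEffect, useState } from 'react';

// Inside VoiceAgent:
const [isThinking, setIsThinking] = useState(false);

const { userAudioAmplitude, agentAudioAmplitude, status } = useLayercodeAgent({
  agentId: process.env.NEXT_PUBLIC_LAYERCODE_AGENT_ID!,
  authorizeSessionEndpoint: '/api/authorize',
  onDataMessage: (data) => {
    // The agent route sends { aiIsThinking: true } just before streaming TTS
    if (data?.aiIsThinking) setIsThinking(true);
  },
});

// Clear the flag once agent audio actually starts
useEffect(() => {
  if (agentAudioAmplitude > 0.01) setIsThinking(false);
}, [agentAudioAmplitude]);

You could then render something like {isThinking && <span>Thinking...</span>} next to the status indicator.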
Finally, we need to update the page component to load VoiceAgent:
app/page.tsx
'use client';
import dynamic from 'next/dynamic';

const VoiceAgent = dynamic(() => import('./ui/VoiceAgent'), { ssr: false });

export default function Home() {
  return (
    <div className="w-full min-h-[80vh] flex items-center justify-center">
      <VoiceAgent />
    </div>
  );
}
That's all the setup done. You can now run your app locally with:
npm run dev
Right now your app will display an error. That's because we haven't told Layercode where to send the voice it transcribes. Make a note of the port number, though.

Set up a local tunnel and save it in the Layercode dashboard

Now you need to tunnel your endpoint to the web so that Layercode can reach it. We recommend ngrok. Here is more information on tunnelling.

First, install ngrok: https://ngrok.com/downloads/

Then run:
ngrok http YOUR_APPS_PORT_NUMBER
  1. ngrok gives you a URL (e.g., https://8bbf4104a752.ngrok-free.app). Append /api/agent to it, e.g. https://8bbf4104a752.ngrok-free.app/api/agent.
  2. Take that URL and go to your Layercode agent dashboard at https://dash.layercode.com/.
  3. Click Connect your backend, paste that URL into Webhook URL, and press Connect.
  4. If you haven't already, note down the webhook secret and put it into your .env.local as LAYERCODE_WEBHOOK_SECRET.

Now refresh your app and test it out. It should work, but if you get stuck, please email us!
Deploying to production? Update the Webhook URL to your production domain. See Deploying your app.

Next Steps

Congratulations! Now we recommend tweaking the prompt or welcome_message in layercode-config.json to customize your agent.
layercode-config.json
  "prompt": "You are having a spoken conversation..."
Prefer a full reference implementation? See the complete repo: layercodedev/example-fullstack.