Layercode makes it easy to build web-based voice agent applications in React. This guide walks you through a full-stack React example voice agent, letting users speak to a voice AI in their browser.

Example code: layercodedev/example-frontend-react

This frontend example is designed for use with a Layercode Hosted Backend.

Setup

To get started, you’ll need a Layercode account and a voice pipeline. If you haven’t done so yet, follow our Getting Started Guide.

Clone the example repo and install dependencies:

git clone https://github.com/layercodedev/example-frontend-react.git
cd example-frontend-react
npm install

Project structure

This project uses Vite for fast React development, Tailwind CSS for styling, and TypeScript.

How it works

Connect to a Layercode voice pipeline

We use the React SDK useLayercodePipeline hook, which handles all the complexity required for real-time, low-latency, two-way voice agent interactions.

Here’s a simplified example of how to use the React SDK in a React application:

import { useLayercodePipeline } from "@layercode/react-sdk";
import { AudioVisualization } from "./AudioVisualization";
import { ConnectionStatusIndicator } from "./ConnectionStatusIndicator";
import { MicrophoneIcon } from "../icons/MicrophoneIcon";

export default function VoiceAgent() {
  const { agentAudioAmplitude, status } = useLayercodePipeline({
    pipelineId: "your-pipeline-id",
    authorizeSessionEndpoint: "/api/authorize",
    onDataMessage: (data) => {
      console.log("Received data msg", data);
    },
  });

  return (
    <div className="w-96 h-96 border border-white rounded-lg flex flex-col gap-20 items-center justify-center">
      <h1 className="text-gray-800 text-xl font-bold">Voice Agent Demo</h1>
      <AudioVisualization amplitude={agentAudioAmplitude} height={75} />
      <div className="flex flex-col gap-4 items-center justify-center">
        <div className="h-12 px-4 rounded-full flex items-center gap-2 justify-center select-none bg-[#FF5B41]">
          <MicrophoneIcon />
        </div>
        <ConnectionStatusIndicator status={status} />
      </div>
    </div>
  );
}

The useLayercodePipeline hook accepts the following options (shown in the example above):

  • pipelineId: The ID of your Layercode voice pipeline, found in the Layercode Dashboard
  • authorizeSessionEndpoint: The endpoint on your backend that authorizes the session and returns a client session key
  • onDataMessage: A callback invoked when your backend sends data messages to the client

On mount, the useLayercodePipeline hook will:

  1. Make a request to your authorize session endpoint to create a new session and return the client session key
  2. Establish a WebSocket connection to Layercode (using the client session key)
  3. Capture microphone audio from the user and stream it to the Layercode voice pipeline for transcription
  4. (At this stage, Layercode will call the Hosted Backend or Your Backend webhook to generate a response, and then convert the response from text to speech)
  5. Playback audio of the voice agent’s response to the user in their browser, as it’s generated
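Step 1 implies a small authorize endpoint on your backend (the /api/authorize path passed to the hook). Below is a minimal, framework-agnostic sketch in TypeScript. The Layercode REST URL, the pipeline_id field name, and the response shape are assumptions for illustration; check the Hosted Backend docs for the exact contract. The important point is that your Layercode API key stays server-side and only the client session key reaches the browser.

```typescript
// Hypothetical shape of the request your backend sends to Layercode.
type AuthorizePayload = { pipeline_id: string };

// Build the payload for the authorize call. Keeping this pure makes it easy to test.
export function buildAuthorizePayload(pipelineId: string): AuthorizePayload {
  if (!pipelineId) throw new Error("pipelineId is required");
  return { pipeline_id: pipelineId };
}

// Sketch of the /api/authorize handler body. The URL below is an assumption,
// not a confirmed Layercode endpoint.
export async function authorizeSession(pipelineId: string, apiKey: string) {
  const res = await fetch("https://api.layercode.com/v1/pipelines/authorize_session", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`, // Layercode API key, never exposed to the browser
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildAuthorizePayload(pipelineId)),
  });
  if (!res.ok) throw new Error(`Authorize failed: ${res.status}`);
  return res.json(); // expected to contain the client session key the SDK uses
}
```

Whatever framework you use, the handler should forward this response body to the client so the useLayercodePipeline hook can open its WebSocket connection.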

The useLayercodePipeline hook returns an object with the following properties:

  • status: The connection status of the voice agent. You can show this to the user to indicate the connection status.
  • agentAudioAmplitude: The amplitude of the audio from the voice agent. You can use this to drive an animation when the voice agent is speaking.

By default, your voice pipeline handles turn taking in automatic mode, but you can configure it to use push-to-talk mode instead. If you use push-to-talk mode, see the push-to-talk instructions in the repo README and read about how the VoiceAgentPushToTalk component works below.

Components

AudioVisualization

The AudioVisualization component is used to visualize the audio from the voice agent. It uses the agentAudioAmplitude value returned from the useLayercodePipeline hook to drive the height of the audio bars with a simple animation.

src/ui/AudioVisualization.tsx
export function AudioVisualization({ amplitude, height = 46 }: { amplitude: number; height?: number }) {
  // Calculate the height of each bar based on amplitude
  const maxHeight = height;
  const minHeight = Math.floor(height / 6);
  const barWidth = Math.floor(minHeight);

  // Create multipliers for each bar to make middle bars taller
  const multipliers = [0.2, 0.5, 1.0, 0.5, 0.2];

  // Boost amplitude by 7 and ensure it's between 0 and 1
  const normalizedAmplitude = Math.min(Math.max(amplitude * 7, 0), 1);

  return (
    <div className="w-auto flex items-center gap-[2px]" style={{ height: `${height}px` }}>
      {multipliers.map((multiplier, index) => {
        const barHeight = minHeight + normalizedAmplitude * maxHeight * multiplier;

        return (
          <div
            key={index}
            className="flex flex-col items-center"
            style={{
              height: `${barHeight}px`,
              width: `${barWidth}px`,
            }}
          >
            {/* Top rounded cap */}
            <div
              className="bg-[#FF5B41] dark:bg-[#FF7B61] transition-all duration-20"
              style={{
                width: "100%",
                height: `${barWidth}px`,
                borderTopLeftRadius: "9999px",
                borderTopRightRadius: "9999px",
              }}
            />
            {/* Middle straight section */}
            <div
              className="bg-[#FF5B41] dark:bg-[#FF7B61] transition-all duration-20"
              style={{
                width: "100%",
                height: `calc(100% - ${2 * barWidth}px)`,
                borderRadius: 0,
              }}
            />
            {/* Bottom rounded cap */}
            <div
              className="bg-[#FF5B41] dark:bg-[#FF7B61] transition-all duration-20"
              style={{
                width: "100%",
                height: `${barWidth}px`,
                borderBottomLeftRadius: "9999px",
                borderBottomRightRadius: "9999px",
              }}
            />
          </div>
        );
      })}
    </div>
  );
}

ConnectionStatusIndicator

The ConnectionStatusIndicator component is used to display the connection status of the voice agent. It uses the status value returned from the useLayercodePipeline hook to display the connection status.

src/ui/ConnectionStatusIndicator.tsx
export function ConnectionStatusIndicator({ status }: { status: string }) {
  return (
    <div className="justify-self-start flex items-center gap-2 bg-white dark:bg-gray-800 sm:px-3 p-1 rounded-full shadow-sm dark:shadow-gray-900/30">
      <div className={`w-3 h-3 rounded-full ${status === "connected" ? "bg-green-500" : status === "connecting" ? "bg-yellow-500" : "bg-red-500"}`} />
      <span className="text-sm text-gray-700 dark:text-gray-300 hidden sm:block">
        {status === "connected" ? "Connected" : status === "connecting" ? "Connecting..." : status === "error" ? "Connection Error" : "Disconnected"}
      </span>
    </div>
  );
}

VoiceAgentPushToTalk (optional)

Because the useLayercodePipeline hook handles all of the audio streaming and playback, the microphone button is usually just a visual aid and doesn’t implement any logic. A simple microphone icon inside a circle is enough for most applications.

Layercode also supports push-to-talk turn taking as an alternative to automatic turn taking (read more about turn taking). In push-to-talk mode, holding down and releasing the microphone button must send WebSocket messages telling Layercode when the user has started and finished talking. For this, the example provides an alternative VoiceAgentPushToTalk component which, together with the MicrophoneButtonPushToTalk component, handles this logic.

To use this mode, you’ll need to edit src/App.tsx to use the VoiceAgentPushToTalk component instead of the VoiceAgent component. Then in your Layercode Dashboard, you’ll need to click Edit in the Transcription section of your voice pipeline and set the Turn Taking to Push to Talk.

import { useLayercodePipeline } from '@layercode/react-sdk';
import { AudioVisualization } from './AudioVisualization';
import { ConnectionStatusIndicator } from './ConnectionStatusIndicator';
import { MicrophoneButtonPushToTalk } from './MicrophoneButtonPushToTalk';

export default function VoiceAgentPushToTalk() {
  const { agentAudioAmplitude, status, triggerUserTurnStarted, triggerUserTurnFinished } = useLayercodePipeline({
    pipelineId: "your-pipeline-id",
    authorizeSessionEndpoint: '/api/authorize',
    onDataMessage: (data) => {
      console.log('Received data msg', data);
    },
  });

  return (
    <div className="w-96 h-96 border border-white rounded-lg flex flex-col gap-20 items-center justify-center">
      <h1 className="text-gray-800 text-xl font-bold">Voice Agent Demo</h1>
      <AudioVisualization amplitude={agentAudioAmplitude} height={75} />
      <div className="flex flex-col gap-4 items-center justify-center">
        <MicrophoneButtonPushToTalk triggerUserTurnStarted={triggerUserTurnStarted} triggerUserTurnFinished={triggerUserTurnFinished} />
        <ConnectionStatusIndicator status={status} />
      </div>
    </div>
  );
}
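The MicrophoneButtonPushToTalk component itself lives in the example repo; the sketch below shows the core logic it needs, not the repo’s actual implementation. The idea is to call triggerUserTurnStarted when the button is pressed and triggerUserTurnFinished when it is released (or the pointer leaves the button), with a small guard so a stray release can’t send a second “turn finished” message. The helper name createPushToTalkHandlers is illustrative.

```typescript
type TurnTriggers = {
  triggerUserTurnStarted: () => void;
  triggerUserTurnFinished: () => void;
};

// Returns event handlers to spread onto the button element.
// `held` tracks whether a turn is in progress, so duplicate press or
// release events don't fire extra turn messages.
export function createPushToTalkHandlers({ triggerUserTurnStarted, triggerUserTurnFinished }: TurnTriggers) {
  let held = false;
  const start = () => {
    if (held) return;
    held = true;
    triggerUserTurnStarted(); // tell Layercode the user started talking
  };
  const finish = () => {
    if (!held) return;
    held = false;
    triggerUserTurnFinished(); // tell Layercode the user finished talking
  };
  return { onMouseDown: start, onMouseUp: finish, onMouseLeave: finish, onTouchStart: start, onTouchEnd: finish };
}
```

In the React component you would spread these handlers onto the button, e.g. `<button {...createPushToTalkHandlers({ triggerUserTurnStarted, triggerUserTurnFinished })}>`, passing in the trigger functions returned by useLayercodePipeline.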