Layercode gives you everything you need to run post-call workflows: the webhook events that tell you when a call has ended, REST endpoints to fetch transcript data, and download URLs for finished recordings.

1. Subscribe to the right webhook events

Enable the session.end and session.update events in your agent's webhook configuration. Layercode sends session.end immediately after the call finishes, carrying usage metrics and the full transcript, and follows up later with session.update when the recording file is ready (if session recording is enabled for your org). Your webhook handler should capture both payloads. A minimal example in Next.js:
import type { NextRequest } from 'next/server'

export async function POST(req: NextRequest) {
  const payload = await req.json()

  if (payload.type === 'session.end') {
    // Persist transcript + metrics for analytics or QA queues.
    // For example, insert payload.transcript rows into your database along with
    // the latency + duration stats so dashboards and QA reviewers can query the
    // conversation later.
  }

  if (payload.type === 'session.update' && payload.recording_status === 'completed') {
    // Recording is ready—download the file and kick off downstream processing.
    // For example, stream payload.recording_url into your storage bucket (S3,
    // GCS, etc.) and enqueue summarization or compliance jobs that work off the
    // stored WAV file.
  }

  return new Response('ok')
}

2. Fetch full session details on demand

While the session.end payload already includes the transcript, you can always fetch the authoritative record later through the REST API:
curl -H "Authorization: Bearer $LAYERCODE_API_KEY" \
  https://api.layercode.com/v1/agents/AGENT_ID/sessions/SESSION_ID
The response returns connection timing, phone metadata, transcript entries, and the current recording status. Once the status is completed, the payload includes a recording_url that points to the downloadable WAV file.
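If you prefer to make this request from application code, here is a minimal TypeScript sketch of the same call. It assumes the endpoint shape shown above and a LAYERCODE_API_KEY environment variable; the commented field names come from the response description above.
// Fetch the authoritative session record from the REST API.
export async function fetchSession(agentId: string, sessionId: string) {
  const res = await fetch(
    `https://api.layercode.com/v1/agents/${agentId}/sessions/${sessionId}`,
    { headers: { Authorization: `Bearer ${process.env.LAYERCODE_API_KEY}` } }
  )
  if (!res.ok) throw new Error(`Failed to fetch session ${sessionId}: ${res.status}`)

  const session = await res.json()
  // session.transcript holds the transcript entries; session.recording_status and,
  // once completed, session.recording_url cover the recording.
  return session
}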

3. Download the call recording when it finishes

When you receive a session.update webhook indicating recording_status: "completed", stream the audio file directly from the recording endpoint:
curl -L -H "Authorization: Bearer $LAYERCODE_API_KEY" \
  -o session.wav \
  https://api.layercode.com/v1/agents/AGENT_ID/sessions/SESSION_ID/recording
Layercode returns a WAV file for completed sessions and reports recording_status: "in_progress" if processing is still happening.
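If you want to pull the file from a Node.js job instead of curl, the sketch below downloads the WAV and writes it to disk. It assumes a LAYERCODE_API_KEY environment variable and local storage; swap the file write for an upload to your bucket in production.
import { writeFile } from 'node:fs/promises'

// Download the finished recording for a session and save it as a WAV file.
export async function downloadRecording(agentId: string, sessionId: string, outPath: string) {
  const res = await fetch(
    `https://api.layercode.com/v1/agents/${agentId}/sessions/${sessionId}/recording`,
    { headers: { Authorization: `Bearer ${process.env.LAYERCODE_API_KEY}` } }
  )
  if (!res.ok) throw new Error(`Recording request failed: ${res.status}`)

  // Completed sessions return the WAV bytes directly.
  await writeFile(outPath, Buffer.from(await res.arrayBuffer()))
}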

4. Kick off your analytics pipeline

With transcripts saved and recordings queued, you can start whatever analysis you need: summaries, compliance checks, quality scoring, or AI-powered tagging. A common pattern, sketched after this list, is:
  1. Store the transcript rows in your database when session.end arrives.
  2. Trigger asynchronous jobs from session.update that download the recording and push it to transcription review, summarization, or storage.
  3. Merge results (e.g., LLM summaries, compliance flags, sentiment) back into your customer dashboard once processing completes.
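A rough TypeScript sketch of that pattern is below. The db and queue helpers are hypothetical stand-ins for your own persistence and job-queue layers, and the session_id field name on the webhook payloads is an assumption; adapt both to your stack.
import { db, queue } from './infra' // hypothetical persistence and job-queue modules

// 1. When session.end arrives, store the transcript rows for analytics and QA.
export async function handleSessionEnd(payload: { session_id: string; transcript: unknown[] }) {
  await db.transcripts.insertMany(
    payload.transcript.map((turn) => ({ sessionId: payload.session_id, turn }))
  )
}

// 2. When session.update reports a completed recording, enqueue the heavy lifting.
export async function handleSessionUpdate(payload: { session_id: string; recording_status: string }) {
  if (payload.recording_status !== 'completed') return
  await queue.enqueue('process-recording', { sessionId: payload.session_id })
}

// 3. The process-recording worker downloads the audio, runs summarization, compliance,
//    and sentiment jobs, then merges the results back into your customer dashboard.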

Example: summarize transcripts with the Vercel AI SDK

Once the transcript is stored, you can enrich it with an LLM call that produces business-ready insights. The snippet below shows how to send the transcript text to an OpenAI model using the Vercel AI SDK and extract a structured summary, caller name, intent, follow-ups, and sentiment flag:
import { generateObject } from 'ai'
import { openai } from '@ai-sdk/openai'
import { z } from 'zod'

// Structured insights we want back from the model.
const postCallInsightsSchema = z.object({
  summary: z.string(),
  intent: z.string(),
  follow_ups: z.string().nullable(),
  customerName: z.string().nullable(),
  sentiment: z.enum(['happy', 'frustrated', 'neutral']),
})

export type PostCallInsights = z.infer<typeof postCallInsightsSchema>

export async function analyzeSessionTranscript(transcript: string): Promise<PostCallInsights> {
  const { object } = await generateObject({
    model: openai('gpt-4o-mini'),
    schema: postCallInsightsSchema,
    prompt: `You are analyzing an AI voice call for an out of hours plumbing service.\n\nTranscript:\n${transcript}\n\nSummarize the call in 2 sentences, capture the caller's first name if stated, their call intent, follow ups (if applicable), and classify whether the caller felt happy, frustrated, or neutral by the end of the call.`,
  })

  return object
}
Call analyzeSessionTranscript after you persist the transcript rows so downstream dashboards and QA tools can display the summary alongside the original conversation.