This guide shows how to build an AI SDK orchestrator that loops over specialized sub-agents, then pipes the final response into Layercode for voice delivery.
Prerequisites
- A Next.js (or plain Fetch) endpoint that already handles Layercode server webhooks.
- Familiarity with AI SDK streamText and JavaScript tool calling.
- (Optional) Custom webhook metadata if you want to preload caller info.
1. Define a shared persona + sub-agent config
Each sub-agent reuses a common voice persona so the caller hears a consistent tone, then narrows to a specific responsibility and tool set, for example sales, policy, and escalations, each with its own Zod schema + tooling.
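A minimal sketch of that registry, assuming AI SDK v5 (the tool helper with inputSchema; v4 names this field parameters). The agent names, prompts, and the lookupQuote stub are illustrative:

```ts
import { tool } from 'ai';
import { z } from 'zod';

// Shared persona keeps the voice consistent no matter which sub-agent answers.
const sharedPersona = `You are a warm, concise voice assistant.
Keep answers under two sentences unless the caller asks for detail.`;

export const subAgents = {
  sales: {
    system: `${sharedPersona}\nYou handle new quotes and pricing questions.`,
    tools: {
      lookupQuote: tool({
        description: 'Fetch an existing quote by its ID',
        inputSchema: z.object({ quoteId: z.string() }),
        execute: async ({ quoteId }) => ({ quoteId, premium: 420 }), // stub data
      }),
    },
  },
  policy: {
    system: `${sharedPersona}\nYou answer questions about existing policies.`,
    tools: {}, // policy-specific tools go here
  },
  escalations: {
    system: `${sharedPersona}\nYou gather details and promise a human follow-up.`,
    tools: {}, // escalation tools go here
  },
};

export type SubAgentName = keyof typeof subAgents;
```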
2. Build an internal transfer tool
Transfers are just tool calls that tell the orchestrator to switch sub-agents. They never surface to the caller; the orchestrator logs the transfer reason in system messages so the next agent instantly knows why the call switched.
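Sketched with the same AI SDK tool helper; the target enum mirrors the sub-agent names above:

```ts
import { tool } from 'ai';
import { z } from 'zod';

// The orchestrator inspects this tool call itself; its result is never spoken.
export const transferTool = tool({
  description:
    'Hand the conversation to another sub-agent. Use only when the request is outside your responsibility.',
  inputSchema: z.object({
    target: z.enum(['sales', 'policy', 'escalations']),
    reason: z.string().describe('One sentence explaining why the transfer is needed'),
  }),
  execute: async ({ target, reason }) => ({ transferred: true, target, reason }),
});
```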
3. Orchestrator loop
The orchestrator is a loop that keeps calling the same LLM with different system prompts + tools until no transfer is requested. Persist messages per conversationId (database, KV, Durable Object, etc.) to maintain full history across turns.
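One way to write that loop, assuming the subAgents registry and transferTool from the previous steps; loadMessages and saveMessages are hypothetical persistence helpers keyed by conversationId:

```ts
import { generateText, type ModelMessage } from 'ai';
import { openai } from '@ai-sdk/openai';

// Hypothetical persistence helpers (database, KV, Durable Object, etc.).
declare function loadMessages(conversationId: string): Promise<ModelMessage[]>;
declare function saveMessages(conversationId: string, messages: ModelMessage[]): Promise<void>;

export async function runOrchestrator(conversationId: string, userText: string): Promise<string> {
  const messages = await loadMessages(conversationId);
  messages.push({ role: 'user', content: userText });

  let active: SubAgentName = 'sales';
  const MAX_TRANSFERS = 3; // guard against agents ping-ponging the call

  for (let hop = 0; hop <= MAX_TRANSFERS; hop++) {
    const agent = subAgents[active];
    const result = await generateText({
      model: openai('gpt-4o-mini'),
      system: agent.system,
      messages,
      tools: { ...agent.tools, transfer: transferTool },
    });

    const transferCall = result.toolCalls.find((c) => c.toolName === 'transfer');
    if (!transferCall) {
      // No transfer requested: this is the final answer for the turn.
      messages.push({ role: 'assistant', content: result.text });
      await saveMessages(conversationId, messages);
      return result.text;
    }

    // Record the hand-off internally; the caller never hears this.
    const { target, reason } = transferCall.input as { target: SubAgentName; reason: string };
    active = target;
    messages.push({ role: 'system', content: `Transferred to ${target}: ${reason}` });
  }

  await saveMessages(conversationId, messages);
  return 'Let me connect you with someone who can help further.';
}
```

The sketch uses generateText so tool calls can be inspected before anything is spoken; in production you would typically swap in streamText and forward the text stream to cut latency.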
4. Connect to Layercode
Wrap the orchestrator inside your Layercode webhook handler. Each webhook turn can stream TTS back to the caller and carries a conversation_id so the orchestrator can pick up where it left off.
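A sketch of that handler as a Next.js route, assuming the streamResponse helper from Layercode's @layercode/node-server-sdk and the runOrchestrator function above; verify the exact payload field names against the SDK docs:

```ts
import { streamResponse } from '@layercode/node-server-sdk';

export const POST = async (request: Request) => {
  const payload = await request.json();

  return streamResponse(payload, async ({ stream }) => {
    if (payload.type === 'message') {
      // conversation_id lets the orchestrator resume the stored history.
      const reply = await runOrchestrator(payload.conversation_id, payload.text);
      stream.tts(reply); // streamed back to the caller as speech
    }
    stream.end();
  });
};
```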
5. Pass caller context with metadata
Before the call even starts, you often know the lead’s name, quote ID, or campaign. Attach that information via custom webhook metadata so it arrives inside every session.start and message payload. You can then preload the orchestrator’s history with system messages like Lead name: Jordan Carter, or seed tool inputs with the quote identifier.
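For example, a session.start branch inside the same webhook handler could seed the history; the payload shape and metadata keys (lead_name, quote_id) are illustrative, and saveMessages is the hypothetical persistence helper from step 3:

```ts
import type { ModelMessage } from 'ai';

// Illustrative payload shape; verify field names against Layercode's webhook docs.
type SessionStartPayload = {
  type: 'session.start';
  conversation_id: string;
  metadata?: { lead_name?: string; quote_id?: string };
};

async function seedFromMetadata(payload: SessionStartPayload): Promise<void> {
  const seed: ModelMessage[] = [];
  if (payload.metadata?.lead_name) {
    seed.push({ role: 'system', content: `Lead name: ${payload.metadata.lead_name}` });
  }
  if (payload.metadata?.quote_id) {
    seed.push({ role: 'system', content: `Active quote: ${payload.metadata.quote_id}` });
  }
  await saveMessages(payload.conversation_id, seed);
}
```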
Next steps
- Replace stub tool logic with real CRM / policy lookups.
- Persist orchestrator state (messages, active sub-agent, outstanding tasks) in a durable store.
- Tune prompts for stricter routing rules (e.g., escalate only when specific keywords appear).
- Add interrupt handling + barge-in support from the voice quick start.