Implement the Layercode Webhook SSE API in your FastAPI backend.
This guide shows you how to implement the Layercode Webhook SSE API in a Python backend using FastAPI. You’ll learn how to set up a webhook endpoint that receives transcribed messages from the Layercode voice pipeline and streams the agent’s responses back to the frontend, to be turned into speech and spoken back to the user. You can test your backend using the Layercode dashboard playground or by following the Build a Web Voice Agent guide.
Example code: layercodedev/example-backend-fastapi
pip:
uv:
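The original install commands are not shown here; the dependency list below is a guess based on what this guide uses (FastAPI, Uvicorn, and the Google AI SDK). Check the example repo's pyproject.toml or requirements file for the exact packages.

```shell
# With pip (package names are assumptions; verify against the example repo):
pip install fastapi uvicorn python-dotenv

# Or, with uv:
uv add fastapi uvicorn python-dotenv
```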
Edit your .env environment variables. You'll need to add:
- GOOGLE_GENERATIVE_AI_API_KEY - Your Google AI API key
- LAYERCODE_WEBHOOK_SECRET - Your Layercode pipeline's webhook secret, found in the Layercode dashboard (go to your pipeline, click Edit in the Your Backend box and copy the webhook secret shown)
- LAYERCODE_API_KEY - Your Layercode API key, found in the Layercode dashboard settings

Here's an example of our Layercode webhook endpoint, which generates responses using Google Gemini and streams them back to the frontend as SSE events. See the GitHub repo for the full example.
Your backend should also expose an endpoint that returns a client_session_key (and optionally a session_id) to the frontend. This key is required for the frontend to establish a secure WebSocket connection to Layercode. See the GitHub repo for the full example.
Start your FastAPI server with Uvicorn:
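Assuming your FastAPI application object is named app and lives in main.py (adjust main:app to match your project layout), the command looks like:

```shell
uvicorn main:app --reload --port 8000
```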
In the Layercode dashboard, go to your pipeline settings. Under Your Backend, click Edit and set the URL of your webhook endpoint.
If running this example locally, set up a tunnel (we recommend cloudflared, which is free for development) to your localhost so the Layercode webhook can reach your backend. Follow our tunnelling guide.
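With cloudflared installed, a quick tunnel to a local server can be started like this (assuming your backend listens on port 8000; the command prints a public URL to use as your webhook endpoint):

```shell
cloudflared tunnel --url http://localhost:8000
```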
There are two ways to test your voice agent: