
Overview

Tracing gives you full visibility into what happened during a test call or live call. It’s one of the most powerful features for making your voice agents production-ready. With traces, you can see:
  • The exact conversation that occurred
  • What prompts were sent to the LLM
  • Which tools the LLM had access to (and which it called)
  • Speech-to-text transcriptions of user input, along with any transcription errors
  • Tool call requests and responses

Accessing Traces

After a call ends, you’ll see a call summary screen. Click the Traces tab to view the detailed trace log for that call.

Understanding Trace Sections

Traces are organized by event type. The two most important sections to focus on are:

STT (Speech-to-Text)

These entries capture the transcription of what the user said during the call. Each STT entry shows the text conversion of the user’s speech at that point in the conversation. Example:
STT: “Yeah, I would want to check what your operating hours are.”

LLM (Language Model Calls)

These entries show every call made to the LLM. Each LLM trace includes:
  • Prompt - The full prompt sent to the LLM (combination of global prompt + node-specific prompt)
  • Conversation history - The entire conversation up to that point
  • Available tools - All tools the LLM can access for that node
  • Response - What the LLM generated
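
As a rough illustration, an LLM trace entry carries this kind of information (field names here are hypothetical, not Dograh’s exact schema):
# Hypothetical sketch of the information in one LLM trace entry.
# Field names are illustrative; Dograh's actual schema may differ.
llm_trace = {
    "prompt": "<global prompt> + <node prompt>",
    "conversation": [
        {"role": "user", "content": "Yeah, I would want to check what your operating hours are."},
    ],
    "available_tools": ["Safe Calculator", "Get Current Time", "Convert Time", "end_call"],
    "response": "We're open 9am to 6pm, Monday through Friday.",
}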

Tools in Traces

Each LLM call shows the tools available to the agent at that moment. These fall into three categories:

System Tools (Default)

Every node includes these by default:
  • Safe Calculator — Helps LLMs perform math accurately
  • Get Current Time — Retrieves current time
  • Convert Time — Converts time to a specific timezone
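
For intuition, here is roughly what the two time tools do (a minimal Python sketch, not Dograh’s actual implementation):
from datetime import datetime
from zoneinfo import ZoneInfo

def get_current_time() -> str:
    # What a "Get Current Time" style tool returns: the current time as text.
    return datetime.now(ZoneInfo("UTC")).isoformat()

def convert_time(time_iso: str, to_timezone: str) -> str:
    # What a "Convert Time" style tool does: re-express a timestamp in another zone.
    # Assumes an ISO-8601 timestamp with an offset, e.g. "2024-01-01T12:00:00+00:00".
    return datetime.fromisoformat(time_iso).astimezone(ZoneInfo(to_timezone)).isoformat()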

Pathway Tools

These correspond to the pathways (node transitions) you’ve configured. For example, if your node has pathways to “End Call” and “Move to Summary,” you’ll see those as available tools. The tool descriptions shown in traces match the descriptions you set in your pathway configuration.
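
For instance, an “End Call” pathway might surface in the trace as a tool definition along these lines (the shape is illustrative; Dograh’s exact schema may differ):
# Illustrative only: a pathway exposed to the LLM as a callable tool.
end_call_pathway_tool = {
    "name": "end_call",
    "description": "Use when the user has no further questions.",  # the description from your pathway configuration
    "parameters": {"type": "object", "properties": {}},
}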

Custom Tools

Any external tools you’ve attached to the node (e.g., custom tools you created for API endpoints such as booking or order submission) will appear here. See the custom tools documentation: https://docs.dograh.com/voice-agent/tools.

Tool Calls in Traces

When the LLM decides to call a tool, you’ll see:
  1. Tool call request — The tool name and parameters sent
  2. Tool response — The result returned (e.g., {"status": "ok"})
The tool call and response also appear in subsequent conversation history, so the LLM knows the outcome. Example flow:
User: “Can you book me a table for tomorrow at 7pm?”
LLM calls: book_table(date: “tomorrow”, time: “7pm”, party_size: 2)
Tool response: {"status": "done", "confirmation": "Table booked"}
Agent: “I’ve booked your table for tomorrow at 7pm.”
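
In message-list terms, that flow lands in the conversation history roughly like this (an OpenAI-style sketch; the exact trace format may differ):
# Sketch of how a tool call and its result persist in conversation history.
history = [
    {"role": "user", "content": "Can you book me a table for tomorrow at 7pm?"},
    # The LLM's tool call request, recorded so later turns can see it.
    {"role": "assistant", "tool_calls": [{
        "id": "call_1",  # hypothetical id
        "function": {"name": "book_table",
                     "arguments": '{"date": "tomorrow", "time": "7pm", "party_size": 2}'},
    }]},
    # The tool's result, kept in history so the LLM knows the outcome.
    {"role": "tool", "tool_call_id": "call_1",
     "content": '{"status": "done", "confirmation": "Table booked"}'},
    {"role": "assistant", "content": "I've booked your table for tomorrow at 7pm."},
]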

How Prompts Are Combined

The prompt sent to the LLM is a combination of:
  1. Global Prompt — Defined in your Global Node settings
  2. Node Prompt — Defined in the specific node being executed
If you disable the Global Node for a particular node, only the node-specific prompt will be used.
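
A minimal sketch of that combination logic (function and parameter names are illustrative, not Dograh’s internals):
def build_prompt(global_prompt: str, node_prompt: str, global_node_enabled: bool = True) -> str:
    # Combine the Global Node prompt with the node-specific prompt.
    # With the Global Node disabled for this node, only the node prompt is used.
    if not global_node_enabled:
        return node_prompt
    return f"{global_prompt}\n\n{node_prompt}"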

Key Concept: Conversation History

Every time an LLM call is made, the entire conversation history up to that point is passed to the LLM. This means:
  • The LLM always has full context
  • You can see exactly what context the LLM had when it made a decision
  • Useful for debugging unexpected responses
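
Conceptually it works like the sketch below (illustrative code, not Dograh’s implementation; call_llm stands in for the actual model call):
history: list[dict] = []

def call_llm(messages: list[dict]) -> str:
    ...  # placeholder for the real LLM request

def on_user_turn(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    # The entire history accumulated so far goes to the LLM on every call.
    reply = call_llm(messages=history)
    history.append({"role": "assistant", "content": reply})
    return reply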

Setting Up Tracing in Self-Hosted (Open Source) Dograh AI

Langfuse Integration

We provide seamless integration with Langfuse for self-hosted Dograh deployments. Setup steps:
  1. Sign up at Langfuse and create API credentials
  2. Add the following environment variables (in docker-compose.yaml for Docker deployments; see the excerpt after these steps):
ENABLE_TRACING="true"
LANGFUSE_SECRET_KEY=
LANGFUSE_PUBLIC_KEY=
LANGFUSE_HOST=
  3. Restart your services
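
In a Docker deployment, those variables go under your Dograh service’s environment block, roughly like this (“api” is a placeholder service name; match it to your own docker-compose.yaml):
# docker-compose.yaml (excerpt); "api" is a placeholder service name
services:
  api:
    environment:
      - ENABLE_TRACING=true
      - LANGFUSE_SECRET_KEY=sk-lf-...   # from your Langfuse project settings
      - LANGFUSE_PUBLIC_KEY=pk-lf-...
      - LANGFUSE_HOST=https://cloud.langfuse.com  # or your self-hosted Langfuse URL
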
Once enabled, traces will be available for every completed call in Dograh.

Quick Reference

Trace Type | What It Shows
---------- | -------------
STT        | User speech transcribed to text
LLM        | Prompt + conversation + tools + response
Tool Call  | Tool invocation and response

Tips for Using Traces

  • Debug unexpected responses — Check the LLM trace to see what prompt and context the model received
  • Verify tool calls — Confirm tools are being triggered with correct parameters
  • Refine prompts — Use traces to see how your prompt instructions affect LLM behavior
  • Check transcription accuracy — Review STT entries if the agent misunderstands users