Tracing & Visualization
Because every LLM call, tool invocation, HTTP request, and human input flows through a host function, Chidori gets a full structured trace of every run for free. You can consume that trace three ways: as JSON on stderr, as a session API payload, or as an interactive timeline in the debugger.
The call log
Every entry in a trace is a record of one host function call:
{
"seq": 3,
"function": "tool",
"args": {"name": "web_search", "query": "rust async runtime"},
"result": [{"title": "...", "url": "..."}, ...],
"duration_ms": 812,
"timestamp": "2026-04-11T21:10:27.118904Z"
}

For prompt calls, the entry also includes token_usage with input/output token counts. Aggregate across sessions to find expensive paths.
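Entries this regular are easy to aggregate with a few lines of stdlib Python. A minimal sketch, assuming the fields shown in the example entry; the key names inside token_usage are not specified by the trace format above, so the code just sums whatever counts it finds:

```python
import json

def summarize(call_log):
    """Aggregate call count, duration, and token counts per host function."""
    totals = {}
    for entry in call_log:
        fn = entry["function"]
        stats = totals.setdefault(fn, {"calls": 0, "duration_ms": 0, "tokens": 0})
        stats["calls"] += 1
        stats["duration_ms"] += entry.get("duration_ms", 0)
        # token_usage is only present on prompt calls; sum all of its
        # counts rather than assuming specific key names.
        stats["tokens"] += sum(entry.get("token_usage", {}).values())
    return totals

# Example: feed it entries captured from --trace output, assuming one
# JSON object per line (adjust parsing to the actual stream format):
# with open("trace.jsonl") as f:
#     print(summarize(json.loads(line) for line in f))
```

Sorting the resulting dict by duration_ms or tokens points straight at the expensive paths.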
1. The --trace flag
Add --trace to any chidori run command to stream the full call log to stderr as JSON. Perfect for local debugging:
chidori run agents/researcher.star --input question="What is Rust?" --trace

2. The session API
When the agent runs under chidori serve, the call log is exposed over HTTP:
GET /sessions → list all sessions
GET /sessions/{id} → session result
GET /sessions/{id}/checkpoint → full call log as JSON

curl http://localhost:8080/sessions/c4cac6c7-.../checkpoint | jq '.call_log'

Any log-shipping pipeline that can POST JSON can consume this — point it at a running server and you have full agent telemetry.
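For programmatic consumption, the three endpoints compose naturally. A hedged sketch using only the endpoints listed above — the base URL matches the curl example, and everything beyond the call_log field of the checkpoint payload is an assumption:

```python
import json
import urllib.request

BASE = "http://localhost:8080"  # assumed chidori serve address, as in the curl example

def get_json(path):
    """Fetch one API path and decode the JSON body."""
    with urllib.request.urlopen(BASE + path) as resp:
        return json.load(resp)

def call_log_for(session_id):
    """Fetch a session's checkpoint and return its call log."""
    return get_json(f"/sessions/{session_id}/checkpoint")["call_log"]

def slowest_calls(call_log, n=5):
    """The n slowest host function calls in a call log."""
    return sorted(call_log, key=lambda e: e["duration_ms"], reverse=True)[:n]

# Usage against a running server (session IDs come from GET /sessions):
# for session in get_json("/sessions"):
#     print(slowest_calls(call_log_for(session["id"])))
```

Splitting the pure analysis (slowest_calls) from the HTTP fetch keeps the same helpers usable on checkpoint JSON loaded from disk.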
3. OpenTelemetry → Tael / any OTLP backend
Chidori emits standard OTLP/gRPC spans when OTEL_EXPORTER_OTLP_ENDPOINT is set. Each agent run becomes one parent agent.run span with one host.<function> child span per host function call, complete with model names, token counts, and OTEL semantic-convention attributes.
export OTEL_EXPORTER_OTLP_ENDPOINT=http://127.0.0.1:4317
chidori run agents/researcher.star --input question="..."

Point it at Tael for a CLI-first workflow designed for agent development, or at any OTLP-compatible backend — Jaeger, Tempo, Honeycomb, Datadog, the OpenTelemetry Collector — without code changes.
4. The Chidori Debugger
A desktop app that renders the call log as an interactive timeline:
- Timeline view — every host function call, color-coded by type, annotated with duration and token usage.
- Step into each call — see the exact prompt sent and the exact response received.
- Branch histories — edit a recorded result in place and watch the rest of the agent take a different path, without re-running upstream LLM calls.
- Multi-agent view — when one agent calls another via agent("sub_agent", ...), the debugger nests the sub-agent's call log inside the parent's timeline.
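Branch histories fall out of the call-log design: since the agent is deterministic between host calls, a recorded log can stand in for the real host functions, and swapping one recorded result replays everything upstream verbatim while letting downstream logic diverge. A toy illustration of that replay idea — not the debugger's actual implementation:

```python
class ReplayHost:
    """Answers host function calls from a recorded call log instead of
    hitting real models or tools -- a toy model of branch histories."""

    def __init__(self, call_log, edits=None):
        self.log = list(call_log)
        self.edits = edits or {}  # seq -> replacement result
        self.cursor = 0

    def call(self, function):
        entry = self.log[self.cursor]
        self.cursor += 1
        # Replay must stay aligned with the recording: the agent should
        # request the same host functions in the same order.
        assert entry["function"] == function
        return self.edits.get(entry["seq"], entry["result"])

# Editing the result of call #1 changes what the agent sees there,
# without re-running the recorded calls before or after it:
# host = ReplayHost(log, edits={1: "a different plan"})
```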
Install it with:
cargo install chidori-debugger

Point it at a running chidori serve instance, or load a checkpoint JSON directly from disk.
What you use tracing for
- Finding expensive paths — sort calls by duration_ms or token_usage to see where time and money are going.
- Debugging bad output — step through the exact prompts and responses that led to a wrong answer.
- Regression testing — diff a new trace against a known-good checkpoint.
- Cost reporting — sum token_usage across sessions per user, per feature, or per model.
- Explaining agent decisions — share a checkpoint URL instead of pasting screenshots.
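The regression-testing item can be as simple as a structural diff of two call logs, ignoring fields that legitimately vary between runs. A sketch; which fields count as noise is a judgment call, and here timestamp and duration_ms are assumed to be:

```python
def diff_traces(old_log, new_log, ignore=("timestamp", "duration_ms")):
    """Return (seq, field, old_value, new_value) tuples where two runs diverge."""
    changes = []
    for old, new in zip(old_log, new_log):
        for key in sorted(old.keys() | new.keys()):
            if key in ignore:
                continue
            if old.get(key) != new.get(key):
                changes.append((old.get("seq"), key, old.get(key), new.get(key)))
    if len(old_log) != len(new_log):
        # One run made more host calls than the other.
        changes.append((None, "length", len(old_log), len(new_log)))
    return changes
```

An empty return value means the new run matched the known-good checkpoint call for call.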