Host Functions

Host functions are the only way a Chidori agent interacts with anything outside its own code. They are the one and only side-effect boundary — every call is logged, cacheable, and replayable. This page is the reference.

prompt(text, **kwargs) → string | dict | list

Send a message to an LLM. Returns the response as a string, or as a parsed dict/list when format="json".

# Simple
answer = prompt("What is 2+2?")

# Full options
result = prompt(
    "Analyze this data: " + data,
    model       = "claude-opus",
    temperature = 0.2,
    max_tokens  = 4000,
    system      = "You are an analyst.",
    format      = "json",
    tools       = ["web_search", "calc"],
    max_turns   = 5,
)

When tools=[...] is set, the LLM can autonomously invoke those tools; max_turns caps the tool-use loop.
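The loop that max_turns bounds can be pictured roughly as follows. This is a plain-Python sketch of the control flow, not Chidori's internals; every name in it (tool_use_loop, ask_model, the reply shape) is illustrative:

```python
# Illustrative sketch: on each turn the model either answers or
# requests a tool; max_turns caps how many tool round-trips happen.
def tool_use_loop(ask_model, tools, max_turns):
    transcript = []
    for _ in range(max_turns):
        reply = ask_model(transcript)
        if reply.get("tool") is None:
            return reply["text"]                  # final answer
        output = tools[reply["tool"]](**reply["args"])
        transcript.append({"tool": reply["tool"], "output": output})
    return None                                   # turn budget exhausted
```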

template(template, **kwargs) → string

Render a Jinja2 template with minijinja. Accepts either an inline string or a path ending in .jinja / .j2:

text = template("Hello {{ name }}!", name = "Alice")
text = template("prompts/analysis.jinja", items = items, role = "analyst")

answer = prompt(template("prompts/research.jinja", topic = topic, sources = sources))

Supports the full Jinja feature set: conditionals, loops, filters, includes, inheritance.

tool(name, **kwargs) → any

Invoke a registered tool. Tools live in tools/*.star — each file exports one function, and its name becomes the tool name:

results = tool("web_search", query = "rust programming", max_results = 5)
data    = tool("fetch_url", url = "https://example.com")

agent(name, **kwargs) → any

Call another agent as a sub-agent. The name resolves to agents/<name>.star:

summary  = agent("summarizer", document = long_text)
checked  = agent("fact_checker", claims = summary)

parallel(fns) → list

Run a list of functions concurrently, preserving result order:

results = parallel([
    lambda q = q: tool("web_search", query = q)
    for q in queries
])
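The q = q default argument matters: closures capture variables, not values, so without it every lambda would see the loop variable's final value. The same behavior is easy to reproduce in plain Python:

```python
queries = ["rust", "go", "zig"]

# Without the default-argument trick, every closure shares one `q`...
late = [lambda: q for q in queries]
# ...with it, each closure binds its own copy at definition time.
bound = [lambda q=q: q for q in queries]

print([f() for f in late])    # ['zig', 'zig', 'zig']
print([f() for f in bound])   # ['rust', 'go', 'zig']
```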

input(message, **kwargs) → string

Pause execution and wait for a human response. Used for approval flows, clarification, or any human-in-the-loop step. When combined with chidori serve, the runtime saves a checkpoint and suspends the session until the human responds.

approved = input("Proceed with this plan?", context = plan, timeout = 300)
if approved == "yes":
    execute(plan)

exec(code, **kwargs) → any

Run arbitrary code in a sandboxed WASM runtime. Typically used to execute code an LLM generated at runtime.

code = prompt("Write Python to analyze this CSV:\n" + data, system = "Return only code.")
result = exec(code, lang = "python", vars = {"data": data}, timeout = 30)

The sandbox has no filesystem, network, or host access — data enters via vars and exits as the return value.
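The data-in, data-out contract can be sketched in plain Python, with exec over an empty builtins table standing in for the WASM sandbox. This is illustrative only; run_sandboxed and the result-variable convention are assumptions, not Chidori's API:

```python
def run_sandboxed(code, vars):
    # No builtins, no imports: the namespace holds only what `vars` provides.
    ns = dict(vars, __builtins__={})
    exec(code, ns)
    return ns.get("result")    # the value exits via a conventional name

out = run_sandboxed("result = data * 2", {"data": 21})
print(out)  # 42
```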

http(method, url, **kwargs) → dict

Make an HTTP request:

resp = http("GET", "https://api.example.com/data",
    headers = {"Authorization": "Bearer " + token},
)
resp = http("POST", "https://api.example.com/submit",
    json = {"key": "value"},
)

Parameters: headers, json, params (query string).

memory(action, **kwargs) → any

Persistent storage that survives across agent runs. Supports key-value and semantic search:

memory("store",  key = "user_pref", value = "dark mode")
pref = memory("get", key = "user_pref")

# Semantic search
results = memory("search", query = "What does the user prefer?", top_k = 3)
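A minimal sketch of the two access modes in plain Python. The word-overlap scoring below is a toy stand-in for real embedding-based semantic search, and MemoryStore is an illustrative name, not Chidori's implementation:

```python
class MemoryStore:
    def __init__(self):
        self.kv = {}

    def store(self, key, value):
        self.kv[key] = value

    def get(self, key):
        return self.kv.get(key)

    def search(self, query, top_k=3):
        # Toy relevance: count words shared between the query and each value.
        q = set(query.lower().split())
        scored = sorted(
            ((len(q & set(str(v).lower().split())), k) for k, v in self.kv.items()),
            reverse=True,
        )
        return [self.kv[k] for score, k in scored[:top_k] if score > 0]
```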

log(message, **kwargs) → None

Structured logging. Appears in traces and stderr under --verbose:

log("Processing batch", count = len(items), batch_id = id)

env(name) → string | None

Read an environment variable. Returns None if unset:

api_key = env("MY_API_KEY")

retry(fn, **kwargs) → any

Retry a function on failure with configurable backoff:

result = retry(
    lambda: prompt("Translate to French:\n" + text),
    max_attempts = 3,
    backoff = "exponential",   # "constant" | "linear" | "exponential"
)
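The three strategies differ only in how the wait grows with the attempt number. A hedged sketch of what they typically compute (the base delay and exact formulas are assumptions, not documented values):

```python
def backoff_delay(strategy, attempt, base=1.0):
    # `attempt` is 1-based: the delay before retry number `attempt`.
    if strategy == "constant":
        return base
    if strategy == "linear":
        return base * attempt
    if strategy == "exponential":
        return base * 2 ** (attempt - 1)
    raise ValueError("unknown strategy: " + strategy)

print([backoff_delay("exponential", n) for n in (1, 2, 3)])  # [1.0, 2.0, 4.0]
```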

try_call(fn) → result

Execute a function and capture errors as values instead of raising. Starlark has no exceptions; try_call is how you do error handling:

result = try_call(lambda: tool("flaky_api", query = text))
if result.error:
    log("API failed", error = result.error)
    result = prompt("Fallback: answer this yourself:\n" + text)
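The errors-as-values pattern is easy to picture in plain Python, with exceptions standing in for Starlark failures. The .error field follows the example above; the .value field and the Result class are assumptions about the shape, so treat this as a sketch:

```python
class Result:
    def __init__(self, value=None, error=None):
        self.value = value
        self.error = error

def try_call(fn):
    # Failures become data on the result instead of propagating.
    try:
        return Result(value=fn())
    except Exception as e:
        return Result(error=str(e))

ok = try_call(lambda: 2 + 2)
bad = try_call(lambda: 1 / 0)
print(ok.value, ok.error)     # 4 None
print(bad.error is not None)  # True
```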

The closed set

That's the complete surface. Anything not on this list cannot be done from inside a Chidori agent — which is exactly the property that makes checkpointing, replay, and deterministic testing possible.
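Why does a closed surface buy replayability? Because every side effect flows through one dispatcher, recording that dispatcher's log and later answering the same calls from the log reproduces the run without re-executing any effect. A plain-Python sketch of the idea, with all names illustrative:

```python
class Recorder:
    """Runs host calls for real and logs (name, args, result)."""
    def __init__(self):
        self.log = []
    def call(self, name, fn, *args):
        result = fn(*args)
        self.log.append((name, args, result))
        return result

class Replayer:
    """Answers the same calls from the log; no effect runs twice."""
    def __init__(self, log):
        self.pending = list(log)
    def call(self, name, fn, *args):
        # `fn` is accepted to mirror Recorder's interface, but never invoked.
        rec_name, rec_args, result = self.pending.pop(0)
        assert (rec_name, rec_args) == (name, args), "divergent replay"
        return result
```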
