Documentation Index
Fetch the complete documentation index at: https://docs.orxhestra.com/llms.txt
Use this file to discover all available pages before exploring further.
Define your entire agent team in a single YAML file. The Composer parses the spec and builds a live agent tree - no Python wiring needed.
pip install 'orxhestra[composer]'
Quick Start
defaults:
model:
provider: anthropic
name: claude-opus-4-7
agents:
assistant:
type: llm
instructions: You are a helpful assistant.
main_agent: assistant
import asyncio
from orxhestra.composer import Composer

agent = Composer.from_yaml("orx.yaml")

async def main():
    async for event in agent.astream("Hello!"):
        if event.is_final_response():
            print(event.text)

asyncio.run(main())
orx orx.yaml
Drops you into the interactive REPL with your YAML agent as the root.
orx orx.yaml --serve -p 9000
Exposes the agent tree as an A2A v1.0 JSON-RPC endpoint on :9000.
Architecture
The Composer is built around three modular registries under builders/:
composer/
├── __init__.py # Public API: Composer, register_*
├── composer.py # Orchestrates YAML → agent tree
├── errors.py # ComposerError, CircularReferenceError
├── schema.py # Pydantic models for YAML validation
└── builders/
├── agents/ # Agent builder registry
│ ├── __init__.py # register(), get(), Helpers, BuildFn protocol
│ ├── _common.py # Shared LLM + composite builder logic
│ ├── llm.py # LlmAgent builder
│ ├── react.py # ReActAgent builder
│ ├── sequential.py # SequentialAgent builder (via build_composite)
│ ├── parallel.py # ParallelAgent builder (via build_composite)
│ ├── loop.py # LoopAgent builder (via build_composite)
│ └── a2a.py # A2AAgent builder
├── models/ # Model provider registry
│ └── __init__.py # register(), create()
└── tools/ # Tool-type registry + built-in tools
└── __init__.py # register_builtin(), register_tool_resolver(), resolve_*()
Each registry is independently extensible — register a custom agent type, model provider, built-in tool, or whole new tool type without touching any other code.
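The recursive build in composer.py can be pictured with a small sketch. This is an assumed illustration, not the library's actual code: it shows only how a build stack lets the composer raise CircularReferenceError when agents reference each other in a cycle.

```python
# Hypothetical sketch of cycle detection during recursive tree building.
class CircularReferenceError(Exception):
    pass

def build_tree(name, agents, stack=()):
    # A name already on the current build stack means the spec loops back
    # on itself, so we fail with the full reference path.
    if name in stack:
        raise CircularReferenceError(" -> ".join(stack + (name,)))
    children = [build_tree(child, agents, stack + (name,))
                for child in agents[name].get("agents", [])]
    return {"name": name, "children": children}

spec = {"writer": {}, "reviewer": {},
        "loop": {"agents": ["writer", "reviewer"]}}
tree = build_tree("loop", spec)
```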
YAML Schema
defaults
Global settings inherited by all agents.
defaults:
model:
provider: anthropic # openai | anthropic | google | dotted.import.path
name: claude-opus-4-7
temperature: 0.7 # optional
max_iterations: 10
models
Named model configurations. Agents reference them by name instead of repeating inline config.
models:
fast:
provider: openai
name: gpt-5.4-mini
temperature: 0.0
smart:
provider: anthropic
name: claude-opus-4-7
max_tokens: 8192
api_key: "sk-ant-..."
local:
provider: "myapp.models.OllamaChat" # dotted import path
name: llama3
base_url: "http://localhost:11434"
Any key beyond provider, name, and temperature is forwarded directly to the LangChain model constructor (e.g. max_tokens, api_key, base_url, timeout, etc.).
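A minimal sketch of that pass-through, using a stand-in class rather than the real provider registry or LangChain constructors:

```python
# Illustrative only: DummyChat and PROVIDERS are stand-ins, not library code.
class DummyChat:
    """Stand-in for a LangChain chat model class."""
    def __init__(self, model, **kwargs):
        self.model = model
        self.kwargs = kwargs  # max_tokens, api_key, base_url, timeout, ...

PROVIDERS = {"openai": DummyChat}  # hypothetical name -> class registry

def create_model(cfg: dict):
    cfg = dict(cfg)
    cls = PROVIDERS[cfg.pop("provider")]
    # 'name' maps to the model identifier; every remaining key is forwarded
    # verbatim to the constructor.
    return cls(model=cfg.pop("name"), **cfg)

m = create_model({"provider": "openai", "name": "gpt-5.4-mini",
                  "temperature": 0.0, "max_tokens": 256})
print(m.kwargs)  # → {'temperature': 0.0, 'max_tokens': 256}
```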
Agents reference models by name:
agents:
researcher:
type: llm
model: smart # reference to models section
summarizer:
type: llm
model: fast # different model for different agent
Or use inline model: for one-off overrides:
agents:
bot:
type: llm
model:
provider: openai
name: gpt-5.4
max_tokens: 4096
tools
Named tool definitions referenced by agents. Every tool entry must set exactly one of function, mcp, builtin, agent, transfer, or custom.
tools:
# Python function by import path
search:
function: "myapp.tools.search_web"
# MCP server
weather:
mcp:
url: "http://localhost:8001/mcp"
# Built-in tools
exit:
builtin: exit_loop
fs:
builtin: filesystem # ls, read_file, write_file, edit_file, mkdir, glob, grep
sh:
builtin: shell # shell_exec, shell_exec_background
# Agent as tool (AgentTool)
researcher:
agent: ResearchAgent
# Transfer routing (hand off to one of several targets)
router:
transfer:
targets: [sales, support]
# Custom tool type — dispatched via register_tool_resolver()
notifier:
custom:
type: webhook # must match a registered resolver
url: https://example.com/notify
name: notify
The custom: type is the escape hatch for third-party tool kinds. Register a resolver with register_tool_resolver("webhook", my_resolver) and the composer will dispatch custom.type == "webhook" entries to your callable. See Extending.
skills
Named skill definitions, referenced by agents. Each agent that lists skills gets its own in-memory skill store with list_skills and load_skill tools.
Skills can be defined inline, loaded from a directory (Agent Skills Protocol), or fetched from a FastMCP server.
skills:
# Inline skill (content defined directly)
summarize:
name: summarize
description: "Summarize text into bullet points."
content: "Extract 3-5 key points. Be concise."
# Directory skill (Agent Skills Protocol - agentskills.io)
code_review:
name: code-review
directory: .agents/skills/code-review
# Remote skill (loaded from FastMCP server at build time)
pdf_processing:
name: pdf-processing
description: "Process and extract data from PDFs."
mcp:
url: "http://localhost:8001/mcp"
# Remote skill (in-memory FastMCP server)
coding_standards:
name: coding-standards
description: "Team coding standards."
mcp:
server: "myapp.skills.server" # dotted import path
Directory skills are loaded via scan_skill_directory() and support the full 3-tier progressive disclosure model with load_skill_resource for on-demand resource files. See Skills for details.
Agents reference skills by name:
agents:
triage:
type: llm
instructions: "Route to the right specialist."
skills:
- summarize
- pdf_processing
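The two skill tools described above can be sketched as plain functions over an in-memory store. This is an assumed illustration of the behavior, not the library's implementation:

```python
# Hypothetical in-memory skill store backing list_skills / load_skill.
skills = {
    "summarize": {
        "description": "Summarize text into bullet points.",
        "content": "Extract 3-5 key points. Be concise.",
    },
}

def list_skills() -> str:
    # Cheap tier: names and descriptions only, kept in context.
    return "\n".join(f"{name}: {s['description']}"
                     for name, s in skills.items())

def load_skill(name: str) -> str:
    # Full instructions, pulled in only when the agent asks for them.
    return skills[name]["content"]
```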
agents
Flat dict of agent definitions. Agents reference each other by name.
agents:
MyAgent:
type: llm # llm | react | sequential | parallel | loop | a2a
description: "..."
instructions: |
System prompt goes here.
model: smart # reference to models section, or inline:
# model:
# provider: anthropic
# name: claude-sonnet-4-6
# max_tokens: 4096
tools:
- search # named reference
- builtin: exit_loop # inline definition
- agent: OtherAgent # inline AgentTool
- transfer: # transfer routing
targets: [A, B]
skills:
- summarize # named reference to skills section
planner:
type: task # plan_react | task
tasks:
- title: "Research"
max_iterations: 10
Agent Types
| Type | Description | Required fields |
|---|---|---|
| llm | LLM-powered agent with tool loop | instructions |
| react | Structured Reason+Act agent | instructions (optional) |
| sequential | Run sub-agents in order | agents |
| parallel | Run sub-agents concurrently | agents |
| loop | Repeat sub-agents until done | agents, max_iterations |
| a2a | Remote agent via A2A protocol | url |
The composer’s schema validator enforces the Required fields column at YAML-parse time:
- a2a agents must set url.
- sequential / parallel / loop agents must set a non-empty agents list.

It also emits warnings.warn for fields that are silently ignored by an agent type (e.g. tools on a composite, model on an a2a, planner on non-LLM types). Running with -W error turns those into hard failures. Custom agent types registered via register_builder are exempt from both checks — they’re free to consume any field.
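Those two checks can be sketched like this (the tables and function below are assumed for illustration; the real schema.py is Pydantic-based):

```python
import warnings

# Assumed rule tables mirroring the documented requirements.
REQUIRED = {"a2a": ("url",), "sequential": ("agents",),
            "parallel": ("agents",), "loop": ("agents", "max_iterations")}
IGNORED = {"sequential": ("tools", "model"), "a2a": ("tools", "model")}

def validate_agent(name, agent_def):
    kind = agent_def.get("type", "llm")
    for field in REQUIRED.get(kind, ()):
        if not agent_def.get(field):
            raise ValueError(f"{name}: {kind} agents require '{field}'")
    for field in IGNORED.get(kind, ()):
        if field in agent_def:
            warnings.warn(f"{name}: '{field}' is ignored on {kind} agents")
```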
main_agent
The entry-point agent name.
runner
Optional. Enables Composer.runner_from_yaml().
runner:
app_name: my-app
session_service: memory # or dotted.import.path
compaction: # optional: auto-compact long sessions
char_threshold: 100000 # compact when content exceeds ~25k tokens
retention_chars: 20000 # keep ~5k tokens of recent events raw
When compaction is set, the Runner automatically summarizes old session events after each invocation based on character count (not event count). Uses the default model for LLM-based summarization. See Session Compaction for details.
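The character-count policy can be sketched as a split helper. The function name and return shape are assumptions for illustration, not the Runner's API:

```python
# Hypothetical helper: partition events into (to_summarize, keep_raw),
# keeping roughly the newest retention_chars of content verbatim.
def split_for_compaction(events, char_threshold=100_000,
                         retention_chars=20_000):
    if sum(len(e) for e in events) <= char_threshold:
        return [], events              # under threshold: compact nothing
    kept, budget = [], retention_chars
    for e in reversed(events):         # walk newest-first until budget runs out
        if len(e) > budget:
            break
        kept.append(e)
        budget -= len(e)
    kept.reverse()
    return events[:len(events) - len(kept)], kept
```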
server
Optional. Enables Composer.server_from_yaml() for A2A.
server:
app_name: my-app
version: "1.0.0"
url: "http://localhost:8000"
skills:
- id: general
name: General
description: "General assistant."
Identity, trust, and attestation
Three optional blocks wire the trust layer into the Runner: sign every emitted event, verify incoming ones against a policy, and audit the whole stream through an attestation provider. Every block is opt-in — leave them out and the Runner behaves exactly as before.
Requires the auth extra:
pip install 'orxhestra[composer,auth]'
identity: — Ed25519 signing key
Attaches a signing identity to every agent under the Runner. When set, every Event the tree emits is signed with Ed25519 over the canonical event payload (including prev_signature to form a hash chain), and verifiers downstream can prove the event hasn’t been tampered with.
identity:
signing_key: ./keys/agent.key # created via `orx identity init`
encryption_password: ${ORX_KEY_PASSWORD} # optional — Fernet-wraps the key at rest
did_method: key # "key" (offline, derived) or "web"
did: did:web:example.com:agents:researcher # required when did_method=web
Generate a key with the CLI:
orx identity init --path ./keys/agent.key
# → wrote identity to ./keys/agent.key
# → did: did:key:z6Mk...
Then either declare it in YAML as above, or pass it on the command line: orx orx.yaml --identity ./keys/agent.key.
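The hash-chain structure is worth seeing concretely. In this sketch SHA-256 digests stand in for the real Ed25519 signatures: each link covers the canonical payload plus prev_signature, so altering any event invalidates every later link.

```python
import hashlib
import json

def sign_chain(events):
    # Illustration only: the library signs with Ed25519; here a SHA-256
    # digest plays the role of the signature to show the chaining.
    prev = ""
    for ev in events:
        payload = json.dumps({"event": ev["text"], "prev_signature": prev},
                             sort_keys=True, separators=(",", ":"))
        prev = hashlib.sha256(payload.encode()).hexdigest()
        ev["signature"] = prev
    return events

chain = sign_chain([{"text": "hello"}, {"text": "world"}])
```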
trust: — signature verification policy
Installs TrustMiddleware on the Runner. Requires an identity: block (verification needs keys).
trust:
mode: strict # "strict" drops failures; "permissive" annotates them
trusted_dids: # allowlist — empty means "anyone valid"
- did:key:z6Mk...trusted-peer
denied_dids: [] # denylist; takes precedence over trusted_dids
require_chain: true # enforce hash-chain continuity per branch
allow_unsigned: false # when false, every event must be signed
In strict mode, events that fail verification are dropped from the stream. In permissive mode they’re passed through with event.metadata["trust"] = {"verified": False, "reason": ...} so downstream consumers can flag them.
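A sketch of the two modes (simplified: the verified flag is assumed precomputed, whereas the real middleware checks Ed25519 signatures):

```python
def filter_events(events, mode="strict"):
    # strict: drop unverified events; permissive: annotate and pass through.
    out = []
    for ev in events:
        if ev.get("verified"):
            out.append(ev)
        elif mode == "permissive":
            ev.setdefault("metadata", {})["trust"] = {
                "verified": False, "reason": "signature mismatch"}
            out.append(ev)
        # strict mode: the event is dropped entirely
    return out
```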
attestation: — claim issuance + audit log
Installs AttestationMiddleware on the Runner. Every event is appended to the provider’s audit log; notable actions (agent transfers, tool invocations) also produce typed claims.
attestation:
provider: local # "noop" (default), "local", or a dotted import path
path: ./audit # required when provider == "local"
Three provider flavors ship in-box:
noop — records nothing. Matches the default when no attestation: block is present.
local — JSON-on-disk at path, SHA-256 hash-chained, every entry signed with the identity: key. Zero external deps.
<dotted.import.path> — anything else is treated as an import path to a user-supplied AttestationProvider. Plug in a vendor SDK, a blockchain anchor, or your own implementation.
attestation:
provider: myorg.attest.VendorProvider # user-supplied implementation
See the AttestationProvider protocol for the four-method interface your adapter needs to satisfy: issue_claim, verify_claim, append_audit, revoke.
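As a rough guide, the four-method interface could look like the Protocol below. The source names only the methods, so the argument and return types here are assumptions:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class AttestationProvider(Protocol):
    # Assumed signatures - only the four method names come from the docs.
    def issue_claim(self, claim: dict) -> str: ...
    def verify_claim(self, claim_id: str) -> bool: ...
    def append_audit(self, event: dict) -> None: ...
    def revoke(self, claim_id: str) -> None: ...

class NoopProvider:
    """Records nothing - mirrors the documented 'noop' flavor."""
    def issue_claim(self, claim: dict) -> str: return ""
    def verify_claim(self, claim_id: str) -> bool: return True
    def append_audit(self, event: dict) -> None: pass
    def revoke(self, claim_id: str) -> None: pass
```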
Examples
Transfer Routing
agents:
sales:
type: llm
description: "Order inquiries."
tools: [lookup_order]
support:
type: llm
description: "Technical help."
tools: [search_docs]
triage:
type: llm
instructions: "Route to the right specialist."
tools:
- transfer:
targets: [sales, support]
main_agent: triage
Sequential Pipeline
agents:
researcher:
type: llm
instructions: "Research the topic."
tools: [search]
writer:
type: llm
instructions: "Write an article from research."
pipeline:
type: sequential
agents: [researcher, writer]
main_agent: pipeline
Loop with Exit
agents:
writer:
type: llm
instructions: "Write a draft."
reviewer:
type: llm
instructions: "Review. Call exit_loop if approved."
tools:
- builtin: exit_loop
loop:
type: loop
agents: [writer, reviewer]
max_iterations: 5
main_agent: loop
Extending with Registries
Custom Agent Types
from orxhestra.composer import register_builder
async def build_custom(name, agent_def, spec, *, helpers):
    model_cfg = helpers.resolve_model(agent_def)   # merged agent/default model config
    tools = await helpers.resolve_tools(agent_def)
    return MyCustomAgent(name=name, model=model_cfg, tools=tools)
register_builder("custom", build_custom)
Then in YAML:
agents:
my_agent:
type: custom
The build function receives (name, agent_def, spec, *, helpers) where helpers provides:
- helpers.resolve_model(agent_def) - merge agent/default model config
- helpers.resolve_tools(agent_def) - resolve all tool references
- helpers.build_agent(name) - recursively build a sub-agent by name
Custom Model Providers
from orxhestra.composer import register_provider
register_provider("my_llm", MyCustomChatModel)
Built-in providers: openai, anthropic, google. Any unrecognized provider string is treated as a dotted import path to a custom BaseChatModel class.
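That lookup order can be sketched as follows (hypothetical helper name, stdlib only):

```python
import importlib

def resolve_provider(provider, registry):
    # Known names hit the registry; anything else is imported as
    # "pkg.module.ClassName" via its dotted path.
    if provider in registry:
        return registry[provider]
    module_path, _, cls_name = provider.rpartition(".")
    return getattr(importlib.import_module(module_path), cls_name)
```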
Custom Built-in Tools
from orxhestra.composer import register_builtin_tool
register_builtin_tool("my_tool", lambda: my_tool_instance)
Then reference it in YAML:
tools:
my_tool:
builtin: my_tool
Custom Tool Types
The five built-in ToolDef shapes (function, mcp, builtin, agent, transfer) cover most needs — but if you need a whole new kind of tool (a webhook, an HTTP RPC, a proprietary bus), reach for the custom: field + a resolver.
import httpx
from langchain_core.tools import Tool
from orxhestra.composer import register_tool_resolver
def build_webhook(config: dict) -> Tool:
"""Resolver for `custom.type == "webhook"`.
The full ``custom`` dict (including the ``type`` key) is passed in.
Can be sync or async — the composer awaits whatever the resolver returns.
"""
url = config["url"]
return Tool.from_function(
name=config.get("name", "webhook"),
description="POST JSON to a webhook and return the response text.",
func=lambda body: httpx.post(url, json=body).text,
)
register_tool_resolver("webhook", build_webhook)
tools:
notifier:
custom:
type: webhook # dispatches to build_webhook
url: https://example.com/notify
name: notify
description: "Send a payload to the notify webhook."
Async resolvers work transparently — the composer awaits whichever shape you return.
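One plausible way to accept both shapes is to call the resolver and await only when the result is awaitable. The helper name is hypothetical:

```python
import asyncio
import inspect

async def call_resolver(resolver, config):
    # Works for both sync and async resolvers: await only if needed.
    result = resolver(config)
    if inspect.isawaitable(result):
        result = await result
    return result

def sync_resolver(cfg):
    return f"tool:{cfg['type']}"

async def async_resolver(cfg):
    return f"tool:{cfg['type']}"
```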
Python API
| Method | Returns | Description |
|---|---|---|
| Composer.from_yaml(path) | BaseAgent | Parse YAML and build the root agent (sync) |
| Composer.from_yaml_async(path) | BaseAgent | Same, async |
| Composer.runner_from_yaml(path) | Runner | Build a Runner with sessions + trust middleware |
| Composer.runner_from_yaml_async(path) | Runner | Same, async |
| Composer.server_from_yaml(path) | FastAPI | Build an A2A server app |
| Composer.server_from_yaml_async(path) | FastAPI | Same, async |
| Composer(spec).build() | BaseAgent | Public build hook for callers that need to mutate spec between construction and build |
| Composer(spec).build_runner(root) | Runner | Wrap a built tree in a Runner |
| Composer(spec).build_server(root) | FastAPI | Wrap a built tree in an A2A app |
| Registry function | Description |
|---|---|
| register_builder(type, fn) | Add a custom agent type builder |
| register_provider(name, cls) | Add a custom model provider |
| register_builtin_tool(name, factory) | Add a custom built-in tool |
| register_tool_resolver(type, resolver) | Add a new tool type accessible via tools: { custom: { type: ... } } |