Axio¶
Axio (Asynchronous eXtensible Intelligent Orchestration) - a minimal but complete foundation for building LLM-powered agents in Python.
Every integration point is a protocol - bring your own transport, context store, tools, and permission guards.
- Quickstart: install the TUI or coding assistant and start chatting in under a minute.
- First agent: write your first agent in code with the core library.
- Core concepts: understand the agent loop, protocols, tools, events, and the plugin system.
- Guides: step-by-step guides for writing custom tools, transports, and guards.
- Packages: overview of every package in the monorepo and their entry points.
How to draw an owl¶
import aiohttp

from axio import Tool


async def fetch(url: str) -> str:
    """Fetch the text content of a URL."""
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as r:
            return (await r.text())[:2000]


fetch_tool = Tool(name="fetch", handler=fetch)
assert fetch_tool.name == "fetch"
import asyncio

from axio import Agent, MemoryContextStore
from axio_transport_openai import OpenAITransport


async def main() -> None:
    agent = Agent(
        system="You are a helpful assistant.",
        tools=[fetch_tool],
        transport=OpenAITransport(),
    )
    reply = await agent.run(
        "What's the weather tomorrow? Use geoip to detect my location and wttr.in for the weather.",
        MemoryContextStore(),
    )
    assert reply


asyncio.run(main())
Why Axio?¶
The name is a backronym - each letter describes what the framework actually does:
A - Asynchronous.
The agent loop is built on asyncio end-to-end. Tool calls from a single
LLM response are dispatched concurrently via asyncio.gather so results
arrive in parallel, not sequentially. Every transport, tool, and context
store uses async def throughout - no thread pools or blocking I/O hidden
beneath the surface.
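The concurrent-dispatch claim above can be sketched with plain asyncio. This is a toy stand-in, not Axio's actual dispatcher: two slow "tool" handlers are awaited together, so their delays overlap instead of adding up.

```python
import asyncio


async def run_tool(name: str, delay: float) -> str:
    await asyncio.sleep(delay)  # stand-in for real tool I/O (HTTP, disk, etc.)
    return f"{name}: done"


async def dispatch_all() -> list[str]:
    # asyncio.gather runs the handlers concurrently but
    # preserves the order of the inputs in the result list.
    return await asyncio.gather(
        run_tool("fetch", 0.02),
        run_tool("search", 0.01),
    )


results = asyncio.run(dispatch_all())
print(results)  # ['fetch: done', 'search: done']
```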
X - eXtensible.
Every integration point is a runtime-checkable Protocol or abstract base class.
You can swap the transport (OpenAI, Anthropic, any custom endpoint), the context
store (in-memory, SQLite, your own database), the permission guards, and the tools
without touching a single line of framework code. The plugin system - based on
Python entry points - lets separate packages register transports, tools, and
guards that are discovered automatically at runtime. Extensible is capitalised
because it is the core design decision from which everything else follows.
I - Intelligent. The LLM drives the decision loop. Axio stays out of the way: it presents tools to the model, faithfully delivers tool results back, and keeps iterating until the model decides it is done. No hard-coded routing, no fixed decision trees - the intelligence lives in the model, not the framework.
O - Orchestration.
Axio coordinates agents, tools, context, and permissions into a coherent execution
flow. Sub-agents can be spawned and composed via the built-in subagent tool;
context stores are shared across agents; permission guards form a composable
chain that gates every tool call. Complex multi-agent workflows emerge from
simple, well-defined primitives.
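The composable guard chain can be illustrated with a small runtime-checkable Protocol. The Guard interface below is hypothetical, chosen only to show the chaining idea, not Axio's real API:

```python
import asyncio
from typing import Protocol, runtime_checkable


@runtime_checkable
class Guard(Protocol):
    async def allow(self, tool: str, args: dict) -> bool: ...


class DenyShell:
    async def allow(self, tool: str, args: dict) -> bool:
        return tool != "shell"


class AllowAll:
    async def allow(self, tool: str, args: dict) -> bool:
        return True


async def gate(guards: list[Guard], tool: str, args: dict) -> bool:
    # Every guard in the chain must approve the call.
    for guard in guards:
        if not await guard.allow(tool, args):
            return False
    return True


chain = [DenyShell(), AllowAll()]
ok = asyncio.run(gate(chain, "fetch", {"url": "https://example.com"}))
blocked = asyncio.run(gate(chain, "shell", {"cmd": "rm -rf /"}))
print(ok, blocked)  # True False
```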
- Extensible by design: every integration point is a runtime-checkable Protocol or ABC. Swap transports, context stores, tools, and guards without touching framework code.
- Streaming by default: all LLM I/O flows through typed StreamEvent values. No hidden buffering - you see every token, tool call, and result as it happens.
- Tools are plain async functions: define parameters as function arguments, get JSON schema for free, implement the body for execution. Guards gate every tool call through a composable permission chain.
- Multi-agent orchestration built-in: spawn sub-agents via the subagent tool, share context between agents, compose complex workflows - all without external dependencies.
Architecture¶
flowchart TB
    subgraph User["User Code"]
        A[Agent]
    end
    subgraph Core["axio - Core Framework"]
        B[Tool Handler<br/>async function]
        C[Permission Guard<br/>Protocol]
        D[StreamEvent<br/>Typed events]
    end
    subgraph Transport["Transport (pluggable)"]
        E[OpenAI Transport]
        F[Anthropic Transport]
        G[Custom Transport]
    end
    subgraph Context["Context Store (pluggable)"]
        H[MemoryContextStore]
        I[SQLiteContextStore]
    end
    subgraph LLM["LLM Provider"]
        J[OpenAI]
        K[Claude]
        L[Custom]
    end
    A -->|1. configures| B
    A -->|2. uses| C
    A -->|3. builds| D
    A -->|4. sends to| E
    A -->|5. stores in| H
    E -->|SSE| D
    D -->|tool call| B
    B -->|result| D
    D -->|text| J
    J -->|response| E
    E -->|reply| A
The agent loop:

1. You configure the agent with tools (async functions) and guards (permission chain)
2. User sends a message
3. Agent sends it to the transport → LLM
4. LLM responds with text or tool calls
5. Tools execute → results return → LLM generates the final response
6. Events stream back to you (tokens, tool calls, results)
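The steps above can be sketched as a stub loop. The message and reply shapes here are illustrative stand-ins, not Axio's wire format:

```python
import asyncio


async def agent_loop(send_to_llm, tools: dict, message: str) -> str:
    # Iterate until the model stops asking for tools (step 4-5 above).
    history = [{"role": "user", "content": message}]
    while True:
        reply = await send_to_llm(history)
        if "tool" not in reply:
            return reply["text"]                    # model is done
        name, args = reply["tool"], reply["args"]
        result = await tools[name](**args)          # execute the tool
        history.append({"role": "tool", "name": name, "content": result})


async def stub_llm(history):
    # First turn: request a tool call; second turn: final answer.
    if not any(m["role"] == "tool" for m in history):
        return {"tool": "echo", "args": {"text": "hi"}}
    return {"text": "final answer"}


async def echo(text: str) -> str:
    return text


answer = asyncio.run(agent_loop(stub_llm, {"echo": echo}, "hello"))
print(answer)  # final answer
```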
How does Axio compare?¶
Here’s how Axio compares to other popular Python agent frameworks:
| | Axio | pydantic-ai | LangChain / LangGraph | AutoGen |
|---|---|---|---|---|
| Architecture | Minimal core + protocols | Pydantic-native, validation-centric | Heavy abstraction layer | Multi-agent orchestration |
| Streaming | All events typed, full tool visibility | Text streaming works; tool calls and final answer can't stream simultaneously | Added later, inconsistent | Limited |
| Tool definition | Plain async function | Decorator + function signature → auto JSON schema | Functions + decorators | Class-based agents |
| Transport | Pluggable protocol; OpenAI (+ any OpenAI-compatible endpoint), Anthropic, Google Gemini, Codex, or custom | Built-in 20+ providers, trivial to swap | Built-in, harder to swap | Azure OpenAI focused |
| Realtime / voice | First-class via axio-audio + GeminiLiveTransport | None | None | None |
| Multimodal | Text, vision, audio, video input; image + video generation | Text + vision | Text + vision | Text + vision |
| Multi-agent | Built-in (subagent tool, shared context stores) | Agent-as-tool pattern; not the primary focus | Via LangGraph | Native |
| Learning curve | Low - ~100 lines for an agent | Low for Pydantic/FastAPI users; moderate otherwise | Medium - many abstractions | High - complex configs |
| Scope | Agent loop + extensions + voice | Agent + structured outputs; no RAG, no built-in memory | Full stack (RAG, chains, etc.) | Multi-agent scenarios |
| API stability | Beta (v0.x, breaking changes possible) | Beta (v0.x, breaking changes possible) | Stable | Stable |
When to choose each¶
Choose Axio if:

- You want full control over every step of the agent cycle - no hidden magic, no framework opinions baked in
- You care about a lean dependency tree: every component is a separate PyPI package - install only what you need; fewer dependencies mean fewer supply-chain attack risks
- You need multimodal input - text, images, audio, and video are first-class content blocks throughout the whole stack
- You're building a realtime voice agent - axio-audio + GeminiLiveTransport give you sample-aligned duplex audio with production-grade echo cancellation, tool dispatch that doesn't block the audio stream, and graceful interruption handling
- You need image or video generation as part of the agent loop - Gemini Nano Banana and Veo are available as ordinary tools
- You need multi-agent orchestration - sub-agents, shared context stores, and composable permission chains are built in
- You need custom sandboxed execution - isolated Docker containers out of the box via axio-tools-docker
- You want a ready-made terminal coding assistant - axio-repl ships with file/shell tools, streaming output, vision, and transport auto-detection
- You prefer explicit protocols over decorator-driven conventions and need to own the event loop, streaming pipeline, and permission model
Choose pydantic-ai if:

- You already use Pydantic/FastAPI and want the same patterns for agents
- Your agent must return strongly typed, validated structured outputs
- You need trivial provider swapping across 20+ LLM backends

Choose LangChain if:

- You need RAG, text splitters, and other built-in data utilities
- You want batteries-included with minimal wiring code
- You're prototyping quickly and can accept abstraction overhead

Choose AutoGen if:

- You're building complex multi-agent conversation flows with a heavy Azure OpenAI focus
- You need built-in support for human-in-the-loop at the framework level
Axio’s philosophy is thin abstraction over the prompt-completion loop, not a full framework with opinions about how you should structure your application. If that aligns with your needs - welcome.
Examples¶
The examples/
directory contains runnable examples:
minimal.py- core agent loop withStubTransport; no API key neededstream_tool_args.py- incremental streaming of tool call arguments with partial JSON decodingcodex_chat.py- CLI chat loop backed by a ChatGPT subscription via OAuth PKCE (no API key required)agent_swarm/- team of role-specialised agents; see the Agent Swarm guidegas_town/- multi-agent convoy following the Gas Town methodology; see the Gas Town guiderealtime_smoke/- minimal Gemini Live smoke test: send text, receive and play audiorealtime_chat/- full-featured voice chat with echo cancellation, volume metering, and interruption handling