
Stop building automations.

Start stating outcomes.

Four original protocols. ~5MB. Runs on your hardware.

02
THE NERVOUS SYSTEM

Conductor

You describe a goal. The system compiles its own execution plan — merging calls, parallelizing branches, handling cycles. What took 24 seconds and 6 LLM calls now takes 4 seconds and 2.

3–6× FASTER
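The speedup comes from dependency leveling: independent steps share one batched call instead of running one by one. A toy sketch of the idea in Rust (illustrative names only, not the Conductor implementation):

```rust
use std::collections::HashMap;

// Toy model of plan compaction (not the Conductor internals): each step's
// depth is 1 + the max depth of its dependencies, and all steps at the same
// depth batch into a single parallel wave, i.e. one LLM call.
fn depth(step: &str, deps: &HashMap<&str, Vec<&str>>, memo: &mut HashMap<String, usize>) -> usize {
    if let Some(&d) = memo.get(step) {
        return d;
    }
    let d = deps
        .get(step)
        .map(|ds| ds.iter().map(|s| depth(s, deps, memo)).max().unwrap_or(0) + 1)
        .unwrap_or(1);
    memo.insert(step.to_string(), d);
    d
}

fn main() {
    // Six steps: a..d are independent; e needs a+b, f needs c+d.
    let mut deps: HashMap<&str, Vec<&str>> = HashMap::new();
    for s in ["a", "b", "c", "d"] {
        deps.insert(s, vec![]);
    }
    deps.insert("e", vec!["a", "b"]);
    deps.insert("f", vec!["c", "d"]);

    let mut memo = HashMap::new();
    let mut waves: HashMap<usize, Vec<&str>> = HashMap::new();
    for s in ["a", "b", "c", "d", "e", "f"] {
        waves.entry(depth(s, &deps, &mut memo)).or_default().push(s);
    }
    // Six sequential calls collapse into two waves.
    assert_eq!(waves.len(), 2);
}
```

Same dependency graph, fewer round trips: the merge is free once the plan is known up front.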
THE MEMORY

Engram

Your agent learns, forgets, and evolves — like biological cognition. Three memory tiers with Ebbinghaus decay dissolve noise. Sensitive data is encrypted before it touches disk. Context windows never overflow.

3 TIERS
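The curve behind the decay is the classic Ebbinghaus exponential: retention falls off with elapsed time, and recall reinforcement raises a memory's strength. A minimal sketch (the thresholds and strength values here are invented for illustration, not Engram's tuning):

```rust
// Ebbinghaus-style retention: r = exp(-elapsed / strength).
// Weak, stale entries decay toward zero; reinforced entries persist.
fn retention(elapsed_hours: f64, strength: f64) -> f64 {
    (-elapsed_hours / strength).exp()
}

fn main() {
    let noise = retention(72.0, 10.0);   // stale, rarely recalled
    let signal = retention(72.0, 200.0); // frequently reinforced
    assert!(noise < 0.01);
    assert!(signal > 0.6);

    // A pruning floor (made-up value) dissolves the noise, keeps the signal.
    let keep = |r: f64| r >= 0.05;
    assert!(!keep(noise) && keep(signal));
}
```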
THE SENSES

Librarian

The agent states what it needs — the system finds the right tool from 25,000+ integrations. No pre-loading, no context bloat. Scale-independent from 50 tools to 50,000.

25K+ TOOLS
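Intent-driven lookup reduces to ranking tool descriptions against the stated need and loading only the winner. A stand-in sketch using token overlap where the real system would use semantic search (the tool names here are invented):

```rust
// Crude stand-in for semantic matching: count intent words that appear
// in a tool's description. Real discovery would embed and compare.
fn score(intent: &str, desc: &str) -> usize {
    intent
        .split_whitespace()
        .filter(|w| desc.to_lowercase().contains(&w.to_lowercase()))
        .count()
}

fn main() {
    let tools = [
        ("send_email", "send an email message to a recipient"),
        ("create_invoice", "create and send an invoice to a customer"),
        ("resize_image", "resize or crop an image file"),
    ];
    // The agent states what it needs; the catalog stays unloaded.
    let intent = "email the customer an invoice";
    let best = tools
        .iter()
        .max_by_key(|(_, desc)| score(intent, desc))
        .unwrap()
        .0;
    assert_eq!(best, "create_invoice");
}
```

Only the best match enters the context window, which is why the catalog can grow from 50 tools to 50,000 without bloating a single prompt.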
THE MUSCLE

Foreman

The expensive model reasons. The free local model executes. Every tool call costs $0. Your cloud LLM never touches schemas — it only sees results.

$0 PER CALL
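At its core the split is a routing decision: reasoning tasks go to the paid model, execution tasks to the free local one. A toy sketch (the enum and task names are illustrative, not the Foreman API):

```rust
// Illustrative routing (not the Foreman internals): planning is billed,
// tool execution runs on a $0 local model.
#[derive(Debug, PartialEq)]
enum Model {
    CloudReasoner, // billed per token
    LocalExecutor, // free, on-device
}

enum Task<'a> {
    Reason(&'a str),
    ToolCall(&'a str),
}

fn route(task: &Task) -> Model {
    match task {
        Task::Reason(_) => Model::CloudReasoner,
        Task::ToolCall(_) => Model::LocalExecutor,
    }
}

fn main() {
    let plan = [
        Task::Reason("decide which files to summarize"),
        Task::ToolCall("fs.read_file"),
        Task::ToolCall("fs.read_file"),
    ];
    // Only one of three tasks ever touches the paid model.
    let paid = plan.iter().filter(|t| route(t) == Model::CloudReasoner).count();
    assert_eq!(paid, 1);
}
```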
03

Everything you need.
Nothing you don't.

YOUR DATA

Never leaves your machine. AES-256-GCM. OS keychain. No cloud. No telemetry. Ever.

YOUR HARDWARE

~5MB. Pure Rust. Not Electron. Runs where you already work.

YOUR REACH

One agent, 11 platforms. Telegram to Slack to IRC. Simultaneously.

YOUR CHOICE

Any model. Any provider. Switch in seconds. Go fully offline with Ollama.

YOUR CODE

MIT licensed. Fully open. Fork it, audit it, own it.

Stop assembling. Start deciding.

Four protocols. ~5MB. Your hardware.
The post-automation era starts locally.

This is the post-automation era.

You used to build the machine.
Now you state the outcome.

Every other platform makes you the engineer — wiring nodes, mapping triggers, maintaining brittle logic. We published four original protocols that let the system assemble its own reasoning on the fly. Open source. Peer-reviewable. Running on your hardware.

I · THE NERVOUS SYSTEM

Conductor

You describe the outcome. The system compiles the plan.

BEFORE: 24s / 6 calls
AFTER: 4s / 2 calls
II · THE MEMORY

Engram

Your agent evolves. It doesn't just retrieve — it learns.

BEFORE: Flat store
AFTER: Living graph
III · THE SENSES

Librarian

25,000 capabilities. Zero pre-loading. The agent finds what it needs.

BEFORE: All tools loaded
AFTER: Intent-driven discovery
IV · THE MUSCLE

Foreman

Intelligence is expensive. Execution should be free.

BEFORE: $2.50/M tokens
AFTER: $0 execution

Things no other platform
can do. Period.

01

Your agents argue until they get it right.

Conductor's Converge primitive enables cyclic reasoning — agents that debate, challenge assumptions, and self-correct. Every other platform forces one-way pipelines. Feedback loops aren't a feature. They're errors.

CAN'T DO THIS: n8n, Zapier, Make, Airflow, Prefect
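What makes the cycle safe is that it is bounded: a converge loop revises until a critic accepts, or an iteration cap trips so the cycle can never run forever. A toy sketch of the pattern, not the Converge primitive itself:

```rust
// Generic converge loop (illustrative): propose, critique, revise,
// with an iteration cap guarding against non-terminating debate.
fn converge<F, C>(mut draft: i32, revise: F, accept: C, max_iter: usize) -> (i32, usize)
where
    F: Fn(i32) -> i32,
    C: Fn(i32) -> bool,
{
    for i in 0..max_iter {
        if accept(draft) {
            return (draft, i); // critic satisfied: exit the cycle
        }
        draft = revise(draft); // feedback edge: loop back and try again
    }
    (draft, max_iter) // cap reached: return best effort
}

fn main() {
    // Toy stand-in: "quality" climbs by 2 per revision, accepted at >= 9.
    let (result, iters) = converge(1, |q| q + 2, |q| q >= 9, 10);
    assert_eq!(result, 9);
    assert_eq!(iters, 4);
}
```

A DAG-only engine has no back edge to express `revise`; that is the whole difference.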
02

Workflows that think in four dimensions.

Tesseract orchestrates parallel execution across sequence, parallelism, depth, and phase — converging at event horizons where context merges. You describe the outcome. The system finds the geometry of reasoning.

CAN'T DO THIS: Every automation platform in existence
03

25,000 tools without loading a single one.

The Librarian searches by intent, not by index. Your agent discovers capabilities the way you discover — by knowing what it needs, not memorizing what's available. Scale-independent. 50 tools or 50,000, same speed.

CAN'T DO THIS: ChatGPT, Claude, Gemini, any fixed-toolset agent
04

Cloud intelligence. Zero execution cost.

The Foreman separates thinking from doing. Your expensive model reasons. A free local model executes. You stop paying for muscle when you only need brain. MCP self-describing schemas mean zero training.

CAN'T DO THIS: Zapier (per-task pricing), Make (per-op pricing)
05

Memory that forgets on purpose — and gets smarter.

Engram implements biological decay. Noise dissolves. Signal strengthens. If forgetting degrades quality, the cycle rolls back via transactional savepoint. 45% less storage, better retrieval. Your agent develops judgment, not just recall.

CAN'T DO THIS: Every RAG system, every vector store
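The rollback behavior can be sketched as snapshot, prune, probe, restore: take a savepoint, run the forgetting pass, and commit only if a quality probe does not regress. An illustrative stand-in (the probe here is deliberately simplistic, not Engram's):

```rust
// Forgetting pass with a transactional savepoint (illustrative only):
// entries are (key, weight) pairs; anything under the floor is pruned,
// but the pass rolls back if it would wipe the store entirely.
fn decay_pass(store: Vec<(&'static str, f64)>, floor: f64) -> Vec<(&'static str, f64)> {
    let savepoint = store.clone(); // snapshot before forgetting
    let pruned: Vec<_> = store.into_iter().filter(|&(_, w)| w >= floor).collect();
    // Stand-in quality probe: the pass must leave something to retrieve.
    if pruned.is_empty() {
        savepoint // regression detected: restore the snapshot
    } else {
        pruned // probe passed: commit the smaller store
    }
}

fn main() {
    // Noise dissolves, signal survives.
    let after = decay_pass(vec![("signal", 0.9), ("noise", 0.02)], 0.05);
    assert_eq!(after, vec![("signal", 0.9)]);

    // A too-aggressive floor would erase everything: the savepoint wins.
    let rolled = decay_pass(vec![("signal", 0.9)], 0.99);
    assert_eq!(rolled, vec![("signal", 0.9)]);
}
```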
06

One mind. Eleven platforms. Simultaneously.

Same memory, same tools, same personality across Telegram, Discord, Slack, Matrix, IRC, Mattermost, Nextcloud, Nostr, Twitch, WebChat, and WhatsApp. Per-user session isolation. Your agent is omnipresent — not copy-pasted.

CAN'T DO THIS: Any single-channel chatbot or assistant

No marketing.
Just numbers.

WHEN YOU NEED | OPENPAWZ | EVERYONE ELSE
You describe a 10-step outcome | 4–8s, 2–3 LLM calls | 24s+, 6 LLM calls (you wire each node)
Agents need to self-correct | Native: Converge primitive | Impossible (DAG-only, no cycles)
Reasoning across dimensions | Native: Tesseract primitive | Doesn't exist
You run 50 tool calls | $0 (local Ollama) | Cloud LLM rates per call
Agent discovers its own tools | Intent-driven (Librarian) | All tools pre-loaded by you
Available capabilities | 25,000+ (auto-discovered) | 7,000 (Zapier) / 400 (n8n)
Agent memory evolves over time | 3-tier bio-inspired graph | Flat vector store
Agent forgets noise, keeps signal | Ebbinghaus decay + FadeMem | None: infinite accumulation
Retrieval quality | CRAG 3-tier + query decomposition | Top-K, hope for the best
Deploy to chat platforms | 11 simultaneously | 1 (maybe 2)
What you install | ~5MB (Tauri + Rust) | 200MB+ (Electron)
Go fully offline | Full capability (Ollama, $0) | Cloud-dependent
Your private data | AES-256-GCM, never leaves device | Plaintext or cloud-managed
Who owns the code | You: MIT, fully open | Proprietary / source-available

Built for people who

actually use AI.

Not just a chat window. Voice, research, visual workflows, browser automation, multi-agent orchestration — all in one ~5MB binary.

MIT LICENSE · OPEN SOURCE · PEER-REVIEWABLE

Read the research.
Run the code.
Prove us wrong.

We didn't build a product. We published four protocols that change how AI agents reason, remember, discover, and execute. Conductor, Engram, Librarian, Foreman — all implemented in Rust, all MIT licensed, all running on your hardware. ~5MB. 86K lines. 602 tests. Zero CVEs.

86K LINES OF CODE
602 TESTS
0 CVEs
4 PROTOCOLS