How it works
The short version: an event-driven AI agent running against a shared plan, with workspaces, tasks, and decisions as its shared state. The long version below is a high-level tour of the concepts, not a blueprint for rebuilding it.
The Request Flow
What actually happens when you send a message. This is the real pipeline — not a simplified marketing diagram.
The Agent Loop
Not a single API call. The orchestrator runs in a loop — reasoning, calling tools, and checking whether it should keep going or respond, with a built-in safety cap to prevent runaways.
How a turn works
The orchestrator loads the full business context — your plan, active tasks, recent decisions, company settings, custom instructions, and chat history. It hands all of that to the LLM along with the full tool registry. The model reasons about what to do, picks tools to call, gets results back, and decides: keep going or respond. One message from you can trigger a chain of tool calls — research, task creation, plan updates — all in a single turn.
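The turn described above can be sketched as a bounded loop. This is a minimal illustration, not the actual implementation: the names (`run_turn`, `call_llm`, `run_tool`, `MAX_STEPS`) and the decision format are all assumptions.

```python
# Minimal sketch of the agent loop: reason, call tools, repeat, with a
# safety cap. All names and data shapes here are illustrative assumptions.

MAX_STEPS = 10  # hard cap to prevent runaway turns

def run_turn(context, tools, call_llm, run_tool):
    """One turn: the model either responds or asks for tool calls."""
    messages = list(context)  # plan, tasks, decisions, settings, chat history
    for _ in range(MAX_STEPS):
        decision = call_llm(messages, tools)
        if decision["type"] == "respond":
            return decision["text"]
        # Otherwise the model requested one or more tool calls this step.
        for call in decision["tool_calls"]:
            result = run_tool(call["name"], call["args"])
            messages.append({"role": "tool", "name": call["name"], "content": result})
    return "Stopped: hit the safety cap on tool-call steps."
```

One user message can drive several passes through this loop (research, task creation, plan updates) before the model finally chooses to respond.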
Tool categories
Each category is a module of typed tool definitions. The registry composes them at startup and serves schemas to the LLM. MCP tools come from connected external servers — the agent discovers what's available and uses them alongside built-in tools.
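A registry composed from typed modules might look like the sketch below. The module contents and schema shapes are assumptions for illustration; the real tool definitions are richer.

```python
# Sketch: composing tool modules into one registry at startup, then
# serving flat schemas to the LLM. Tool names and shapes are assumptions.

def make_registry(*modules):
    registry = {}
    for module in modules:
        for tool in module:
            registry[tool["name"]] = tool  # later modules can shadow earlier ones
    return registry

def schemas_for_llm(registry):
    """The flat list of schema definitions handed to the model each turn."""
    return [
        {"name": t["name"], "description": t["description"], "parameters": t["parameters"]}
        for t in registry.values()
    ]

# Built-in tools are defined in code; MCP tools are discovered at runtime
# from connected servers and merged into the same registry.
builtin_tools = [{"name": "create_task", "description": "Create a task",
                  "parameters": {"type": "object",
                                 "properties": {"title": {"type": "string"}}}}]
mcp_tools = [{"name": "search_docs", "description": "Search connected docs",
              "parameters": {"type": "object",
                             "properties": {"query": {"type": "string"}}}}]

registry = make_registry(builtin_tools, mcp_tools)
```

Because discovered MCP tools land in the same registry as built-ins, the model sees one uniform tool list and needs no special handling for external capabilities.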
Interrupts & safety
If a new message arrives while the agent is mid-turn, the orchestrator detects it and stops gracefully: the current work is saved, and a continuation event is queued so nothing gets lost. There are also budget checks (credits mode pauses the agent if you run low), an emergency stop, and a hard cap on consecutive tool failures.
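Checks like these are cheap enough to run between every tool call. A sketch of what such a per-step guard could look like, with all field names assumed for illustration:

```python
# Sketch of per-step safety checks between tool calls.
# Every field name here is an illustrative assumption.

MAX_CONSECUTIVE_FAILURES = 3

def should_stop(state):
    """Return a reason to halt the current turn, or None to continue."""
    if state.get("new_message_arrived"):
        return "interrupted"        # save work, queue a continuation event
    if state.get("emergency_stop"):
        return "emergency_stop"
    if state.get("credits_mode") and state.get("credits", 0) <= 0:
        return "budget_exhausted"   # pause until credits are topped up
    if state.get("consecutive_failures", 0) >= MAX_CONSECUTIVE_FAILURES:
        return "too_many_tool_failures"
    return None
```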
Event-Driven Everything
The system is asynchronous by design. User actions, triggers, and internal signals all flow through the same event queue.
When you send a message, it doesn't hit the AI directly. It gets written to a persistent event queue with a priority. A background worker picks events up, serializes work per tenant so parallel requests don't collide, and runs the orchestrator against the highest-priority event. The same pipeline handles chat messages, trigger firings, task updates, and system signals: same queue, same worker, same orchestrator.
Event types
- User messages
- Trigger firings
- Task updates
- Decision resolutions
- System signals
The worker loop
- Continuous event processing
- Periodic trigger evaluation
- Regular brief generation
- Stale event recovery
- Per-tenant serialization
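The queue-and-worker pattern above can be sketched as follows. This is a toy in-memory version under assumed names (`EventQueue`, `worker_cycle`); the real queue is persistent and shared across workers.

```python
# Sketch: a priority event queue plus one worker pass with per-tenant
# serialization. In-memory only; the described system persists its queue.
import heapq

class EventQueue:
    """Higher priority dequeues first; FIFO within the same priority."""
    def __init__(self):
        self._heap, self._seq = [], 0

    def enqueue(self, tenant, kind, payload, priority=0):
        heapq.heappush(self._heap, (-priority, self._seq, tenant, kind, payload))
        self._seq += 1

    def dequeue(self):
        if not self._heap:
            return None
        _, _, tenant, kind, payload = heapq.heappop(self._heap)
        return tenant, kind, payload

def worker_cycle(queue, orchestrator, locks):
    """One pass: skip tenants another worker holds, so work per tenant is serial."""
    deferred, processed = [], []
    while (event := queue.dequeue()) is not None:
        tenant, kind, payload = event
        if tenant in locks:          # another worker owns this tenant: defer
            deferred.append(event)
            continue
        locks.add(tenant)
        try:
            orchestrator(tenant, kind, payload)
            processed.append(event)
        finally:
            locks.discard(tenant)
    for tenant, kind, payload in deferred:
        queue.enqueue(tenant, kind, payload)
    return processed
```

The per-tenant lock is what lets two messages sent in quick succession run one after the other instead of racing each other through the orchestrator.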
Tech Stack
A compact, modern stack chosen for fast iteration: a single web application serves the UI and the API, with background workers handling long-running AI work against the same datastore.
Triggers & Proactivity
The agent doesn't just respond — it can schedule its own follow-ups. Three types of triggers, all evaluated by the background worker:
"Check back on this in 3 days" — fires at the specified time, queues a trigger event, and the agent picks up where it left off.
"Alert me when task completion drops below 70%" — the worker evaluates conditions against live data every cycle.
"Nudge if nobody's touched this project in a week" — compares timestamps and fires when things go quiet.