How it works

The short version: an event-driven AI agent running against a shared plan, with workspaces, tasks, and decisions as its shared state. The long version is below — a high-level tour of the concepts, not a blueprint for rebuilding it.

The Request Flow

What actually happens when you send a message. This is the real pipeline — not a simplified marketing diagram.

  • You: browser
  • API: auth + validate
  • Messages: persist
  • Event Queue: prioritized
  • Worker: serialize per tenant
  • Orchestrator: reason → act → repeat
  • Context: plan, tasks, history
  • LLM: Claude / GPT
  • Tools: built-in & external
  • Research: web search & extract
  • MCP: external tools
  • Triggers: self-scheduled
  • Results: activity + response
  • State: tasks, plans, ...
  • Stripe: usage tracking
  • Email: briefs & alerts
  • UI Updates: live refresh

The Agent Loop

Not a single API call. The orchestrator runs in a loop — reasoning, calling tools, and checking whether it should keep going or respond, with a built-in safety cap to prevent runaways.

How a turn works

The orchestrator loads the full business context — your plan, active tasks, recent decisions, company settings, custom instructions, and chat history. It hands all of that to the LLM along with the full tool registry. The model reasons about what to do, picks tools to call, gets results back, and decides: keep going or respond. One message from you can trigger a chain of tool calls — research, task creation, plan updates — all in a single turn.
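The turn described above can be sketched as a loop: ask the model for a step, execute any tool calls it picks, feed results back, and repeat until the model chooses to respond or a safety cap trips. This is a minimal sketch under assumed names (`LLMStep`, `callLLM`, `runTool` are illustrative, not the real API):

```typescript
// Illustrative sketch of the reason → act → repeat loop with a runaway cap.
type ToolCall = { name: string; args: Record<string, unknown> };
type LLMStep =
  | { kind: "respond"; text: string }     // model decides to answer
  | { kind: "act"; calls: ToolCall[] };   // model decides to call tools

const MAX_ITERATIONS = 10; // built-in safety cap (assumed value)

async function runTurn(
  callLLM: (history: string[]) => Promise<LLMStep>,
  runTool: (call: ToolCall) => Promise<string>,
  context: string[],
): Promise<string> {
  const history = [...context]; // plan, tasks, decisions, chat history, ...
  for (let i = 0; i < MAX_ITERATIONS; i++) {
    const step = await callLLM(history);
    if (step.kind === "respond") return step.text;
    for (const call of step.calls) {
      // Tool results go back into context before the next reasoning pass.
      history.push(`tool:${call.name} → ${await runTool(call)}`);
    }
  }
  return "Stopped: iteration cap reached."; // runaway protection
}
```

One user message can therefore fan out into a chain of tool calls before a single response comes back.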

Tool categories

System · Tasks · Decisions · Communication · Triggers · Workspace · Files · Research · Dashboard · MCP (external)

Each category is a module of typed tool definitions. The registry composes them at startup and serves schemas to the LLM. MCP tools come from connected external servers — the agent discovers what's available and uses them alongside built-in tools.
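A registry like that can be sketched as follows. The module contents and tool names here are invented for illustration; the point is the shape: typed definitions composed at startup, with only the schema-facing fields served to the model.

```typescript
// Sketch of a tool registry composing typed tool modules at startup.
type ToolDef = {
  name: string;
  description: string;
  schema: Record<string, unknown>; // parameter schema shown to the LLM
  run: (args: Record<string, unknown>) => Promise<unknown>;
};

// Two assumed modules; the real system has many more categories.
const taskTools: ToolDef[] = [
  { name: "tasks.create", description: "Create a task",
    schema: { title: "string" }, run: async (a) => ({ created: a.title }) },
];
const researchTools: ToolDef[] = [
  { name: "research.search", description: "Web search",
    schema: { query: "string" }, run: async () => ({ results: [] }) },
];

class ToolRegistry {
  private tools = new Map<string, ToolDef>();
  compose(...modules: ToolDef[][]) {
    for (const mod of modules) for (const t of mod) this.tools.set(t.name, t);
    return this;
  }
  schemas() {
    // The LLM sees names, descriptions, and schemas, never the run functions.
    return [...this.tools.values()].map(({ name, description, schema }) =>
      ({ name, description, schema }));
  }
  get(name: string) { return this.tools.get(name); }
}

const registry = new ToolRegistry().compose(taskTools, researchTools);
```

MCP tools would enter the same way: discovered at connect time and composed into the registry next to the built-in modules.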

Interrupts & safety

If a new message arrives while the agent is mid-turn, the orchestrator detects it and stops gracefully — the current work is saved, and a continuation event is queued so nothing gets lost. There are also budget checks (credits mode pauses the agent if you run low), an emergency stop, and a hard cap on consecutive tool failures.
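The two safety mechanisms can be sketched like this — a checkpoint called between loop iterations, and a consecutive-failure counter. All names and the failure threshold are assumptions:

```typescript
// Sketch of graceful interruption: save progress and queue a continuation.
type QueuedEvent = { type: string; payload: Record<string, unknown> };

function checkpointTurn(
  hasNewMessage: () => boolean,
  saveProgress: () => void,
  enqueue: (e: QueuedEvent) => void,
): boolean {
  if (!hasNewMessage()) return false; // keep working
  saveProgress();                     // current work is not lost
  enqueue({ type: "continuation", payload: { reason: "interrupted" } });
  return true;                        // caller stops the turn
}

// Sketch of the hard cap on consecutive tool failures (threshold assumed).
const MAX_CONSECUTIVE_TOOL_FAILURES = 3;
let failures = 0;
function recordToolResult(ok: boolean): boolean {
  failures = ok ? 0 : failures + 1;   // any success resets the streak
  return failures >= MAX_CONSECUTIVE_TOOL_FAILURES; // true = stop the turn
}
```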

Event-Driven Everything

The system is asynchronous by design. User actions, triggers, and internal signals all flow through the same event queue.

When you send a message, it doesn't hit the AI directly. It gets written to a persistent event queue with a priority. A background worker picks events up, serializes work per tenant so parallel requests don't collide, and runs the orchestrator against the primary one. The same pipeline handles chat messages, trigger firings, task updates, and system signals — same queue, same worker, same orchestrator.
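The per-tenant serialization piece can be sketched by chaining each tenant's jobs onto a promise tail — same tenant runs in order, different tenants run in parallel. The class and method names are illustrative:

```typescript
// Sketch of per-tenant serialization for the background worker.
class TenantSerializer {
  private tails = new Map<string, Promise<void>>();

  run(tenantId: string, job: () => Promise<void>): Promise<void> {
    const tail = this.tails.get(tenantId) ?? Promise.resolve();
    const next = tail.then(job);                    // queue behind this tenant's last job
    this.tails.set(tenantId, next.catch(() => {})); // a failed job must not poison the chain
    return next;
  }
}
```

A real worker would pull persisted events by priority and pass each handler through `run`, so parallel requests for the same company never collide.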

Event types
  • User messages
  • Trigger firings
  • Task updates
  • Decision resolutions
  • System signals
Worker loops
  • Continuous event processing
  • Periodic trigger evaluation
  • Regular brief generation
  • Stale event recovery
  • Per-tenant serialization
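Because every event type flows through one queue, a discriminated union plus a priority ordering is a natural model. The field names and priority values below are assumptions, not the real schema:

```typescript
// Sketch of the unified event model with a prioritized pick.
type AgentEvent =
  | { type: "user_message"; text: string }
  | { type: "trigger_fired"; triggerId: string }
  | { type: "task_updated"; taskId: string }
  | { type: "decision_resolved"; decisionId: string }
  | { type: "system_signal"; signal: string };

const PRIORITY: Record<AgentEvent["type"], number> = {
  user_message: 0,       // highest: a human is waiting
  decision_resolved: 1,
  trigger_fired: 2,
  task_updated: 3,
  system_signal: 4,
};

function nextEvent(queue: AgentEvent[]): AgentEvent | undefined {
  // Lowest priority number wins; ties keep queue order (sort is stable).
  return [...queue].sort((a, b) => PRIORITY[a.type] - PRIORITY[b.type])[0];
}
```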

Tech Stack

A compact, modern stack chosen for fast iteration: a single web application serves the UI and the API, with background workers handling long-running AI work against the same datastore.

Frontend
  • Modern React framework: server & client components
  • Design system: custom tokens, typography, motion
  • Real-time UI: streaming responses, live activity
Backend
  • Typed API layer: schema-validated inputs & outputs
  • Relational datastore: primary source of truth
  • Per-tenant isolation: company-scoped access everywhere
AI & Research
  • Anthropic Claude: primary LLM provider
  • OpenAI: alternative LLM provider
  • Web research: live search & page extraction
  • MCP protocol: external tool connectivity
Platform
  • Stripe: billing & credits
  • Background processing: event-driven automation
  • Continuous delivery: automated releases & changelog

Triggers & Proactivity

The agent doesn't just respond — it can schedule its own follow-ups. Three types of triggers, all evaluated by the background worker:

Time-based

"Check back on this in 3 days" — fires at the specified time, queues a trigger event, and the agent picks up where it left off.

Condition-based

"Alert me when task completion drops below 70%" — the worker evaluates conditions against live data every cycle.

Inactivity-based

"Nudge if nobody's touched this project in a week" — compares timestamps and fires when things go quiet.