Chat
You type a message. Solmyr reads it, considers your entire business context, picks from 50+ tools, executes a chain of actions, and comes back with an answer. Sometimes that answer is a paragraph. Sometimes it's "done — I created the workspace, added 12 tasks, and scheduled a follow-up for next Tuesday."
It's a slide-in panel, not a separate page. You can open it from anywhere in the app, talk to Solmyr while looking at your dashboard or a workspace, and close it when you're done. The conversation is shared across your whole team — everyone sees the same thread, the same context, the same history.
This is not a chatbot
Most AI chat products work like this: you ask a question, you get an answer, end of story. Solmyr works like this: you ask a question, the AI considers your business plan, your open tasks, your constraints, your budget, your team size, the research it did last week, and the decision you deferred three days ago — and then it answers. And the answer might not just be text. It might be actions.
"Look into our competitors" doesn't get you a generic paragraph about competitive analysis. It triggers an actual research session — the AI browses the web, reads pages, synthesizes findings, creates a workspace with the results, and comes back with a structured report. One message, multiple tools, real output.
And here's the part that surprised even us: session 50 is genuinely different from session 1. The AI has accumulated weeks of context — your decisions, your progress, your evolving strategy. It doesn't start from zero. It starts from everything it knows about you.
What happens when you hit send
The journey from your message to the AI's response isn't instant, and it's not simple. Here's the full chain.
Your message is stored
The message, any attached files, and mention references get saved to the database. If you mentioned a workspace or task, the system resolves those into full context snapshots — block types, task details, statuses — so the AI gets a complete picture, not just a name.
An event is emitted
Your message queues an event for the agent to pick up. This is the same event system that powers triggers, briefs, and background automation. Chat isn't a special case — it's part of the same infrastructure.
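The shared-pipeline idea can be sketched in a few lines. This is a minimal illustration under the assumption of a single in-process queue; the event names and payload shapes are invented for the example, not the product's actual API.

```python
# Minimal sketch: chat messages, triggers, and briefs all flow through
# one event channel, so the agent handles them with the same machinery.
import queue

events = queue.Queue()

def emit(kind: str, payload: dict) -> None:
    """Queue an event for the agent to pick up."""
    events.put({"kind": kind, "payload": payload})

# A chat message and a scheduled trigger take the exact same path.
emit("chat.message", {"text": "Look into our competitors"})
emit("trigger.fired", {"trigger_id": "weekly-review"})

first = events.get()
```

The point of the sketch is that chat is just one more producer on a queue the automation system already drains.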
The context is assembled
The orchestrator builds a context package: your business profile, constraints, plan, recent conversation history, the current state of mentioned workspaces and tasks, custom instructions, and any attached documents or images. This is what makes every response context-aware.
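A context package like the one described might look roughly like this. The field names mirror the list above, but the function signature and the 20-message history window are assumptions for the sketch, not the orchestrator's real interface.

```python
# Hypothetical shape of the context package handed to the LLM call.
def build_context(profile, history, mentions=None, attachments=None,
                  instructions=""):
    """Bundle everything a context-aware response needs into one package."""
    return {
        "business_profile": profile,       # plan, constraints, team size
        "recent_history": history[-20:],   # assumed sliding window
        "mentions": mentions or {},        # resolved workspace/task snapshots
        "attachments": attachments or [],  # documents and images
        "custom_instructions": instructions,
    }

ctx = build_context({"plan": "bootstrap"}, [f"msg {i}" for i in range(30)])
```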
The agent loop begins
Solmyr calls the LLM with the full context plus all available tools. The model reasons about what to do, optionally calls tools (create tasks, do research, update plans), looks at the results, and decides whether to continue or respond. The loop runs for as many turns as a complex request needs, with a safety cap to prevent runaways.
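The loop itself is simple in outline: call the model, execute any tool it asks for, feed the result back, repeat until it answers or hits the cap. Here is a stripped-down sketch with a stand-in model and a stand-in tool; none of these names are the real integration.

```python
MAX_TURNS = 8  # safety cap to prevent runaway loops

def fake_llm(messages):
    """Stand-in model: asks for one research call, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "web_search", "args": {"query": "competitors"}}
    return {"answer": "Here is what I found."}

TOOLS = {"web_search": lambda query: f"results for {query!r}"}

def agent_loop(user_message, llm=fake_llm):
    messages = [{"role": "user", "content": user_message}]
    for _ in range(MAX_TURNS):
        step = llm(messages)
        if "answer" in step:          # model decided to respond
            return step["answer"]
        result = TOOLS[step["tool"]](**step["args"])  # model called a tool
        messages.append({"role": "tool", "content": result})
    return "Stopped: iteration cap reached."
```

The real loop has streaming, error handling, and budget checks layered on top, but the shape is the same: reason, act, observe, repeat.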
You see the response
While the agent works, you see live activity updates — "Researching," "Creating task," "Updating plan." When the loop finishes, the final message appears in the chat. The whole thing takes anywhere from 2 seconds (simple question) to a couple of minutes (deep research + multi-step planning).

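The activity labels shown above plausibly come from a simple mapping of internal tool names to display strings. The tool names here are assumptions; only the labels appear on this page.

```python
# Hypothetical mapping from internal tool names to activity-feed labels.
LABELS = {
    "web_search": "Researching",
    "create_task": "Creating task",
    "update_plan": "Updating plan",
    "browse_url": "Browsing URL",
}

def activity_label(tool_name: str) -> str:
    """Fall back to a generic label for tools without a friendly name."""
    return LABELS.get(tool_name, "Working")
```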
Mentions
@Workspaces
Type @ and start typing a workspace name. When you mention a workspace, Solmyr gets a snapshot of its entire structure — every block type, every title, the full layout. So when you say "update the competitor analysis in @Market Research," the AI knows exactly what's in that workspace and how to work with it.
@Tasks
Same idea, but for tasks. Mentioning a task pulls in its title, type, priority, status, assignee, and description. So "what's the status on @Launch landing page?" gives the AI full context without you having to explain which task you mean or copy-paste details from a board.
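Resolution of a task mention can be pictured like this: the bare `@` name is swapped for a snapshot with the fields listed above. The in-memory store and field names are illustrative, not the real schema.

```python
# Hypothetical task store; a real system would query the database.
TASKS = {
    "Launch landing page": {
        "type": "feature", "priority": "high", "status": "in_progress",
        "assignee": "sam", "description": "Ship the new landing page.",
    }
}

def resolve_task_mentions(message: str) -> dict:
    """Replace each @-mentioned task name with a full context snapshot."""
    snapshots = {}
    for title, task in TASKS.items():
        if f"@{title}" in message:
            snapshots[title] = {"title": title, **task}
    return snapshots

ctx = resolve_task_mentions("What's the status on @Launch landing page?")
```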
Attachments
You can give Solmyr files. Not just tell it about them — actually hand them over. Paste, drag, or pick from a file dialog.
Images
PNG, JPG, GIF, WebP · 5MB max
Paste from clipboard, drag onto the chat panel, or use the attach button. Images are sent as multimodal content — the AI actually sees them, not just a filename. Useful for screenshots, mockups, or photos of whiteboards you swore you'd transcribe later.
Documents
TXT, MD, CSV, JSON, PDF · 10MB max
Text-based files are read and included in the conversation context. PDFs are converted to text. CSV and JSON files become structured data the AI can analyze. Drop a business plan, a financial spreadsheet, or meeting notes — Solmyr reads them and factors them into its response.
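The intake rules above (accepted types and size caps) can be expressed as a small classifier. The limits are the ones stated on this page; the function and its return values are a sketch, not the actual validation code.

```python
IMAGE_TYPES = {"png", "jpg", "gif", "webp"}
DOC_TYPES = {"txt", "md", "csv", "json", "pdf"}
LIMITS = {"image": 5 * 1024 * 1024, "document": 10 * 1024 * 1024}

def classify_attachment(filename: str, size: int) -> str:
    """Decide how a file is handled: image, document, or rejected."""
    ext = filename.rsplit(".", 1)[-1].lower()
    if ext in IMAGE_TYPES:
        kind = "image"
    elif ext in DOC_TYPES:
        kind = "document"
    else:
        return "rejected: unsupported type"
    if size > LIMITS[kind]:
        return f"rejected: over {LIMITS[kind] // (1024 * 1024)}MB limit"
    return kind
```

Images go on to the model as multimodal content; documents are read into the conversation context.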
50+ tools at its disposal
When Solmyr responds to your message, it doesn't just generate text. It has access to a full toolkit — and it decides which tools to use based on what you're asking for. A single message can trigger a chain of tool calls: research a topic, create a workspace, add tasks to a board, set a reminder to follow up. Here's what's in the toolbox:
Tasks: Create, update, list, review, comment on tasks. Pick the next task. Manage the full task lifecycle.
Research: Web search, browse URLs, synthesize findings. Full research sessions with multiple rounds.
Workspaces: Create spaces, add blocks (boards, tables, calendars, metrics, checklists), manage items, reorder content.
Planning: Read and update the master plan, roadmap, business context, custom instructions. Get usage stats.
Triggers: Set time-based, condition-based, and inactivity triggers. Schedule its own follow-ups and reminders.
Communication: Post announcements, share thoughts, create briefings. Set the morning brief content.
Plus any custom tools you've connected via MCP integrations. The agent discovers those at runtime and weaves them into its reasoning automatically.
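One way to picture this: built-in tools and runtime-discovered MCP tools land in the same flat registry, so the agent reasons over one toolbox. Everything here, from the decorator to the tool names, is an illustrative sketch.

```python
registry = {}

def register(name):
    """Decorator that adds a built-in tool to the shared registry."""
    def wrap(fn):
        registry[name] = fn
        return fn
    return wrap

@register("create_task")
def create_task(title):
    return {"created": title}

def discover_mcp_tools(mcp_tools: dict) -> None:
    """MCP tools found at runtime join the same registry as built-ins."""
    registry.update(mcp_tools)

discover_mcp_tools({"crm.lookup": lambda name: {"account": name}})
```

From the agent's point of view there is no difference between the two kinds; it just sees names it can call.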
While Solmyr works
Complex requests don't resolve instantly. While the agent is working, you see a live activity feed in the chat — a pulsing indicator with human-readable labels for what it's doing: "Researching," "Creating task," "Updating plan," "Browsing URL."
Consecutive tool calls get grouped into expandable steps, so a 10-tool-call chain doesn't flood the chat with noise. You see the summary, and you can expand to see each step if you're curious. Or you can just wait for the final message and trust that the process worked. Either approach is valid.
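The grouping rule is straightforward: runs of consecutive tool-call events collapse into one expandable step, and anything else (a message, an error) breaks the run. A sketch, with invented event shapes:

```python
def group_activity(events):
    """Collapse consecutive tool_call events into step groups."""
    groups, current = [], []
    for event in events:
        if event["type"] == "tool_call":
            current.append(event)
        else:
            if current:
                groups.append({"type": "step_group", "calls": current})
                current = []
            groups.append(event)
    if current:  # flush a trailing run of tool calls
        groups.append({"type": "step_group", "calls": current})
    return groups

feed = group_activity(
    [{"type": "tool_call"}] * 3
    + [{"type": "message"}]
    + [{"type": "tool_call"}] * 2
)
```

Ten tool calls in a row render as one collapsed step instead of ten chat bubbles.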
If something goes wrong — credits run out, the API key is bad, a tool chain fails — you get an error bubble with a clear message and, where possible, a direct link to fix the issue. Not a cryptic stack trace. An actual helpful error, because we got tired of unhelpful ones ourselves.
Guardrails
Per-message iteration cap to prevent runaways
Budget tracking per conversation
Automatic pause if spending exceeds limits
Graceful stop when tool calls keep failing
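The four guardrails above fit naturally into one small state machine checked on every loop turn. The thresholds here are placeholders for illustration, not the product's actual limits.

```python
class Guardrails:
    """Track turns, spend, and failures; decide whether the loop continues."""

    def __init__(self, max_turns=8, budget=1.00, max_failures=3):
        self.turns = 0
        self.spend = 0.0
        self.failures = 0
        self.max_turns = max_turns
        self.budget = budget          # illustrative per-conversation budget
        self.max_failures = max_failures

    def check(self, cost=0.0, failed=False):
        self.turns += 1
        self.spend += cost
        self.failures = self.failures + 1 if failed else 0  # consecutive only
        if self.turns > self.max_turns:
            return "stop: iteration cap"
        if self.spend > self.budget:
            return "pause: budget exceeded"
        if self.failures >= self.max_failures:
            return "stop: repeated tool failures"
        return "continue"
```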
One conversation, whole team
The chat isn't per-user. It's per-company. Everyone on the team sees the same thread, the same messages, the same AI responses. When your co-founder asks Solmyr to research a market at 2am, you wake up and see the full conversation — question, research results, created workspace, everything.
This was a deliberate design choice. We tried per-user threads and it created a fragmented experience — the AI would give contradictory advice to different people because it couldn't see the full picture. Shared context means shared understanding. It's like having a team Slack channel where one member happens to be an AI with perfect memory and no opinions about your music taste.
Need a fresh start? The "New chat" button clears the conversation history. This is team-visible too, so maybe don't do it during someone else's research session.
Getting more out of the conversation
Be specific about outcomes, not process
"Research competitors in the sustainable packaging space and create a comparison workspace" works better than "can you help me understand my competition?" The AI is capable of multi-step execution — let it figure out the how.
Use mentions liberally
Mentioning a workspace or task gives the AI surgical context. Instead of "update that thing from last week," try "update the revenue projections in @Financial Model." Specificity costs you 2 seconds and saves the AI from guessing.
Attach instead of describing
If you have a document, spreadsheet, or screenshot — just drop it in. The AI can read it directly. Describing a 20-page business plan in chat is an exercise in futility when you could paste the PDF.
Don't micromanage the tools
You don't need to say "use the create_task tool to make a task." Just say what you want done. The agent figures out which tools to call. That's literally its job.
Check the activity feed
If the response seems off, expand the activity steps to see what the AI actually did. Sometimes the tool chain reveals a misunderstanding you can correct in your next message rather than starting over.
Next up
Inbox
Where decisions, tasks, and alerts land when they need your attention.