// Message flow — developer-focused walkthrough of how an inbound message
// becomes an agent reply. Channel-agnostic.
function Flow() {
  const shell = useShell();
  const tocItems = [
    { id: 'overview', label: 'Overview' },
    { id: 'path', label: 'The path' },
    { id: 'stages', label: 'Stage by stage' },
    { id: 'invariants', label: 'Invariants preserved' },
    { id: 'example', label: 'Example walkthrough' },
    { id: 'example-setup', label: 'Setup', level: 3 },
    { id: 'example-turns', label: 'Turn by turn', level: 3 },
    { id: 'example-shows', label: 'What it shows', level: 3 },
    { id: 'takeaways', label: 'Takeaways' },
  ];

  const ARROW = '\u25BC';

  return (
    <div className="app">
      <Topbar section="guides" theme={shell.theme} setTheme={shell.setTheme} onSearch={() => shell.setSearchOpen(true)} onMenuToggle={() => shell.setMobileMenuOpen(true)} />
      <div className="main">
        <Sidebar activeId="flow" mobileOpen={shell.mobileMenuOpen} onMobileClose={() => shell.setMobileMenuOpen(false)} />
        <article className="content">
          <div className="crumbs">
            <a href="index.html">Docs</a>
            <span className="sep">/</span>
            <a href="#">Concepts</a>
            <span className="sep">/</span>
            <span>Message flow</span>
          </div>

          <div className="eyebrow">Concepts · 8 min read</div>
          <h1 className="h1">How a message <em>flows.</em></h1>
          <p className="lede">
            Every inbound message — typed into the web UI, relayed from a chat app or channel
            surface, or raised by a scheduled cron — takes the same path. The channel and the agent
            differ; the flow does not. This page walks a single request from the HTTP boundary to
            the final streamed token, with a worked example at the end.
          </p>

          <h2 id="overview" className="h2">Overview</h2>
          <p>
            An inbound message is authenticated, routed to the right agent + session, enriched with
            tenant-specific context from Markdown files, then handed to a bounded agent loop that
            streams tokens and executes tools until the model emits no more tool calls. The return
            path is <a className="inline" href="https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events" target="_blank" rel="noreferrer">Server-Sent Events</a> —
            so the client sees text, tool calls, and tool results as they happen.
          </p>
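
          <p>
            As a rough sketch, a client consuming that stream might look like the snippet below.
            The event names (<code>text_delta</code>, <code>tool_call_start</code>,{' '}
            <code>tool_result</code>, <code>done</code>) are the ones this page documents, but the
            wire framing and the <code>handleEvent</code> helper are illustrative, not the canonical
            protocol:
          </p>

          <pre className="codeblock" style={{padding:'18px 20px', fontSize:12.5, lineHeight:1.55, overflow:'auto'}}>
{`// Hypothetical client loop (inside an async function): POST, then read SSE.
const res = await fetch('/api/admin', {
  method: 'POST',
  headers: { 'X-API-Key': tenantKey, 'Content-Type': 'application/json' },
  body: JSON.stringify({ message: text }),
});
const reader = res.body.getReader();
const decoder = new TextDecoder();
let buf = '';
for (;;) {
  const { value, done } = await reader.read();
  if (done) break;
  buf += decoder.decode(value, { stream: true });
  // SSE frames end with a blank line; keep the trailing partial frame in buf.
  const frames = buf.split('\\n\\n');
  buf = frames.pop();
  for (const frame of frames) handleEvent(frame); // text_delta, tool_call_*, tool_result, done
}`}
          </pre>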

          <h2 id="path" className="h2">The path</h2>
          <p>A single request, top to bottom:</p>

          <pre className="codeblock" style={{padding:'18px 20px', fontSize:12.5, lineHeight:1.55, overflow:'auto'}}>
{`Client  (web UI, chat app / channel surface, cron trigger)
   ${ARROW}  POST /api/{endpoint}   X-API-Key: <tenant key>

Auth middleware
   ${ARROW}  X-API-Key  ${ARROW}  platform.db  ${ARROW}  accountId

Route handler
   ${ARROW}  resolveAgent(accountId, channel, sender)  ${ARROW}  agentId
   ${ARROW}  makeSessionKey(agentId, channel, sender)  ${ARROW}  stable session id

Agent runtime
   ${ARROW}  buildSystemPrompt(accountId, agentType)
         reads IDENTITY, SOUL, USER, AGENTS, TOOLS, MEMORY, BOOTSTRAP*
   ${ARROW}  buildDynamicContext(accountId)
         prepends HEARTBEAT + timestamp to the message
   ${ARROW}  tool handlers:  role-filtered registry (read/write/web_search/bash/cron/message/load_skill/...)

Brain runner loop  (up to maxTurns)
   ${ARROW}  load session JSON  ${ARROW}  JIT compaction  ${ARROW}  LLM stream
   ${ARROW}  if no tool calls    ${ARROW}  emit "done"
   ${ARROW}  if tool calls       ${ARROW}  run handlers  ${ARROW}  append results  ${ARROW}  loop

Stream back:  text_delta, tool_call_*, tool_result, done`}
          </pre>

          <Callout type="note" title="One endpoint per agent binding, not per channel">
            <code>POST /api/admin</code> is the web-UI entry for the owner-facing agent. Chat apps and
            channel surfaces use their own endpoints, but every endpoint funnels into the same
            <code> resolveAgent </code>+<code> makeSessionKey </code> pair. The only per-channel code
            lives in adapters, not in the runtime.
          </Callout>
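
          <p>
            In sketch form, the routing pair reduces to two small functions. The signatures match
            the path diagram above; the bodies and the <code>lookupBinding</code>,{' '}
            <code>defaultAgentFor</code>, and <code>hash</code> helpers are assumptions for
            illustration, not the actual source:
          </p>

          <pre className="codeblock" style={{padding:'18px 20px', fontSize:12.5, lineHeight:1.55, overflow:'auto'}}>
{`// Hypothetical shape of the routing pair. Every channel adapter calls these two.
function resolveAgent(accountId, channel, sender) {
  // Bindings live in the tenant's config, not in the adapter: the same
  // lookup runs whether the message came from the web UI, a chat app, or cron.
  const binding = lookupBinding(accountId, channel, sender); // assumed helper
  return binding ? binding.agentId : defaultAgentFor(accountId); // assumed helper
}

function makeSessionKey(agentId, channel, sender) {
  // Deterministic: the same (agent, channel, sender) triple always maps to
  // the same session, so no DB row is needed to find a conversation.
  return hash(agentId + ':' + channel + ':' + sender); // assumed helper
}`}
          </pre>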

          <h2 id="stages" className="h2">Stage by stage</h2>
          <p>Every stage is a small module with one responsibility. You can replace any stage without touching the others.</p>

          <div style={{overflow:'auto', margin:'16px 0 24px'}}>
          <table className="doc-table">
            <thead><tr><th>Stage</th><th>Responsibility</th><th>Inputs</th><th>Outputs</th></tr></thead>
            <tbody>
              <tr><td><strong>Client</strong></td><td>Send message; persist <code>contact_id</code>; render SSE events</td><td>user input, stored tenant API key</td><td>HTTP POST with SSE response</td></tr>
              <tr><td><strong>Auth</strong></td><td>Resolve which tenant is calling</td><td><code>X-API-Key</code> header</td><td><code>accountId</code> in request context</td></tr>
              <tr><td><strong>Routing</strong></td><td>Pick the right agent + session</td><td><code>(accountId, channel, sender)</code></td><td><code>agentId</code>, session id</td></tr>
              <tr><td><strong>Context builder</strong></td><td>Assemble system prompt</td><td>tenant <code>.md</code> files + skills index</td><td>system prompt string</td></tr>
              <tr><td><strong>Dynamic context</strong></td><td>Inject time-sensitive state</td><td><code>HEARTBEAT.md</code>, timestamp</td><td>prepended to user message</td></tr>
              <tr><td><strong>Brain runner</strong></td><td>Run the agent loop</td><td>system prompt, history, tools</td><td>stream of events</td></tr>
              <tr><td><strong>Tool handlers</strong></td><td>Execute any tool in the agent's role-filtered registry</td><td>tool call input</td><td>tool result string</td></tr>
              <tr><td><strong>Session store</strong></td><td>Persist conversation</td><td>messages array</td><td>JSON file + SQLite index row</td></tr>
            </tbody>
          </table>
          </div>
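
          <p>
            The brain-runner stage can be read as a single bounded loop. The pseudocode below
            paraphrases the behavior described on this page — the function names are invented, not
            the actual module API:
          </p>

          <pre className="codeblock" style={{padding:'18px 20px', fontSize:12.5, lineHeight:1.55, overflow:'auto'}}>
{`// Pseudocode for the bounded agent loop (names are illustrative).
for (let turn = 0; turn < maxTurns; turn++) {
  const history = loadSession(sessionId);        // session JSON on disk
  const messages = compactIfNeeded(history);     // JIT compaction before the call
  const { text, toolCalls } = await streamLLM(systemPrompt, messages, tools);
  if (toolCalls.length === 0) {
    emit('done');                                // no tool calls -> final answer
    break;
  }
  for (const call of toolCalls) {
    const result = await runHandler(call);       // role-filtered registry
    appendToSession(sessionId, call, result);    // results feed the next turn
  }
}`}
          </pre>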

          <h2 id="invariants" className="h2">Invariants preserved</h2>
          <p>Six properties hold for every request regardless of channel, agent, or tenant:</p>

          <div style={{overflow:'auto', margin:'16px 0 24px'}}>
          <table className="doc-table">
            <thead><tr><th>Invariant</th><th>How it holds</th><th>Where to look</th></tr></thead>
            <tbody>
              <tr><td><strong>Tenant isolation</strong></td><td><code>accountId</code> derived from the key selects the DB handle and the filesystem root</td><td>Auth middleware</td></tr>
              <tr><td><strong>Filesystem is truth</strong></td><td>Prompt is built from <code>.md</code> files each turn; history is a JSON file; SQLite is a rebuildable index</td><td>Context builder + brain runner</td></tr>
              <tr><td><strong>Role-gated tool registry</strong></td><td>LLM sees ~28 first-party tools grouped by domain, filtered by the chosen agent's allowlist; sensitive tools (<code>bash</code>, <code>exec</code>, <code>write</code>, <code>cron</code>, <code>subagents</code>) are admin-only</td><td>Tool definitions</td></tr>
              <tr><td><strong>Channel-agnostic routing</strong></td><td>Web, chat apps, channel surfaces, and cron all go through the same <code>resolveAgent</code> + <code>makeSessionKey</code></td><td>Routing</td></tr>
              <tr><td><strong>Stable session identity</strong></td><td><code>hash(agentId + channel + sender)</code> — no DB row is required to identify a session</td><td>Session key</td></tr>
              <tr><td><strong>Bounded loops</strong></td><td><code>maxTurns</code> caps runaway tool-calling so a single request cannot drain the budget</td><td>Brain runner</td></tr>
            </tbody>
          </table>
          </div>
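
          <p>
            The "stable session identity" invariant is cheap to hold because the key is derived, not
            stored. A minimal Node-style sketch — the delimiter and the choice of SHA-256 are
            assumptions; only the <code>(agentId, channel, sender)</code> triple comes from this
            page:
          </p>

          <pre className="codeblock" style={{padding:'18px 20px', fontSize:12.5, lineHeight:1.55, overflow:'auto'}}>
{`const crypto = require('crypto');

// Illustrative: same triple in, same key out, no lookup table required.
function makeSessionKey(agentId, channel, sender) {
  return crypto.createHash('sha256')
    .update([agentId, channel, sender].join(':'))
    .digest('hex');
}`}
          </pre>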

          <h2 id="example" className="h2">Example walkthrough</h2>
          <p>A worked trace of a realistic prompt. The user types:</p>

          <CodeBlock
            tabs={[
              { label: 'user', raw: `can you send me the sales report today`,
                code: `<span class="tok-com">user:</span> <span class="tok-str">"can you send me the sales report today"</span>` },
            ]}
          />

          <h3 id="example-setup" className="h3">Setup assumed on the tenant side</h3>
          <ul>
            <li>A skill exists at <code>/data/tenants/&#123;id&#125;/skills/sales-report/SKILL.md</code> describing how to produce the report.</li>
            <li>A <code>tunder sales</code> subcommand is registered that queries the tenant's connected data source.</li>
            <li>One or more channel surfaces are bound so the agent can deliver — the web UI, chat apps, or any other inbound/outbound surface the tenant has configured.</li>
          </ul>
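
          <p>
            A <code>SKILL.md</code> for this setup might look like the fragment below. The layout,
            headings, and wording are invented for illustration; the <code>tunder</code> invocations
            are the ones the turn-by-turn trace exercises:
          </p>

          <pre className="codeblock" style={{padding:'18px 20px', fontSize:12.5, lineHeight:1.55, overflow:'auto'}}>
{`# sales-report

Generate and deliver today's sales summary.

## Steps
1. Query the data:   tunder sales summary --date today --format json
2. Render it:        tunder render --template daily-sales --data -
3. Deliver it:       tunder send --channel <owner_preferred> --to owner --body '...'

## Notes
- Deliver over the owner's preferred channel surface.
- Report totals, order count, and change vs. yesterday.`}
          </pre>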

          <h3 id="example-turns" className="h3">Turn by turn</h3>
          <div style={{overflow:'auto', margin:'16px 0 24px'}}>
          <table className="doc-table">
            <thead><tr><th style={{width:64}}>Turn</th><th>LLM emits</th><th>Runtime does</th><th>Stream events</th></tr></thead>
            <tbody>
              <tr>
                <td><strong>1</strong></td>
                <td><code>tool_call: load_skill(&#123; name: "sales-report" &#125;)</code> — the skills index in the system prompt had a line like <code>sales-report: Generate and deliver today's sales summary</code>, so the model pulls the full procedure</td>
                <td>Reads <code>/data/tenants/&#123;id&#125;/skills/sales-report/SKILL.md</code> and returns its body as the tool result</td>
                <td><code>tool_call_start</code> &rarr; <code>tool_result</code></td>
              </tr>
              <tr>
                <td><strong>2</strong></td>
                <td><code>tool_call: bash(&#123; command: "tunder sales summary --date today --format json" &#125;)</code> — skill told it which command to use</td>
                <td>Spawns Docker sandbox with <code>cwd=/data/tenants/&#123;id&#125;/</code>, runs the <code>tunder</code> dispatcher, which queries the tenant's sales data source and returns JSON on stdout</td>
                <td><code>tool_call_start</code> &rarr; <code>tool_result</code> (JSON payload)</td>
              </tr>
              <tr>
                <td><strong>3</strong></td>
                <td><code>tool_call: bash(&#123; command: "tunder render --template daily-sales --data - &lt;&lt; EOF ... EOF" &#125;)</code> — skill recommended formatting via a template</td>
                <td>Template engine reads <code>/data/tenants/&#123;id&#125;/templates/daily-sales.md</code>, injects the JSON, returns Markdown</td>
                <td><code>tool_call_start</code> &rarr; <code>tool_result</code> (rendered report)</td>
              </tr>
              <tr>
                <td><strong>4</strong></td>
                <td><code>tool_call: bash(&#123; command: "tunder send --channel &lt;owner_preferred&gt; --to owner --body '...'" &#125;)</code> — skill said "deliver over the owner's preferred channel surface"</td>
                <td>Looks up the owner's channel binding, dispatches via the registered adapter for that surface, returns <code>&#123; status: "sent", message_id: "..." &#125;</code></td>
                <td><code>tool_call_start</code> &rarr; <code>tool_result</code></td>
              </tr>
              <tr>
                <td><strong>5</strong></td>
                <td><code>text_delta: "Sent today's sales report — $42,180 across 37 orders, up 12% vs. yesterday."</code> — no more tool calls</td>
                <td>Runner detects zero tool calls, emits <code>done</code>, persists session</td>
                <td><code>text_delta*</code> &rarr; <code>done</code></td>
              </tr>
            </tbody>
          </table>
          </div>

          <Callout type="info" title="Why the model knew which commands to call">
            The LLM did not guess command names. It read them out of <code>SKILL.md</code> in turn 1
            and then invoked them in turns 2–4. Without the skill, the model would either have to
            probe with <code>bash "tunder --help"</code> or refuse. Skills are the contract between
            the LLM and your domain.
          </Callout>

          <h3 id="example-shows" className="h3">What this example shows about the architecture</h3>
          <div style={{overflow:'auto', margin:'16px 0 24px'}}>
          <table className="doc-table">
            <thead><tr><th>Claim</th><th>Evidence in this trace</th></tr></thead>
            <tbody>
              <tr><td><strong>Curated registry covers arbitrary workflows</strong></td><td>The 4-step pipeline (load &rarr; query &rarr; render &rarr; deliver) needed only <code>load_skill</code> and <code>bash</code> from the admin agent's registry. New capabilities land as new tool files or per-tenant plugin registrations; the LLM contract (tool names + schemas) is the stable interface.</td></tr>
              <tr><td><strong>Skills are procedural memory</strong></td><td>One <code>SKILL.md</code> file drove four tool calls. Updating the skill changes the procedure with no backend deploy.</td></tr>
              <tr><td><strong>Filesystem is truth</strong></td><td>Skill body, template, and delivery bindings were all read from <code>/data/tenants/&#123;id&#125;/</code> on this single request. No migration, no schema lookup.</td></tr>
              <tr><td><strong>Channel-agnostic delivery</strong></td><td>The agent dispatched <em>to</em> the owner's preferred channel surface from a request that came <em>from</em> a different surface. Same <code>resolveAgent</code> / <code>makeSessionKey</code>, different outbound adapter.</td></tr>
              <tr><td><strong>Bounded loops</strong></td><td>Four tool turns used out of <code>maxTurns=20</code>. The loop is the control flow.</td></tr>
              <tr><td><strong>Session persists</strong></td><td>Turn 5's reply is saved alongside turns 1–4 in the session JSON. The next message from the user continues the same thread.</td></tr>
            </tbody>
          </table>
          </div>

          <h2 id="takeaways" className="h2">Takeaways</h2>
          <ol>
            <li>
              <strong>New capability = new tunder subcommand + a skill that mentions it.</strong> No
              API changes, no tool-schema changes, no redeploy of the LLM contract.
            </li>
            <li>
              <strong>The LLM doesn't know SQL, channel APIs, or templating libraries.</strong> It
              knows shell, and it knows that skills document the right invocations.
            </li>
            <li>
              <strong>Latency is dominated by LLM turns, not tool calls.</strong> Five turns ≈ five
              LLM round-trips. Tool execution is sub-second against local SQLite and outbound HTTP.
            </li>
            <li>
              <strong>Every boundary is observable.</strong> SSE events expose the agent's reasoning
              trace to the client in real time — a debugger for free.
            </li>
          </ol>

          <Feedback />
          <PageFoot next={{ label: 'API reference', href: 'api-reference.html' }} />
        </article>
        <TOC items={tocItems} />
      </div>
      <SearchOverlay open={shell.searchOpen} onClose={() => shell.setSearchOpen(false)} />
      <TweaksPanel visible={shell.tweaksVisible} theme={shell.theme} setTheme={shell.setTheme} />
    </div>
  );
}
ReactDOM.createRoot(document.getElementById('root')).render(<Flow />);
