Agentic AI Atlas by a5c.ai
Agentic AI Atlas · Hybrid Agentic Systems
Page JSON

page:docs-articles-hybrid-agentic-systems-two-loops-control-plane-guide

Inspect the normalized record payload exactly as the atlas UI reads it.

File · wiki/docs/articles/hybrid-agentic-systems-two-loops-control-plane-guide.md · Cluster · wiki
Record JSON
{
  "id": "page:docs-articles-hybrid-agentic-systems-two-loops-control-plane-guide",
  "_kind": "Page",
  "_file": "wiki/docs/articles/hybrid-agentic-systems-two-loops-control-plane-guide.md",
  "_cluster": "wiki",
  "attributes": {
    "nodeKind": "Page",
    "sourcePath": "docs/articles/Hybrid Agentic Systems_ Two-loops Control Plane Guide.md",
    "sourceKind": "repo-docs",
    "title": "Hybrid Agentic Systems",
    "displayName": "Hybrid Agentic Systems",
    "slug": "docs/articles/hybrid-agentic-systems-two-loops-control-plane-guide",
    "articlePath": "wiki/docs/articles/Hybrid Agentic Systems_ Two-loops Control Plane Guide.md",
    "article": "\n# Hybrid Agentic Systems\n## Overview\n\nThis guide presents a conceptual framework for building hybrid agentic systems where:\n\n- a **symbolic, code-defined orchestrator** governs progression, journaling, and phase boundaries\n- an **LLM-powered harness** performs adaptive work with tools (planning, command execution, iteration)\n\nSymbolic logic is shared and can be used as:\n\n- **orchestrator process rules** (the canonical source)\n- **symbolic tools** callable by the harness\n- **symbolic tasks** invoked by orchestration\n\nThe system’s power comes from **stack-depth interleaving**: orchestration steps can contain harness work sessions, and those sessions can consult (or trigger) symbolic checks mid-flight.\n\nThe goal is to help you decide what execution logic belongs where, how to design the integration points, and how to enforce **guardrails + quality gates** without sacrificing capability.\n\n---\n\n## 1) The core building blocks\n\nA hybrid agentic system is built from **two primary components**, plus a shared layer of **symbolic capabilities** that can be invoked from either side.\n\n### A) **Symbolic Orchestrator (Process Engine)**\n\nThe orchestrator is a **code-defined process** that enforces:\n\n- the system’s **ground truth state** and progression rules\n- invariants (**“this must never happen”**)\n- budgets (**time/cost/tool limits**)\n- permissions (**what actions are allowed**)\n- quality gates (**what must be proven before moving forward**)\n- journaling (**what happened, in what order**)\n- time travel + forking (**replay a past point, branch a new run**)\n\nIt is responsible for making execution **dependable**.\n\n### B) **Agent Harness (LLM Runtime)**\n\nThe harness is not “just an LLM call.” Modern harnesses (like coding agents) often include:\n\n- iterative planning and re-planning\n- tool calling (files, terminal, search, code execution)\n- command execution + parsing results\n- incremental fixes until checks pass\n- producing 
structured artifacts (plans, diffs, summaries)\n- multi-step reasoning with constraints\n- sub-agents and delegation inside the harness\n\nIt is responsible for solving **fuzzy parts of work** and adapting to real-world feedback.\n\n### C) **Symbolic logic surfaces (shared, callable capabilities)**\n\nSymbolic logic is not only “inside orchestration.” In a strong design it appears in multiple places, all consistent with the same rules:\n\n- **inside the orchestrator process** (stage transitions, invariants, gates, budgets)\n- **as symbolic tools callable by the harness** (policy checks, gate evaluation, scope rules, deterministic transforms)\n- **as symbolic tasks callable by orchestration** (validators, analyzers, schedulers, reducers, diff scanners)\n\nThis matters because the \"**symbolic vs agentic**\" split is not about location. It is about **who is responsible for correctness** and **how results are proven**.\n\n---\n\n## 2) The two loops (and why both are needed)\n\n### **Loop 1: Orchestration Loop (Symbolic)**\n\nA process stepper that progresses a run through explicit stages.\n\nTypical cycle:\n\n1. Reconstruct **“what is true”** from the journal\n2. Determine what stage the run is in\n3. Check gates, constraints, budgets\n4. Choose the next allowed transition\n5. Emit the next effect (or wait)\n6. Record results back into the journal\n\nThis loop is about **control, safety, repeatability, and traceability**.\n\n### **Loop 2: Agentic Loop (Harness)**\n\nA tool-using reasoning loop that can iterate until it reaches a local objective.\n\nTypical cycle:\n\n1. Read current objective + constraints\n2. Decide what evidence is needed\n3. Call tools, inspect results\n4. Update plan or actions\n5. 
Produce an output (patch, plan, answer, report)\n\nThis loop is about **solving the task**, especially when information is incomplete and the path is uncertain.\n\n---\n\n## 3) The critical point: execution logic can live in either loop\n\nA common mistake is to assume:\n\n- “the agent thinks and proposes”\n- “the orchestrator executes”\n\nThat’s not the right mental model.\n\nIn real systems:\n\n- the harness can execute meaningful work (run commands, edit files, iterate)\n- the orchestrator can execute effects too (dispatch steps, run validators, schedule retries)\n\nThe real craft is deciding:\n\n- **Which execution decisions must be symbolic, and which can be agentic?**\n\nThe system becomes strong when each loop is used for what it’s best at.\n\n---\n\n## 4) A practical allocation guide: what goes where?\n\nThe design challenge is not “LLM vs orchestrator.” It is deciding which parts of execution are **deterministic/symbolic** and which parts are **adaptive/agentic**.\n\nSymbolic logic can show up in multiple places:\n\n- process rules inside the orchestrator (stage transitions, budgets, gates)\n- symbolic tools the harness can call (policy checks, gate evaluation)\n- symbolic tasks the orchestrator can run (validators, analyzers, reducers)\n\n### **Put it in Symbolic Logic (deterministic capabilities) when…**\n\nThese are decisions that must be **stable, enforceable, and auditable**:\n\n- Safety and permissions (what actions are allowed)\n- Budgets and hard limits (time, money, number of tool calls)\n- State transitions (what stage you’re in)\n- Concurrency rules (what can run in parallel)\n- Retry/timeout policy (what happens when tools fail)\n- Idempotency and deduplication (avoid double execution)\n- Quality gates (what proof is required to progress)\n- Compliance requirements and audit logging\n\nWhere it lives:\n\n- as **orchestrator process rules** (canonical)\n- and/or as **symbolic tools/tasks** (so both loops can consult the same truth)\n\n### **Put 
it in the Agent Harness (adaptive capabilities) when…**\n\nThese are decisions that benefit from **flexible reasoning**:\n\n- Interpreting ambiguous instructions\n- Choosing a likely-good approach under uncertainty\n- Searching for relevant files and context\n- Drafting code, documentation, or analyses\n- Debugging by iterating against tool results\n- Summarizing and compressing evidence\n- Proposing candidate solutions and tradeoffs\n\n### **The middle zone (where architecture matters)**\n\nMany tasks are mixed. Examples:\n\n- “Fix the failing tests”\n- “Refactor safely”\n- “Ship feature X within standards”\n\nIn these cases:\n\n- **Symbolic logic** should define the envelope (constraints + gates + budgets)\n- **The harness** should do the exploration inside that envelope\n- Both sides should be able to invoke symbolic rules via tools/tasks so nothing is “guesswork”\n\n---\n\n## 5) Interleaving (stack depth, not time)\n\nInterleaving means **nesting**: within a single top-level run, the orchestrator can enter a harness work session, and inside that session the harness can invoke symbolic tools (and sometimes spawn smaller sub-sessions). In the other direction, the orchestrator can call symbolic tasks (validators, analyzers) as part of the same step.\n\n### One clear nested flow\n\n1. Orchestrator frames a step: objective, constraints, budgets, required evidence\n2. Orchestrator may call symbolic tasks to prepare or constrain the work (compute allowed scope, run analyzers, preflight checks)\n3. Harness runs a bounded work session: explores, edits, runs commands/tools, iterates\n4. During the session, the harness calls symbolic tools to stay aligned with rules:\n   - “Is this change allowed?”\n   - “What checks are required in this stage?”\n   - “Does this evidence satisfy the gate?”\n5. Harness returns **artifact + evidence + status**\n6. 
Orchestrator validates gates (often via symbolic tasks) and decides: **advance, retry, fork, or request approval**\n\n### 4 common nesting patterns\n\n- **Orchestrator → Harness (bounded work)**  \n  Orchestrator delegates adaptive work (implement, fix, refactor) within a strict envelope.\n\n- **Harness → Symbolic Tool (rule consultation)**  \n  Harness consults deterministic logic instead of guessing policies and constraints.\n\n- **Harness → Symbolic Tool → Harness (work → check → repair)**  \n  Harness checks gates/policies mid-session and immediately performs targeted repairs.\n\n- **Orchestrator → Symbolic Task → Orchestrator (evidence verification)**  \n  Orchestrator invokes validators/analyzers to produce pass/fail evidence before proceeding.\n\n### Rules that keep nesting safe and understandable\n\n- Every harness session is **bounded** (budget + stop condition)\n- Symbolic checks return **explicit outcomes** (pass/fail + reasons)\n- Harness outputs are **structured** (artifact + evidence + status)\n- Limit delegation depth: prefer small focused sessions over one huge autonomous run\n\n---\n\n## 6) Guardrails: the system’s safety and containment layer\n\nGuardrails are not a single feature. 
They are a **layered approach**.\n\n### A) **Capability guardrails (what actions are possible)**\n\n- tool allowlists (only these tools exist)\n- path/working-dir restrictions (only operate inside certain folders)\n- network restrictions (no network, or allow only specific hosts)\n- read-only vs write permissions\n- destructive actions require explicit confirmation\n\n### B) **Budget guardrails (how far actions can go)**\n\n- max tool calls per step\n- max wall-clock time per run/phase\n- max token spend per phase\n- rate limits for expensive operations\n\n### C) **Policy guardrails (what actions are allowed)**\n\n- “never exfiltrate secrets”\n- “never modify prod directly”\n- “always run tests before merge”\n- “security scans required for dependencies”\n\n### D) **Behavioral guardrails (how decisions are made)**\n\n- require structured outputs for decisions\n- require citing evidence (tool output references)\n- require explicit uncertainty (“I’m not sure; need X to proceed”)\n\nGuardrails should be enforced by **symbolic logic** even if they are reasoned about agentically.\n\nIn practice that symbolic enforcement may appear as:\n\n- orchestrator process rules (hard stops)\n- symbolic tools callable by the harness (“is this allowed?”)\n- symbolic tasks invoked by orchestration (validators, scanners)\n\n---\n\n## 7) Quality gates: turning agentic work into reliable outcomes\n\nQuality gates are how you convert “it seems done” into **“it is done.”**\n\n### Common gated steps\n\n- unit tests, integration tests\n- lint, formatting\n- type checking\n- static analysis, security scans\n- reproducibility checks (clean run in fresh environment)\n- diff review rules (no touching certain files)\n- performance thresholds\n\n### A useful mental model\n\nEach phase should end with:\n\n- **Artifact:** the work product (patch, doc, config, report)\n- **Evidence:** proof that it meets requirements (logs, test output, checks)\n\n**If you don’t have evidence, you don’t have 
completion.**\n\n### Where gates live (symbolic, reusable)\n\nGates are symbolic logic and should be consistent everywhere:\n\n- the orchestrator uses them to decide phase progression\n- the harness can call them as symbolic tools to pre-check during work\n- the orchestrator can run them as symbolic tasks to verify evidence objectively\n\n### “Quality gates” are also where humans belong\n\nFor high-impact steps, include explicit checkpoints such as:\n\n- “approve the plan before execution”\n- “approve the diff before merge”\n- “approve the deployment”\n\nThese are not signs of weakness. They are how you keep autonomy productive.\n\n---\n\n## 8) Prompt quality is determinism engineering\n\nIn a two-loop system, prompts are not just text. They function like **configuration** for the harness.\n\n### Why prompt quality matters\n\nBetter prompts reduce:\n\n- output variance\n- tool misuse\n- hidden assumptions\n- inconsistent formatting\n- unpredictable branching\n\nThis improves:\n\n- repeatability\n- debuggability\n- fork comparisons\n- safe automation\n\n### The real goal is structural consistency\n\nYou don’t need identical wording. 
You need consistent:\n\n- decision formats\n- priorities\n- stop/ask conditions\n- evidence standards\n\n### Prompt versioning is essential\n\nTreat harness prompts like a real engineering surface:\n\n- version them\n- log them\n- regression-test them\n- compare them across forks\n\nThis is how prompt iteration becomes systematic rather than chaotic.\n\n---\n\n## 9) The journal: making hybrid execution testable\n\nA journaled control plane turns agentic behavior into something you can:\n\n- replay\n- inspect\n- diff across forks\n- audit\n- analyze for failure patterns\n\n### What must be journaled (conceptually)\n\n- inputs and signals\n- stage transitions\n- requested actions and results\n- artifacts produced\n- evidence and gate outcomes\n- approvals, rejections\n\nThis is the foundation for time travel debugging and safe branching.\n\n---\n\n## 10) A concrete workflow example (conceptual)\n\nScenario: “Implement feature X safely”\n\nOrchestrator defines the process:\n\n- Understand scope\n- Plan\n- Implement\n- Validate\n- Review\n- Finalize\n\nEach phase has:\n\n- allowed actions\n- budget\n- required evidence\n- symbolic checks (gates) used consistently across the system\n\nSymbolic tasks/tools help both loops:\n\n- orchestrator may invoke symbolic tasks (preflight checks, analyzers, validators)\n- harness may invoke symbolic tools (policy checks, gate evaluation, scope rules)\n\nHarness performs work inside phases:\n\n- finds relevant files\n- edits code\n- runs tests\n- iterates until passing\n- produces patch + summary\n\nQuality gates enforce outcomes:\n\n- tests pass\n- lint passes\n- no forbidden files changed\n- diff looks safe\n\nThe orchestrator uses gate results to decide whether to advance, retry, request human approval, or fork.\n\n---\n\n## 11) Common failure modes (and fixes)\n\n### 1) **Everything is agentic**\n\nSymptom: unpredictable behavior, hard to debug, inconsistent safety.  
\nFix: move gates, budgets, and invariants into symbolic orchestration.\n\n### 2) **Everything is symbolic**\n\nSymptom: brittle workflows, poor adaptation, high maintenance.  \nFix: delegate fuzzy decisions and exploration to the harness.\n\n### 3) **Hidden state**\n\nSymptom: the harness “remembers” things the system never logged.  \nFix: journal what matters. The system’s truth must be reconstructible.\n\n### 4) **Wide tool surface**\n\nSymptom: tool confusion, increased risk, unpredictable results.  \nFix: keep tools small, stable, and well-described.\n\n### 5) **No explicit evidence requirements**\n\nSymptom: “done” claims without proof.  \nFix: define completion as **artifact + evidence**, enforced by gates.\n\n---\n\n## 12) A simple doctrine for building these systems\n\nIf you define only a few principles, make them these:\n\n1. **The orchestrator owns run progression, journaling, and phase boundaries**\n2. **Symbolic logic owns constraints, permissions, budgets, and gates** (usable as rules + tools + tasks)\n3. **The harness owns adaptive work inside constraints**\n4. **Guardrails are enforced by symbolic checks, not informal intentions**\n5. **Quality is evidence-driven, not assertion-driven**\n6. **Prompts are versioned control surfaces for harness behavior**\n7. **The journal is the source of truth for replay, audit, and forking**\n\n---\n\n## 13) Practical design starting point\n\nIf you’re building from scratch, start here:\n\n1. Define the phases of work (a small symbolic process)\n2. Define effects/tools available in each phase\n3. Add budgets and permissions\n4. Decide quality gates per phase\n5. Add a harness that can do real work (files + terminal + tools)\n6. Journal everything needed for replay and audit\n7. 
Add fork + time travel as first-class operations\n\nIf you do only one thing: **make completion require evidence.**\n\n---\n\n## Closing note\n\nTwo-loop hybrid systems are not about “LLM vs rules.” They are about designing who owns which execution decisions so that:\n\n- the system remains reliable and inspectable\n- the harness remains capable and efficient\n- progress remains measurable\n\nWhen done well, you get autonomy that is **bounded, testable, and steadily improvable**.\n",
    "documents": []
  },
  "outgoingEdges": [],
  "incomingEdges": [
    {
      "from": "page:docs-articles",
      "to": "page:docs-articles-hybrid-agentic-systems-two-loops-control-plane-guide",
      "kind": "contains_page"
    }
  ]
}
