Semantic Intent Layer
Date: 2026-04-23
Status: Design
Related: docs/specs/2026-04-23-brain-architecture.md, docs/execution-model.md

---

Overview

The intent layer sits between natural language expression and RedKey execution. A human (or agent) expresses what they want. The intent layer resolves that to RedKey workflow primitives, executes, and self-heals when APIs break — without human intervention.

This is what makes BezelIQ's "never breaks" guarantee possible.

```
Human/AI expression
        ↓
  Intent Layer ←──── BezelBrain (human judgment)
        ↓      ←──── Agent Brain (system knowledge)
        ↓      ←──── API Library (available capabilities)
RedKey Orchestration (Claire → Engine → Workers)
```

---

Position in Stack

Today, Claire receives raw natural language and classifies it from scratch. The intent layer becomes Claire's upstream — it does the resolution work, Claire confirms and executes. As the intent layer matures, Claire's classification role narrows.

```
Today:  Human → roles.coordinator → Claire (classifies) → workflow
Future: Human → Intent Layer (resolves) → roles.coordinator → Claire (confirms) → workflow
```

Both paths coexist. The intent layer is additive, not a replacement.

---

The New Agent: Intent Resolver

Agent ID: resolver
Role topic: roles.intent (new topic to be created on HCS)
Model: Sonnet — judgment is required
Timer: 30s (same as Engine)

The resolver subscribes to roles.intent. When a message arrives it:

1. Pulls context from BezelBrain + Agent Brain + API Library
2. Resolves the intent to a structured workflow spec
3. Posts to roles.coordinator for Claire to execute

The resolver never executes work directly. It resolves and hands off. Execution stays in the existing layer.
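The hand-off shape can be sketched in Python. This is illustrative only: `build_handoff()` is a hypothetical helper, but its field names mirror the structured workflow spec shown later in the Resolution Process section.

```python
# Hypothetical sketch of the spec the resolver hands to roles.coordinator.
# Field names mirror the workflow spec elsewhere in this doc; the function
# itself is illustrative, not part of the real codebase.

def build_handoff(instruction: str, api: str, confidence: float, reasoning: str) -> dict:
    """Shape the structured workflow spec posted to roles.coordinator."""
    return {
        "workflow_type": "data_query",
        "steps": [{"role": "developer", "instruction": instruction, "api": api}],
        "confidence": confidence,
        "reasoning": reasoning,
    }

spec = build_handoff(
    "Query HubSpot for deals closing this quarter with no contact in 30 days",
    api="hubspot",
    confidence=0.92,
    reasoning="HubSpot is the connected CRM. Matched prior resolution pattern.",
)
```

The resolver only ever emits this spec; everything downstream of it is unchanged RedKey machinery.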

---

Resolution Process

```
1. Receive intent message from roles.intent
   { content: "find all HubSpot deals closing this quarter with no contact in 30 days" }

2. Pull context:
   • BezelBrain: semantic search for relevant corporate knowledge + past decisions
   • Agent Brain: semantic search for relevant resolution patterns + API behavior
   • API Library: semantic search for capable APIs ("CRM", "deal pipeline", "contact history")

3. Check:
   • Which APIs can fulfill this? (HubSpot — authenticated, available)
   • Has this intent been resolved before? (check Agent Brain)
   • Any corporate preferences that apply? (check BezelBrain)

4. Resolve → structured workflow spec:
   {
     workflow_type: "data_query",
     steps: [
       { role: "developer", instruction: "Query HubSpot for deals...", api: "hubspot" }
     ],
     confidence: 0.92,
     reasoning: "HubSpot is the connected CRM. Matched prior resolution pattern."
   }

5. Post spec to roles.coordinator → Claire picks up
```

---

Confidence Model

| Confidence | Action |
|---|---|
| ≥ 0.85 | Execute directly — post to coordinator |
| 0.60 – 0.84 | Present plan to human for confirmation before posting |
| 0.40 – 0.59 | Ask one clarifying question, then re-resolve |
| < 0.40 | Escalate to exec role with explanation |

Confidence is calculated from: semantic match quality + prior resolution history + API availability + BezelBrain alignment.
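The bands in the table map directly onto a routing decision. A minimal sketch, with the thresholds taken from the table and the function name assumed:

```python
# Confidence bands from the table above. Thresholds come from the doc;
# the function and return labels are illustrative.

def route_by_confidence(confidence: float) -> str:
    if confidence >= 0.85:
        return "execute"    # post directly to roles.coordinator
    if confidence >= 0.60:
        return "confirm"    # present plan to human before posting
    if confidence >= 0.40:
        return "clarify"    # ask one question, then re-resolve
    return "escalate"       # hand to exec role with explanation
```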

---

Self-Healing Loop

When a worker posts task.blocked with a schema/API error, the engine escalates to the resolver instead of halting the workflow.

```
Worker posts task.blocked
  { reason: "api_schema_error", detail: "field 'close_date' not found", api: "hubspot" }

↓ Engine detects escalation condition

Resolver receives re-resolution request:
1. Classify failure type: schema_change | auth_expired | deprecated | rate_limit | unknown
2. Query Agent Brain: "HubSpot close_date schema error" → finds prior pattern if exists
3. Re-resolve: inspect HubSpot OpenAPI spec, find correct field name
4. Patch the step instruction with updated field mapping
5. Re-post the step to roles.developer

↓ On success:
  Write to Agent Brain via teach_agent:
  "HubSpot: close_date renamed to expected_close as of [date]"

↓ On failure (unresolvable):
  Post to roles.exec with full context
  Mark workflow step as blocked with human-readable explanation
  Do not halt entire workflow — other independent steps continue
```

Every self-heal writes a learning. The corpus grows with every failure.
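Step 1 of the loop (classifying the failure) can be sketched as a keyword heuristic over the `detail` string. The taxonomy comes from the doc; the keyword rules themselves are illustrative assumptions, not the production classifier (which would lean on model judgment).

```python
# Heuristic sketch: map a task.blocked detail string onto the failure
# taxonomy (schema_change | auth_expired | deprecated | rate_limit | unknown).
# Keyword rules are illustrative assumptions only.

def classify_failure(detail: str) -> str:
    d = detail.lower()
    if "rate limit" in d or "429" in d:
        return "rate_limit"
    if "deprecated" in d:
        return "deprecated"
    if "unauthorized" in d or "token" in d or "401" in d:
        return "auth_expired"
    if "not found" in d or "unknown field" in d or "schema" in d:
        return "schema_change"
    return "unknown"
```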

---

API Library Integration

The resolver has read access to api_library (pgvector on RedKey Supabase).

  • "I need weather data" → Open-Meteo (free, no auth), OpenWeatherMap (key)
  • "I need deal pipeline data" → HubSpot (OAuth, connected), Salesforce (not connected)
  • "I need company filings" → SEC EDGAR (free, no auth)
  • Prefers already-authenticated connections
  • Falls back to free/public APIs when capability is equivalent
  • Flags when no capable API is available — surfaced in resolution output

---

API Library Schema

```sql
CREATE TABLE api_library (
  id uuid PRIMARY KEY DEFAULT gen_random_uuid(),
  api_id text UNIQUE NOT NULL,            -- 'hubspot', 'open-meteo', 'sec-edgar'
  name text NOT NULL,
  description text,
  capabilities text[],                    -- human-readable: ["deal pipeline", "contact history"]
  capabilities_embedding vector(1536),    -- semantic search target
  auth_type text NOT NULL,                -- 'none' | 'api_key' | 'oauth'
  spec_url text,                          -- OpenAPI/Swagger spec URL
  base_url text,
  status text NOT NULL DEFAULT 'active',  -- 'active' | 'degraded' | 'deprecated'
  discovered_by text,                     -- 'api-scout' | 'manual'
  hcs_sequence bigint,                    -- sequence number from library.apis topic
  last_verified timestamptz,
  metadata jsonb,
  created_at timestamptz DEFAULT now()
);

CREATE INDEX api_library_capabilities_idx
  ON api_library USING hnsw (capabilities_embedding vector_cosine_ops);
```
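The resolver's semantic lookup against this table could be a single pgvector query. A sketch of the Python side, assuming the query builder and parameter names; `<=>` is pgvector's cosine-distance operator, and the query embedding would come from the same model used to populate `capabilities_embedding`:

```python
# Illustrative pgvector similarity lookup against api_library.
# The SQL uses the schema above; the function is an assumed helper.

def capability_search_sql(limit: int = 5) -> str:
    """Build a parameterized query ranking active APIs by capability match."""
    return (
        "SELECT api_id, name, auth_type, "
        "       1 - (capabilities_embedding <=> %(query_embedding)s) AS similarity "
        "FROM api_library "
        "WHERE status = 'active' "
        "ORDER BY capabilities_embedding <=> %(query_embedding)s "
        f"LIMIT {limit}"
    )
```

Ordering by raw distance (rather than the derived similarity) lets the hnsw index drive the scan.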

---

API Scout Agent

Continuously discovers new APIs and publishes them to HCS library.apis topic.

Agent ID: api-scout
Role topic: roles.scout (timer-driven, no inbound work items)
Timer: 6 hours

Discovery sources:

  • Public API directories (APIs.guru, RapidAPI public listings)
  • Government data portals (data.gov, EDGAR, census.gov)
  • GitHub repos tagged public-api
  • Hacker News "Show HN" posts with API launches
  • Product Hunt (filter: developer tools, APIs)

On discovery:

1. Fetch OpenAPI spec if available
2. Generate semantic description of capabilities using Haiku
3. Publish to HCS library.apis topic
4. Listener writes to api_library table

HCS message schema:

```json
{
  "type": "api.discovered",
  "api_id": "open-meteo",
  "name": "Open-Meteo",
  "capabilities": ["current weather", "forecast", "historical weather", "air quality"],
  "auth_type": "none",
  "spec_url": "https://open-meteo.com/en/docs",
  "base_url": "https://api.open-meteo.com",
  "discovered_by": "api-scout",
  "client_id": "redkey"
}
```

---

Sentinel Agent

Monitors for competitive signals — specifically the self-healing + HCS combination being published before BezelIQ launches.

Agent ID: sentinel
Timer: 1 hour

Monitored sources:

  • Hacker News: "self-healing workflow", "self-healing API", "HCS agent", "Hedera orchestration"
  • GitHub: new repos combining hedera + orchestration + agent
  • arXiv: papers on self-healing systems, agentic workflow execution
  • USPTO: patent filings on self-healing API/workflow systems
  • TechCrunch, VentureBeat: agent platform launches
  • Product Hunt: new launches tagged automation/workflow/agents
  • Hedera ecosystem: blog posts, developer announcements

On signal:

1. Write to brain_memory with scope = 'corporate', category = 'external', source = 'sentinel'
2. Post alert to Atlas context injection (surfaces at next session start)
3. If high relevance (competitive threat): also write to roles.exec for immediate review
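The fan-out above reduces to a small routing function. A sketch, noting that the doc does not specify a numeric relevance cutoff, so `high_relevance` here is a hypothetical parameter:

```python
# Sketch of sentinel signal fan-out. Every signal lands in brain_memory
# and Atlas context; high-relevance signals also hit roles.exec.
# The 0.8 cutoff is an assumed placeholder, not specified in the doc.

def route_signal(relevance: float, high_relevance: float = 0.8) -> list:
    targets = ["brain_memory", "atlas_context"]
    if relevance >= high_relevance:  # competitive threat
        targets.append("roles.exec")
    return targets
```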

---

Expression Paths

Three ways intent enters the system:

1. Direct (current model)
   Human posts to roles.coordinator. Claire classifies from scratch. No resolver involved. Continues to work as-is.

2. Via BezelBrain
   Human promotes an idea via brain-mcp contribute_to_project(). brain-mcp posts to roles.intent. Resolver picks up with full BezelBrain context already available.

3. Via Atlas
   Atlas has an intent_resolve tool in mcp-atlas.json. Justin describes what he wants in conversation. Atlas calls the tool. Resolver handles it.

---