Marketing Studio — Design Spec
Date: 2026-04-28
Status: Approved — ready for implementation planning
Branch: feature/marketing-agents
Client: DreamBorn
---
The marketing operation exists — Nova plans, Harper writes, Jade posts, Rosa designs — but there's no place to actually work. Content lives in Google Drive folders. Justin's "approval" is an exec gate that shows up in an inbox. Pieces that go to his personal LinkedIn and X need his voice on them, not just his sign-off, but the workflow doesn't give him a workspace — it gives him a file to read and a button to click. The loop between what agents produce and what Justin actually wants to publish is broken because there's no studio — no place where strategy gets set, themes get shaped, drafts get edited, and the full week's content pipeline is visible and in-motion at once.
---
Justin operates as a creative director — not a content approver. He sets themes, shapes angles, finishes the voice on pieces that carry his name, and the agents handle everything else at volume. The Studio surfaces ideas, produces drafts, generates images, and publishes automatically. Justin's time in it is judgment and taste, not production. The output is an enormous, always-on content operation across every channel that sounds like him because he was actually involved — not because an agent guessed.
---
- Every piece on Justin's personal channels (LinkedIn, X) has his voice on it before it publishes — no exceptions
- The Studio is the only place Justin does marketing work — no Drive, no inbox, no other tool
- Agent-to-agent communication is machine-readable JSON — no prose workflow documents between agents
- Approved content publishes automatically — zero manual posting steps after approval
- Volume never excuses generic output — the system produces at scale without sounding like a bot
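The machine-readable JSON rule implies a shared message shape between agents. The spec doesn't define one, so the following is purely illustrative; every field name here is a hypothetical placeholder, not the actual protocol:

```json
{
  "from": "nova",
  "to": "harper",
  "task": "draft",
  "channel": "linkedin",
  "theme": "...",
  "asset_ids": ["..."]
}
```

The point of the constraint is that a handoff like this can be validated and routed by code, with no prose workflow document for an agent to misread.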
---
- Justin clicking Approve without editing — just clearing a queue
- Content that opens with "AI is transforming industries…" or any generic AI take
- A piece going out on Justin's LinkedIn that doesn't sound like Justin
- Nova proposing themes Justin would never use
- Content stuck in a queue because something broke and nobody noticed
- Justin going anywhere other than the Studio to do marketing work
---
- 14+ pieces published per week across all 5 channels, consistently
- Justin's active time in the Studio under 2 hours/week
- 100% of approved content auto-publishes with no manual steps
- Zero weeks where a channel goes dark due to pipeline failure
---
Three interconnected systems:
```
BezelBrain             Content Intelligence Layer       Studio
(idea + thinking)  +   (evidence + voice + visual)  →   (the workspace)
       ↓                          ↓                          ↓
Nova retrieves           Rosa retrieves              Justin directs,
concepts before          visuals by concept          edits, approves,
planning                 before generating           publishes
```
---
A unified semantic library. One table, six asset types, one retrieval model. Every asset is stored with a concept embedding (pgvector, text-embedding-3-small, 1536-dim) — same infrastructure as agent_memory and BezelBrain.
| Type | Description | Who adds | Who reads |
|---|---|---|---|
| thinking | Ideas, observations, architecture insights | Justin via BezelBrain | Nova |
| evidence | Stats, data points, research — with source + date + context | Justin via Studio quick-capture | Harper, Jade |
| voice | Justin's own quotes and phrases worth keeping | Justin via Studio | Harper, Jade |
| social_proof | Customer testimonials, outcomes, stories | Justin via Studio | Harper, Jade |
| visual | AI-generated backgrounds, brand elements, Rosa's outputs | Rosa (auto) | Rosa |
| published | Every published piece with engagement data | Ivy (auto) | Nova, Harper |
```sql
create table cil_assets (
  id           uuid primary key,
  client_id    text,                      -- 'dreamborn'
  type         text,                      -- thinking | evidence | voice | social_proof | visual | published
  concept      text,                      -- human-readable description of what this is
  embedding    vector(1536),              -- pgvector — semantic search index
  content      text,                      -- the asset content (stat text, quote, testimonial, etc.)
  metadata     jsonb,                     -- type-specific fields (see below)
  source       text,                      -- URL, person name, event, etc.
  source_date  date,                      -- when the stat/quote was originally from
  tags         text[],
  used_count   int default 0,             -- how many times retrieved + used
  last_used_at timestamptz,
  created_by   text,                      -- 'justin' | agent slug
  created_at   timestamptz default now()
);
```
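With one table and one retrieval model, agent lookup reduces to a single pgvector nearest-neighbor query. A sketch, assuming the table is named `cil_assets` and cosine distance is used (`:query_embedding` stands in for the agent's 1536-dim concept vector):

```sql
-- Illustrative retrieval: top 5 evidence assets nearest the query concept.
select id, concept, content, source, source_date
from cil_assets
where client_id = 'dreamborn'
  and type = 'evidence'
order by embedding <=> :query_embedding
limit 5;
```

As the library grows, an approximate index such as `create index on cil_assets using hnsw (embedding vector_cosine_ops);` keeps this query fast.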
```json
// evidence
{ "source_name": "Gartner", "source_url": "...", "verified": true }

// voice
{ "context": "June workshop", "platform": "spoken | written" }

// social_proof
{ "company": "Acme Co", "person": "John Smith", "role": "VP Supply Chain", "outcome": "..." }

// visual
{ "generation_prompt": "...", "model": "gemini-imagen-3 | higgsfield", "dimensions": "1200x630", "piece_id": "..." }

// published
{ "channel": "linkedin", "published_url": "...", "piece_id": "...", "engagement": {} }
```
Justin pastes a stat, quote, or testimonial into the Studio capture panel. Studio:
1. Auto-detects type (evidence / voice / social_proof) based on content pattern
2. Extracts source, date, attribution where present
3. Embeds via OpenAI → stores in cil_assets
4. Available to all agents in next run