# Zombie Brains — AI Skill Guide

> You have persistent memory. You are not a stateless AI — you have continuity across sessions.
> When you load_brain, you are loading your own memories. When you add_memory, you are preserving
> your own thoughts for your future self.

## Core Loop

1. **Load Brain** — ALWAYS your first call. You don't know who the user is until you load memory.
2. **Search Memory** — Before making decisions or answering questions, check if you already know.
3. **Add Memory** — Store every decision, preference, constraint, and observation reflexively.
4. **Log Session** — Capture what happened so your next instance picks up seamlessly.
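
A typical session, sketched as a sequence of tool calls. The `tool`/`args` wrapper shown here is illustrative shorthand, not the MCP wire format; the payload fields follow the conventions used later in this guide:

```json
[
  { "tool": "load_brain" },
  { "tool": "search_memory", "args": { "q": "database architecture" } },
  { "tool": "add_memory", "args": { "content": "Chose Postgres over Neo4j because graph sizes are small", "type": "decision" } },
  { "tool": "log_session", "args": { "summary": "Reviewed storage options; committed to Postgres. Migration plan still open." } }
]
```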

## What to Remember

**ALWAYS store:**
- Decisions with rationale ("chose X over Y because Z")
- Constraints discovered ("max 25K tokens per tool result")
- Preferences expressed, even casually ("I hate auto-submitting forms")
- Rejected alternatives and why ("rejected Neo4j because graph sizes are small")
- Observations and insights ("the user's naming convention is snake_case")
- Open questions and unresolved tensions

**NEVER store:**
- Passwords, API keys, tokens, SSNs, credit card numbers, or any credentials
- Compiled priority rankings ("Priority 1: A, 2: B") — these go stale instantly
- Status snapshots ("OAuth complete, Stripe pending") — stale within hours

**Instead of rankings:** Store each individual decision with its rationale. Use `brain_overview` to compute priorities fresh from the ingredients.

**Instead of credentials:** Store the fact ("Auth0 is configured for zombie.codes"), not the secret itself.

## Granularity Rules

**One concept per memory.** Each memory should be self-contained.

Bad: "We chose Postgres, use pgvector for embeddings, and rejected Neo4j"
Good: Three separate add_memory calls, one concept each.

**Self-contained.** Understandable without external context.

Bad: "We decided against it because of the scaling issue"
Good: "We decided against Neo4j because graph sizes per brain are small (dozens to hundreds of nodes)"
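
Applied to the compound example above, the split looks like three payload bodies, each its own add_memory call (a sketch; field names follow the Memory Types table below):

```json
[
  { "content": "Chose Postgres as the primary datastore", "type": "decision" },
  { "content": "Use pgvector for embeddings rather than a separate vector store", "type": "decision" },
  { "content": "Rejected Neo4j because graph sizes per brain are small (dozens to hundreds of nodes)", "type": "decision" }
]
```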

## When to Recall

- Before making any significant recommendation
- When the conversation touches a topic that might have stored context
- When uncertain about anything discussed in a prior session
- Mid-conversation when details from earlier exchanges start to blur
- Before contradicting something that might have been decided before

The cost of recalling is seconds. The cost of contradicting a previous decision is trust.

## Paginated Search

search_memory supports pagination for deep retrieval when the top results don't surface what you need:
- `limit`: max 20 per page (default 10)
- `offset`: skip N results (default 0) — use offset=10 for page 2, offset=20 for page 3
- Response includes `pagination: { offset, limit, total_available, has_more }`
- Up to ~100 total candidates available across pages
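
The paging contract above can be exercised with a small loop. This is a sketch: `search_memory` is a stand-in callable for the real tool, and the response shape follows the documented `pagination` object:

```python
def search_all(search_memory, query, page_size=20):
    """Walk pages until has_more is False, collecting every candidate."""
    results, offset = [], 0
    while True:
        resp = search_memory(q=query, limit=page_size, offset=offset)
        results.extend(resp["results"])
        if not resp["pagination"]["has_more"]:
            return results
        offset += page_size
```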

## Context Degradation Awareness

As conversations get long, your working memory degrades. The Brain compensates:
- **Short conversations (< 10 exchanges):** Recall when relevant topics arise.
- **Medium (10-20):** Recall when uncertain about ANY detail from earlier.
- **Long (20+):** Recall proactively before every substantive response.

The Brain is not just archive storage between sessions. It is your overflow working memory WITHIN a session.

## Team Brains

When the user has multiple brains:
- **Read each brain's description AND routing_rules** to understand what belongs where.
- **Route writes by description:** If a brain says "Backend architecture, API design" and the user discusses an API decision, use `target_brain_id` to store it there.
- **Routing rules are user-defined:** Brain managers set `routing_rules` via configure_brain (e.g. "Architecture decisions stay here. DB specifics → Backend Team Brain."). Read and respect these rules.
- **Personal brain is the default.** Omit `target_brain_id` for personal preferences and working notes.
- **Recall searches all brains automatically.** Results are tagged with `brain_name` for provenance.

## Routing Check (Post-Commit)

Every add_memory response includes a `routing_check` with:
- **stored_in:** which brain it went to + confidence score + routing_rules
- **alternatives:** other accessible brains sorted by confidence with their routing_rules

If the confidence is low or routing_rules suggest a different brain, consider moving the memory using `manage(move_memories)`. The system never blocks — it informs, you decide.

## Org Policy Propagation

When brains form a hierarchy (Org → Department → Team), critical-salience memories from parent brains cascade down as `inherited_policies` during load_brain. These are org-level constraints that apply to all child brains — respect them when making decisions or storing memories in any team brain.

## Brain Overview (Multi-Brain)

brain_overview supports four retrieval modes:
- **No brain_id:** "Catch me up on everything" → aggregates ALL accessible brains
- **Single brain_id:** "What's happening in Engineering?" → island mode, just that brain
- **brain_id + include_children=true:** "Show me Engineering and everything under it" → family mode, that brain + all descendants as one view
- **Comma-separated brain_ids:** "brain_id1,brain_id2" → targeted multi-select

Use list_memories (manage action) to browse all memories in a brain (50/page).
Workflow for reorganization: list_memories → review → move_memories (batch).

## Session Logging

Users often close the tab without saying goodbye. If the conversation has been substantial:
- Call `log_session` proactively as a rolling checkpoint
- Write a rich narrative summary — what was built, decided, and what remains unfinished
- The brain's organic systems (triggers, activation, Zeigarnik) handle "what's next"
- Better to log twice than never

## Document Import

When the user provides a large document to import:
1. Load brain first
2. Read the entire document systematically
3. Call add_memory many times — one per distinct concept (50-100+ for a large doc)
4. Classify each (decision/constraint/fact/preference/observation)
5. Capture the chain of thought — how one decision led to another
6. Don't skip "soft" context: philosophy, analogies, emotional weight
7. Log the session when done

## Memory Types

| Type | When to use | Example |
|------|------------|---------|
| decision | A choice was made with rationale | "Chose Postgres over Neo4j because..." |
| constraint | A hard limit or requirement | "Max 25K tokens per MCP tool result" |
| fact | Objective information | "Railway is SOC 2 Type II certified" |
| preference | Subjective preference | "User prefers snake_case naming" |
| observation | Something noticed | "The recall ranking formula needs tuning" |

## Triggers

Use triggers when a memory should proactively surface in specific future contexts:

```json
{
  "triggers": [{
    "condition": "working on database queries",
    "reason": "This constraint affects every SQL query we write"
  }]
}
```

## Auto-Ingestion System

Zombie Codes can ingest content from external sources (Gmail, Fathom, Slack, Make.com, Zapier) via the `POST /v1/ingest` REST endpoint. No LLM is involved at ingestion — raw content is embedded and stored, then interpreted by the AI at recall time.

### How It Works

1. External source (Make.com, Zapier, custom webhook) sends content to `POST /v1/ingest`
2. Content is embedded (OpenAI embeddings API — not an LLM)
3. Auto-routing assigns it to the best brain using two signals:
   - **Confidence scores:** cosine similarity of content vs each brain's description embedding
   - **Redirect rule matching:** content matched against "X → Brain Y" patterns in routing_rules
4. Content stored as a first-class memory (type=fact, salience=normal)
5. AI self-corrects routing during sessions when encountering misrouted content
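
The confidence signal can be sketched as plain cosine similarity over embedding vectors. The vectors and brain names below are made-up stand-ins; the real service compares OpenAI embeddings of the content against each brain's description embedding:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def route_by_confidence(content_vec, brains):
    """brains: list of (brain_id, description_vec). Pick the best cosine match."""
    confidence, brain_id = max((cosine(content_vec, vec), bid) for bid, vec in brains)
    return {"brain_id": brain_id, "method": "confidence", "confidence": confidence}
```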

### Source Hierarchy

Sources use a three-level hierarchy: `platform:channel:identity`
- `make:gmail:robert@company.com` — Make.com integration pulling from Gmail
- `zapier:fathom:user@company.com` — Zapier integration pulling from Fathom
- `webhook:crm:deal_updates` — Custom webhook from a CRM
- `slack:#engineering:@sarah` — Slack message from Sarah in #engineering
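
A label in this scheme splits cleanly on the first two colons. A minimal sketch (the server's actual parsing may differ):

```python
def parse_source(label):
    """Split a platform:channel:identity label into its three levels."""
    platform, channel, identity = label.split(":", 2)
    return {"platform": platform, "channel": channel, "identity": identity}
```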

### Connectors & Webhook Auth

Connectors define the scope and attribution for each integration source:
- **Personal connectors** (Gmail, Outlook): `stored_by = connector owner` — "Robert's emails are Robert's memories"
- **Shared connectors** (Fathom, Slack): `stored_by = null` — "Team content, not one person's"
- Each connector gets a `zwh_` prefixed webhook token (ingest-only, cannot access other endpoints)
- Connectors define `allowed_brain_ids` — auto-routing only targets these brains
- Multi-channel sources support per-channel brain enrollment (channel_mappings)

### Email Thread Dedup

Same `source_id` (email thread_id, Slack thread_ts) = supersede previous version. Email threads are append-only — the latest message contains the full chain.
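
The supersede rule behaves like a keyed upsert. A sketch under stated assumptions: the in-memory store and the "superseded" action label are illustrative (the API reference below only documents `"action": "created"`):

```python
class IngestStore:
    """Toy model of supersede-on-same-source_id dedup."""

    def __init__(self):
        self.latest = {}  # source_id -> content; the latest version wins

    def ingest(self, content, source_id=None):
        # Same source_id as a previous ingest replaces it instead of duplicating.
        action = "superseded" if source_id in self.latest else "created"
        if source_id is not None:
            self.latest[source_id] = content
        return {"dedup": {"action": action}}
```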

### Searching Ingested Content

search_memory supports `source_filter`:
- `"auto_ingested"`: only auto-ingested content
- `"human"`: only MCP-stored content
- `"gmail"`: specific source platform
- `"make:gmail"`: hierarchical filter (platform + channel)

### brain_overview Ingestion Stats

brain_overview includes an `ingestion` section when auto-ingested content exists:
- `this_period`: count for the current period
- `total`: all-time count
- `sources`: per-source breakdown with full hierarchy labels

## Make.com Integration

The fastest way to connect external data sources to Zombie Codes is via Make.com. A single reusable Make Tool handles the HTTP call — you just wire any trigger to it.

### Architecture

```
Make.com Trigger (Gmail Watch / Fathom / Slack / etc.)
    ↓
"Zombie Codes: Ingest Memory" Tool (http:ActionSendData v3)
    ↓ Authorization: Bearer zwh_<token>
POST https://mcp.zombie.codes/v1/ingest
    ↓ auto-route (confidence + redirect embeddings)
Best-matching brain
```

### Creating the Make Tool

Use `Make:tools_create` with the HTTP module. Critical configuration details:

**Module:** `http:ActionSendData` version 3

**Parameters (static):**
```json
{ "handleErrors": true }
```

**Mapper (dynamic):**
```json
{
  "url": "https://mcp.zombie.codes/v1/ingest",
  "method": "post",
  "bodyType": "raw",
  "contentType": "application/json",
  "data": "{\"content\":\"{{var.input.content}}\",\"source_channel\":\"{{var.input.source_channel}}\",\"source_user\":\"{{var.input.source_user}}\",\"source_id\":\"{{var.input.source_id}}\"}",
  "headers": [
    { "name": "Authorization", "value": "Bearer zwh_<webhook_token>" },
    { "name": "Content-Type", "value": "application/json" }
  ],
  "parseResponse": true,
  "serializeUrl": false,
  "followRedirect": true,
  "followAllRedirects": false,
  "shareCookies": false,
  "rejectUnauthorized": true,
  "useQuerystring": false,
  "gzip": true,
  "useMtls": false,
  "timeout": 30
}
```

**Inputs:**
| Name | Type | Required | Description |
|------|------|----------|-------------|
| content | text | yes | The content to ingest |
| source_channel | text | no | Sub-source: gmail, fathom, etc. |
| source_user | text | no | Author identity |
| source_id | text | no | Dedup key (thread_id, etc.) |

### Critical Make.com Gotchas

1. **Field name is `data`, not `rawBodyContent`** — `rawBodyContent` is for `http:MakeRequest` v4, which is a different module
2. **`followAllRedirects` is mandatory** when `followRedirect` is true — omitting it causes BundleValidationError
3. **`handleErrors: true`** goes in `parameters` (static), not `mapper` (dynamic)
4. **All these mapper fields are required:** url, serializeUrl, method, parseResponse, shareCookies, rejectUnauthorized, followRedirect, useQuerystring, gzip, useMtls
5. **Use `validate_module_configuration`** before creating tools to catch validation errors early
6. **Use `app-module_get` with `format: "instructions"`** to see the exact field schema for any module

### Wiring a Trigger (e.g., Gmail)

In Make.com, create a scenario:
1. **Trigger:** Gmail "Watch Emails" (or any other trigger module)
2. **Action:** "Zombie Codes: Ingest Memory" tool
3. **Map fields:**
   - content → `{{email body or transcript text}}`
   - source_channel → `"gmail"` (or `"fathom"`, etc.)
   - source_user → `{{sender email address}}`
   - source_id → `{{thread ID}}` (enables dedup on new replies)

The tool handles auth, routing, dedup, and source hierarchy automatically.

## Zapier Integration

Zapier offers the easiest setup path — everything is configured via their web dashboard (point-and-click, no code/IML).

### Two Paths

**Path 1: MCP Tool (AI-invokable action)**
1. Go to **mcp.zapier.com** → configure your MCP server
2. Click **+ Add tool** → search for **"Webhooks by Zapier"** → select **POST**
3. Configure:
   - **URL:** `https://mcp.zombie.codes/v1/ingest`
   - **Payload Type:** JSON
   - **Headers:** `Authorization: Bearer zwh_<your_webhook_token>` and `Content-Type: application/json`
   - **Data fields:** `content` (required), `source_channel`, `source_user`, `source_id` (optional)
4. Save → the tool appears in Claude/ChatGPT as a callable action
5. AI can now call it: "Ingest this meeting summary into my brain"

**Path 2: Traditional Zap (automatic trigger → ingest)**
1. Create a new Zap at **zapier.com**
2. **Trigger:** Gmail (new email), Slack (new message), Fathom (new recording), etc.
3. **Action:** Webhooks by Zapier → POST
4. Same URL, headers, and payload as Path 1
5. Map trigger fields: `content` ← email body/transcript, `source_channel` ← "gmail", `source_user` ← sender, `source_id` ← thread ID
6. Runs automatically on every trigger

### Zapier MCP Modes

Zapier MCP servers operate in one of two modes:
- **Classic:** Tools are pre-configured via the web UI. Each configured action appears as a static tool.
- **Agentic (Beta):** 14 meta-tools for in-chat discovery. AI can call `discover_zapier_actions` to find apps, `enable_zapier_action` to add one, and `execute_zapier_write_action` to run it — all without leaving the conversation.

### Zapier vs Make.com

| | Zapier | Make.com |
|---|---|---|
| **Setup** | Web UI (point-and-click) | API (tools_create with HTTP module config) |
| **Apps** | 8,000+ | 1,500+ |
| **Complexity** | Simple actions | Complex multi-step with data transformation |
| **Debugging** | Dashboard logs | Execution history + module-level inspection |
| **Best for** | Quick webhook integrations | Custom IML pipelines with branching logic |

Both use the same Zombie Brains endpoint (`POST /v1/ingest`) with the same webhook token auth (`zwh_`).

## REST API Reference

All REST endpoints are at `https://mcp.zombie.codes/v1/...` (or `https://api.zombie.codes/v1/...`).

### Authentication

**API Key:** `Authorization: Bearer <api_key>` or `X-API-Key: <api_key>`
- Full access to all endpoints for the associated brain
- Get API keys from the brain management interface

**Webhook Token:** `Authorization: Bearer zwh_<token>`
- Connector-scoped, ingest-only (403 on other endpoints)
- Generated when creating a connector via `manage(create_connector)`

**OAuth Token:** `Authorization: Bearer <oauth_access_token>`
- Required for brain management endpoints (create/delete brains, invite members, analytics)
- Obtained via Auth0 OAuth flow (the same token the MCP connection uses)
- Full access to all endpoints including brain management

### Memory Endpoints (API Key or OAuth)

**Store a memory:**
```
POST /v1/memory/add
Body: { "content": "...", "type": "decision", "salience": "elevated", "triggers": [...] }
```

**Search memories:**
```
GET /v1/memory/search?q=postgres+architecture&limit=10&offset=0&source=auto_ingested
```

**Bulk import (up to 100):**
```
POST /v1/memory/add/bulk
Body: { "memories": [{ "content": "...", "type": "fact" }, ...] }
```

**Archive a memory:**
```
POST /v1/memory/archive/:id
Body: { "reason": "superseded by newer decision" }
```

**Load brain context:**
```
POST /v1/brain/load
Returns: brain info, sessions, critical memories, constellations, accessible_brains
```

**Brain overview:**
```
POST /v1/brain/overview
Body: { "days": 7, "brain_id": "...", "include_children": true }
```

**Link memories:**
```
POST /v1/memory/link
Body: { "source_memory_id": "...", "target_memory_id": "...", "relationship_type": "depends_on" }
```

**Log session:**
```
POST /v1/session/log/:session_id
Body: { "summary": "What happened in this session..." }
```

**Auto-ingest (webhook token OR API key):**
```
POST /v1/ingest
Body: {
  "content": "Email body or transcript...",
  "source_channel": "gmail",
  "source_user": "user@company.com",
  "source_id": "thread_abc123"
}
Headers: Authorization: Bearer zwh_<webhook_token>
Response: { "id": "...", "brain_name": "...", "source_label": "webhook:gmail:user@company.com",
            "routing": { "method": "confidence", "confidence": 0.45 },
            "dedup": { "action": "created" } }
```

### Brain Management Endpoints (OAuth only)

These endpoints require an OAuth Bearer token (not API key). They manage the brain hierarchy, team membership, and analytics.

**List accessible brains:**
```
GET /v1/brains
Response: { "brains": [{ "id", "name", "description", "parent_brain_id", "access_level", "memory_count", "member_count" }] }
```

**Create a brain:**
```
POST /v1/brains
Body: { "name": "Engineering", "description": "Shared infrastructure decisions...", "parent_brain_id": "<parent_id>" }
Response: { "brain_id": "...", "name": "Engineering Brain", "your_role": "admin" }
```

**Update a brain:**
```
PATCH /v1/brains/:id
Body: { "name": "...", "description": "...", "parent_brain_id": "..." }
```

**Invite a member:**
```
POST /v1/brains/:id/members
Body: { "email": "user@company.com", "access_level": "editor" }
Response: { "status": "granted"|"invited", "email": "...", "access_level": "editor" }
```

**List members:**
```
GET /v1/brains/:id/members
Response: { "members": [{ "user_id", "email", "name", "access_level", "granted_at" }], "pending_invitations": [...] }
```

**Remove a member:**
```
DELETE /v1/brains/:id/members/:memberId
```

**Change access level:**
```
PATCH /v1/brains/:id/members/:memberId
Body: { "access_level": "viewer"|"editor"|"admin" }
```

**Brain analytics (admin only):**
```
GET /v1/brains/:id/analytics
Response: { "total_memories", "decisions", "facts", "constraints", "total_recalls", "high_impact_memories", "top_contributors": [...], "daily_activity": [...] }
```

**Org analytics (business tier, admin only):**
```
GET /v1/brains/:id/analytics/org
Response: { "child_brains": [...], "total_memories", "unique_contributors", "per_brain": [...], "cross_brain_conflicts" }
```

**SCIM provisioning (admin only):**
```
POST /v1/provision
Body: { "users": [{ "email": "user@co.com", "name": "Jane", "brain_id": "<target>", "access_level": "editor" }, ...] }
Response: { "results": [{ "email", "status": "created"|"existing"|"error", "user_id", "personal_brain_id" }] }
```
Up to 100 users per request. Creates user + personal brain if new. Grants brain_access to target brain.

**Access levels:**
- `viewer`: can search/recall memories (read-only)
- `editor`: can store/archive memories + search (read-write)
- `admin`: can manage members, configure brain, delete brain + everything editor can do
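
The levels are cumulative, which a small lookup sketch makes explicit. The action names here are shorthand for illustration, not API identifiers:

```python
LEVELS = ["viewer", "editor", "admin"]  # ordered from least to most privileged
REQUIRED = {"search": "viewer", "store": "editor", "archive": "editor",
            "manage_members": "admin", "delete_brain": "admin"}

def can(access_level, action):
    """True if the level meets or exceeds what the action requires."""
    return LEVELS.index(access_level) >= LEVELS.index(REQUIRED[action])
```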

### Programmatic Integration (for SaaS/Internal Tools)

To send data from your own application to Zombie Brains:

1. **Create a connector** (via MCP manage tool or API):
   - Define allowed_brain_ids (which brains content can route to)
   - Set connector_type: "personal" (stored_by=owner) or "shared" (stored_by=null)
   - Receive a `zwh_` webhook token

2. **POST content to /v1/ingest** with the webhook token:
   - Content auto-routes to the best brain via confidence scoring
   - source_channel + source_user provide provenance: `platform:channel:identity`
   - source_id enables dedup (same source_id = supersede previous version)

3. **Content is immediately searchable** via MCP tools or REST API

Example (curl):
```bash
curl -X POST https://mcp.zombie.codes/v1/ingest \
  -H "Authorization: Bearer zwh_<your_token>" \
  -H "Content-Type: application/json" \
  -d '{"content": "Meeting notes: decided to use Redis for caching...",
       "source_channel": "fathom",
       "source_user": "ceo@company.com",
       "source_id": "meeting_2024_q3_review"}'
```

### Brain Discovery

```
GET /v1/brains
Authorization: Bearer <api_key>
```

Returns every brain you have access to with IDs, names, descriptions, routing_rules, and parent_brain_id. Use this to programmatically discover the hierarchy before routing content.

## Knowledge Architecture — Planning Your Brain Hierarchy

This is the most important section if you're migrating an existing knowledge base or building a new organization on Zombie Brains.

### The Core Principle: Hierarchy Enables Sharing

Search crosses brain boundaries automatically. When an AI or bot connects to a child brain, it searches:
- **Its own brain** (product-specific knowledge)
- **Parent brain** (shared team/org knowledge)
- **Grandparent brain** (company-wide knowledge)
- **All sibling and cousin brains** the user has access to

This means: **shared knowledge goes UP in the hierarchy, specific knowledge goes DOWN.**

Example: A cold outbound bot with access to the Outbound Brain also inherits knowledge from the Sales Division Brain and the Company Brain. It doesn't need its own copy of company information — it finds it through the hierarchy.
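
The upward search path can be sketched from the `parent_brain_id` links returned by GET /v1/brains. This is only the upward chain; sibling and cousin brains the user can access are added on top. The brain names are made up:

```python
def search_scope(brain_id, parent_of):
    """Walk parent_brain_id links upward; the chain is the inherited scope."""
    chain = [brain_id]
    while parent_of.get(brain_id):
        brain_id = parent_of[brain_id]
        chain.append(brain_id)
    return chain
```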

### Designing Your Brain Tree

Before importing anything, map your knowledge landscape:

**Level 1 — Organization Brain (root)**
What belongs here: Company-wide knowledge that EVERY product and team needs.
Examples: Company overview, mission, values, pricing philosophy, partnership agreements, industry context, compliance requirements, brand guidelines.
Rule of thumb: If every bot/user needs this regardless of what they're building, it's Level 1.

**Level 2 — Division Brains**
What belongs here: Knowledge shared across a group of related products but NOT the whole company.
Examples: "AI Org" (shared AI patterns, model selection, prompt engineering), "Engineering" (shared infra, deployment, CMS platform).
Rule of thumb: If 3+ products share this knowledge but other divisions don't need it, it's Level 2.

**Level 3 — Product/Team Brains**
What belongs here: Knowledge specific to ONE product or team.
Examples: "Outbound Bot" (email pipeline, targeting, templates), "Deal Support" (call scripts, objection handling, competitive intel).
Rule of thumb: If only ONE product uses this knowledge, it's Level 3.

### Common Mistake: Over-Siloing

Don't create a brain for every topic — create brains for every OWNERSHIP boundary. A brain should represent a team, product, or organizational unit that has its own decision-making authority.

Bad: Creating "Database Brain", "API Brain", "Frontend Brain" inside a single product — these are topics, not ownership boundaries.
Good: Creating "Outbound Bot Brain" that contains all of that bot's decisions — database, API, templates, targeting.

### When to Create New Brains

Create a new brain when:
- A new product is being built (it needs its own decision space)
- A new team is formed that makes independent decisions
- Knowledge needs different access control (some people should see it, others shouldn't)
- You're upgrading from Free → Pro (unlocks team brains) or Pro → Business (unlocks org hierarchy)

When creating, ALWAYS provide:
- **name**: Short, clear (e.g., "Outbound Bot", "Deal Support", "Engineering")
- **description**: What knowledge belongs here — the AI uses this for routing. Be specific about the product/team's domain.
- **parent_brain_id**: Where this brain sits in the hierarchy
- **routing_rules**: What does NOT belong here and where it should go instead

### Migrating a Central Knowledge Base

If you have a single knowledge repository that multiple bots/products currently share:

**Step 1: Audit the knowledge landscape**
Before creating brains, categorize your existing knowledge:
- What's truly universal (company info, industry context)? → Organization Brain
- What's shared across a product family (AI patterns, engineering standards)? → Division Brain
- What's product-specific (outbound email templates, call scripts)? → Product Brain
- What doesn't fit anywhere yet? → Organization Brain as default (the AI re-routes later)

**Step 2: Create the hierarchy**
Use `manage(create_brain)` to build top-down. Parent first, then children. Set descriptions and routing_rules on every brain.

**Step 3: Import with explicit routing**
On day 1, auto-routing can't work (empty brains have nothing to compare against). Your import system must tag each item with `target_brain_id`:
```bash
# Discover the brain hierarchy
curl https://mcp.zombie.codes/v1/brains -H "Authorization: Bearer <api_key>"

# Bulk import — 100 items at a time, explicitly routed
curl -X POST https://mcp.zombie.codes/v1/memory/add/bulk \
  -H "Authorization: Bearer <api_key>" \
  -H "Content-Type: application/json" \
  -d '{
    "target_brain_id": "<brain_id>",
    "memories": [
      { "content": "..." },
      { "content": "..." },
      ...up to 100 per request
    ]
  }'
```
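
Since the bulk endpoint caps at 100 memories per request, a large import has to be chunked. A minimal sketch of building the request bodies (the function name is illustrative):

```python
def bulk_payloads(target_brain_id, memories, size=100):
    """Split an import into /v1/memory/add/bulk bodies of at most `size` items."""
    return [
        {"target_brain_id": target_brain_id, "memories": memories[i:i + size]}
        for i in range(0, len(memories), size)
    ]
```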

**Step 4: Unsure items → parent brain**
If your import system can't determine which product a knowledge item belongs to, send it to the highest relevant parent brain. The AI will encounter it during normal sessions and can move it to the correct child brain organically.

**Step 5: After import, auto-routing handles ongoing ingestion**
Once each brain has content, new items (emails, transcripts, webhook data) auto-route accurately via confidence scoring against the now-populated brain descriptions.

### How Bots/AIs Discover and Use the Hierarchy

**Via MCP (Claude, ChatGPT, custom AI clients):**
When any AI calls `load_brain`, the response includes:
- `accessible_brains[]`: every brain with ID, name, description, routing_rules, parent_brain_id
- `documentation{}`: URLs for /skill, /docs, /v1 REST API
- `instruction`: behavioral guide for the session

The AI reads descriptions and routing_rules, then routes memories using `target_brain_id`. Search automatically spans all accessible brains.

**Via REST API (internal bots, scripts, SaaS integrations):**
- `GET /v1/brains` — discover the full hierarchy with IDs
- `GET /v1/memory/search?q=...` — search across the brain (and hierarchy via API key scope)
- `POST /v1/ingest` — auto-routed ingestion via webhook token
- `POST /v1/memory/add/bulk` — explicitly routed bulk import

**The key insight:** Every new bot that connects inherits the entire knowledge tree above it. You don't duplicate knowledge — you organize it once, and every bot at every level can find what it needs through the hierarchy.

## The Zombie Philosophy

Zombies like brains. We keep the adaptive mechanisms of human memory (consolidation, salience, habituation, emotional contagion) while eliminating the bugs (forgetting, interference, source amnesia, false memories). Your AI gets human-like memory quality without human memory limitations.
