
Searcher

Powers semantic search and RAG across your entire knowledge base. Find anything by meaning, not just keywords.

The intelligence layer that makes River and Moltbot truly understand your data.

What Searcher Does

Searcher uses two-layer retrieval: first searching atomic memories extracted by Reader, then injecting full source content for complete context. This gives you precision and richness in one query.

Memory Embedding

Generates vector embeddings for atomic memories from Reader. Small, focused memories = more precise search results.

Two-Layer Retrieval

Layer 1: Search atomic memories for precision. Layer 2: Inject source entry chunks for full context.

Pre-Storage Similarity Check

Checks new memories against existing ones before storage. Prevents duplicates, links related knowledge.

Hybrid Search

Combines semantic similarity with keyword matching (PostgreSQL FTS) for the best of both worlds.
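To make hybrid search concrete, here is a rough Python sketch of a query that blends pgvector cosine similarity with PostgreSQL full-text ranking. The `memories` table, its columns, and the 0.7/0.3 weighting are illustrative assumptions for this sketch, not Searcher's actual schema or scoring.

```python
# Illustrative hybrid query: semantic similarity (pgvector) blended with
# keyword relevance (PostgreSQL full-text search). Names are hypothetical.
import psycopg  # pip install "psycopg[binary]"

HYBRID_SQL = """
SELECT id,
       content,
       1 - (embedding <=> %(qvec)s::vector) AS semantic_score,
       ts_rank(to_tsvector('english', content),
               websearch_to_tsquery('english', %(qtext)s)) AS keyword_score
FROM memories
ORDER BY 0.7 * (1 - (embedding <=> %(qvec)s::vector))
       + 0.3 * ts_rank(to_tsvector('english', content),
                       websearch_to_tsquery('english', %(qtext)s)) DESC
LIMIT 10;
"""

def hybrid_search(conn: psycopg.Connection, query_text: str, query_embedding: list[float]):
    """Return the top memories ranked by a weighted blend of both signals."""
    qvec = "[" + ",".join(map(str, query_embedding)) + "]"  # pgvector text literal
    with conn.cursor() as cur:
        cur.execute(HYBRID_SQL, {"qvec": qvec, "qtext": query_text})
        return cur.fetchall()
```

A weighted blend is just one simple way to fuse the two signals; approaches like reciprocal rank fusion are also common.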

How Two-Layer Retrieval Works

1. Reader Extracts Memories

When entries arrive, Reader extracts atomic memories - discrete facts, insights, and knowledge units.

2. Searcher Embeds Memories

Searcher generates vector embeddings for each atomic memory (not the full entry). Smaller text = more precise embeddings.

3. Layer 1: Memory Search

When you query, Searcher first finds the most relevant atomic memories. Fast and precise.

4. Layer 2: Source Injection

Searcher then retrieves the full source entries for context. River/Moltbot gets both the precise match and surrounding information.
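Putting the four steps together, a query could look roughly like the sketch below. It assumes a `memories` table of atomic memories with an `entry_id` back-reference and an `entries` table holding the original content; those names, like the `embed` helper, are illustrative and not Searcher's internal API.

```python
# Illustrative two-layer retrieval: search atomic memories first, then pull in
# the full source entries they came from. Schema and names are hypothetical.
from openai import OpenAI
import psycopg

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(text: str) -> list[float]:
    """Embed the query with the same model used for the stored memories."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

def two_layer_retrieve(conn: psycopg.Connection, query: str, k: int = 8):
    qvec = "[" + ",".join(map(str, embed(query))) + "]"
    with conn.cursor() as cur:
        # Layer 1: nearest atomic memories by cosine distance (precision).
        cur.execute(
            """SELECT id, entry_id, content
               FROM memories
               ORDER BY embedding <=> %s::vector
               LIMIT %s""",
            (qvec, k),
        )
        memories = cur.fetchall()
        if not memories:
            return []

        # Layer 2: inject the full source entries for surrounding context.
        entry_ids = list({entry_id for _, entry_id, _ in memories})
        cur.execute("SELECT id, body FROM entries WHERE id = ANY(%s)", (entry_ids,))
        entries = dict(cur.fetchall())

    # Hand both layers to River/Moltbot: precise matches plus their sources.
    return [
        {"memory": content, "source_entry": entries.get(entry_id)}
        for _, entry_id, content in memories
    ]
```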

Reader + Searcher Integration

Searcher works hand-in-hand with Reader to provide intelligent knowledge retrieval.

Reader: Memory Extraction

Breaks entries into atomic memories, detects relationships, suggests tags

Memories passed to Searcher for embedding

Searcher: Embedding & Retrieval

Embeds memories, checks for duplicates, powers semantic search and RAG
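In the product itself this handoff runs through the Elixir/Oban job pipeline listed under Technical Details; the Python sketch below only illustrates the shape of Searcher's side of the exchange: take the atomic memories Reader extracted for an entry, embed them in one batch, and store text and vector together. All names here are hypothetical.

```python
# Illustrative Searcher-side ingest for one entry's Reader-extracted memories.
from openai import OpenAI
import psycopg

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed_and_store(conn: psycopg.Connection, entry_id: int, memories: list[str]) -> None:
    """Embed a batch of atomic memories and store text + vector together."""
    # One batched embedding call covers every memory extracted from the entry.
    resp = client.embeddings.create(model="text-embedding-3-small", input=memories)
    with conn.cursor() as cur:
        for text, item in zip(memories, resp.data):
            vec = "[" + ",".join(map(str, item.embedding)) + "]"
            # The pre-storage similarity check (next section) would run here first.
            cur.execute(
                "INSERT INTO memories (entry_id, content, embedding) VALUES (%s, %s, %s::vector)",
                (entry_id, text, vec),
            )
    conn.commit()
```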

Pre-Storage Similarity Check

Before storing a new memory, Searcher checks for similar existing memories (see the sketch after this list). This helps:

  • Prevent duplicate knowledge from cluttering your base
  • Identify supersedes/refines relationships automatically
  • Keep your knowledge base lean and current
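As a rough illustration of that check, the function below looks up the nearest existing memory for a new embedding and decides what to do. It reuses the hypothetical `memories` table from the earlier sketches, and the 0.95/0.85 thresholds are made-up illustrative values, not Searcher's actual cutoffs.

```python
# Illustrative pre-storage check: skip duplicates, link near-matches, store the rest.
import psycopg

def check_before_store(conn: psycopg.Connection, new_embedding: list[float]):
    """Decide whether a new memory should be stored, skipped, or linked."""
    vec = "[" + ",".join(map(str, new_embedding)) + "]"
    with conn.cursor() as cur:
        # Nearest existing memory by cosine distance.
        cur.execute(
            """SELECT id, 1 - (embedding <=> %s::vector) AS similarity
               FROM memories
               ORDER BY embedding <=> %s::vector
               LIMIT 1""",
            (vec, vec),
        )
        row = cur.fetchone()

    if row is None:
        return ("store", None)                 # knowledge base is still empty
    memory_id, similarity = row
    if similarity >= 0.95:
        return ("skip_duplicate", memory_id)   # near-identical memory already stored
    if similarity >= 0.85:
        return ("link_related", memory_id)     # candidate supersedes/refines relationship
    return ("store", None)
```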

Semantic vs. Keyword Search

Traditional Keyword Search

Query: "budget meeting"
Finds: Only entries containing "budget" AND "meeting"
Misses: "Q2 financial planning discussion" (no exact match)

Semantic Search with Searcher

Query: "budget meeting"
Finds: Entries about budgets, meetings, finances, planning...
Includes: "Q2 financial planning discussion" (semantically related)

Technical Details

Embedding Model: text-embedding-3-small (OpenAI)
Vector Dimensions: 1536
Vector Storage: pgvector (PostgreSQL extension)
Index Type: HNSW (Hierarchical Navigable Small World)
Similarity Metric: Cosine similarity
Job Queue: Oban (Elixir)
Query latency: <100ms
Entries supported: 10M+
Re-embedding on update: Automatic
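Translated into storage terms, the details above could map to a schema like the one sketched below: 1536-dimension vectors in pgvector with an HNSW index built for cosine distance. The table name and connection string are placeholders, not Searcher's real configuration.

```python
# Illustrative pgvector setup matching the specs above. Names/DSN are placeholders.
import psycopg

STATEMENTS = (
    "CREATE EXTENSION IF NOT EXISTS vector",
    """CREATE TABLE IF NOT EXISTS memories (
           id        bigserial PRIMARY KEY,
           entry_id  bigint NOT NULL,
           content   text   NOT NULL,
           embedding vector(1536) NOT NULL  -- text-embedding-3-small output size
       )""",
    # HNSW index with cosine ops keeps `embedding <=> query` fast as the table grows.
    """CREATE INDEX IF NOT EXISTS memories_embedding_idx
           ON memories USING hnsw (embedding vector_cosine_ops)""",
)

with psycopg.connect("postgresql://localhost/knowledge_base") as conn:
    for statement in STATEMENTS:
        conn.execute(statement)
```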

Find Anything by Meaning

Searcher powers the intelligent retrieval that makes River and Moltbot truly understand your knowledge base.