Your AI memory curator. Extracts atomic memories, detects relationships, suggests tags, and compacts knowledge over time.
Inspired by Supermemory's approach to intelligent memory optimization.
Reader breaks down your entries into discrete, self-contained memory units. Each atomic memory captures a single fact, insight, or piece of knowledge that can be independently retrieved, related, and evolved over time.
Breaks entries into discrete, self-contained memory units. Each memory is independently retrievable and embeddable by Searcher.
Links memories to their source entries and related content. Maintains provenance so you always know where knowledge came from.
Identifies when new memories supersede or refine existing ones. Keeps your knowledge base current without losing history.
Generates concise summaries of entries and memory clusters. Creates scannable representations for quick review.
Analyzes content and suggests relevant tags from your existing taxonomy. Learns your categorization patterns over time.
Periodically consolidates redundant memories. Merges duplicates, archives superseded facts, keeps your knowledge base lean.
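Taken together, these capabilities all operate on a single memory record that carries its content, provenance, relationships, and tags. A minimal sketch, with illustrative field names rather than the actual schema:

```typescript
// Illustrative shape of an atomic memory record (not the actual schema).
interface AtomicMemory {
  id: string;
  content: string;           // a single self-contained fact or insight
  sourceEntryId: string;     // provenance: the entry this memory came from
  supersedes?: string;       // id of an older memory this one replaces
  refines?: string;          // id of a memory this one adds detail to
  tags: string[];            // suggested from your existing taxonomy
  status: "active" | "superseded" | "archived";
  createdAt: Date;
}
```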
Reader evaluates content value and detects low-quality articles: clickbait, thin content, and promotional fluff. Helps you maintain a high-quality knowledge base.
Sensational titles, thin content that doesn't deliver
Minimal information, mostly filler text
Primarily advertising or promotional material
Reposts with little added value
"10 AMAZING Ways to Boost Productivity" from buzzfeed.com
Generate spoken versions of your content. Reader intelligently removes fluff and can create different audio formats: full article, summary, or key points only.
Complete content, cleaned for speech
Key content, fluff removed
Just the essential takeaways
Before converting to audio, Reader removes fluff from the text.
Powered by OpenAI TTS and ElevenLabs voice synthesis
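A request for audio generation might look roughly like the following sketch; the function, option names, and provider identifiers are placeholders, not the actual API:

```typescript
// Hypothetical client call; names and options are illustrative.
type AudioFormat = "full" | "summary" | "key_points";

interface AudioRequest {
  entryId: string;
  format: AudioFormat;                    // full article, summary, or key points only
  provider: "openai_tts" | "elevenlabs";  // voice synthesis backend
  voice?: string;
}

async function requestAudio(req: AudioRequest): Promise<{ jobId: string }> {
  // In a real system this would enqueue a TTS job; here we just echo a fake id.
  return { jobId: `audio-${req.entryId}-${req.format}` };
}

// Example: key points only, synthesized with ElevenLabs.
requestAudio({ entryId: "entry_123", format: "key_points", provider: "elevenlabs" })
  .then((r) => console.log(r.jobId));
```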
When a new entry is created or updated, Reader is triggered via the job queue. Works with entries from any source: manual input, Feeder imports, or API.
Reader analyzes the entry content and extracts atomic memories - discrete facts, insights, or pieces of knowledge that stand alone.
Each memory is compared against existing memories. Reader identifies supersedes relationships (the new memory replaces an old one) and refines relationships (the new memory adds detail to an old one).
Memories are stored with source references. Searcher then generates embeddings for each memory, enabling semantic retrieval.
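Assuming hypothetical helper names, the four steps could be wired together like this minimal sketch:

```typescript
// Illustrative pipeline; every function here is a simplified stand-in, not the real implementation.
interface Entry { id: string; content: string; }
interface Memory { id: string; content: string; sourceEntryId: string; }
interface Relations { supersedes: string[]; refines: string[]; }

// Stub: split the entry into sentences as stand-in "atomic memories".
async function extractAtomicMemories(entry: Entry): Promise<Memory[]> {
  return entry.content
    .split(/(?<=[.!?])\s+/)
    .filter((s) => s.trim().length > 0)
    .map((s, i) => ({ id: `${entry.id}-m${i}`, content: s.trim(), sourceEntryId: entry.id }));
}

// Stubs for relationship detection, storage, and the embedding hand-off to Searcher.
async function findRelations(memory: Memory): Promise<Relations> { return { supersedes: [], refines: [] }; }
async function storeMemory(memory: Memory, rel: Relations): Promise<void> { console.log("stored", memory.id, rel); }
async function enqueueEmbedding(memoryId: string): Promise<void> { console.log("embed", memoryId); }

// Triggered from the job queue whenever an entry is created or updated.
export async function onEntryUpserted(entry: Entry): Promise<void> {
  const memories = await extractAtomicMemories(entry); // step 2: extract atomic memories
  for (const memory of memories) {
    const relations = await findRelations(memory);     // step 3: supersedes / refines detection
    await storeMemory(memory, relations);              // step 4: store with source reference
    await enqueueEmbedding(memory.id);                 // Searcher generates the embedding
  }
}
```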
"Met with Sarah about Q2 budget. She mentioned the marketing team needs $50K for the new campaign. We agreed to move the deadline to March 15th. Also discussed hiring - she wants to add 2 developers by April."
Reader and Searcher work together to provide intelligent retrieval. Reader extracts memories; Searcher embeds them for semantic search.
Query hits atomic memories (small, fast, precise)
Full entry content injected for complete context
This approach gives you the precision of atomic search with the richness of full-context retrieval.
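A minimal sketch of that two-stage flow, with stand-in search and fetch functions:

```typescript
// Illustrative two-stage retrieval; the search and fetch functions are simplified stand-ins.
interface MemoryHit { memoryId: string; sourceEntryId: string; score: number; }

async function searchMemories(query: string): Promise<MemoryHit[]> {
  // Stand-in for Searcher's semantic search over atomic memory embeddings.
  return [{ memoryId: "m1", sourceEntryId: "entry_42", score: 0.91 }];
}

async function fetchEntry(entryId: string): Promise<string> {
  // Stand-in for loading the full source entry.
  return "...full entry content...";
}

export async function retrieve(query: string): Promise<string[]> {
  const hits = await searchMemories(query);                     // stage 1: precise atomic hits
  const entryIds = [...new Set(hits.map((h) => h.sourceEntryId))];
  return Promise.all(entryIds.map(fetchEntry));                 // stage 2: inject full entries for context
}
```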
Over time, knowledge bases accumulate redundant and outdated information. Reader periodically runs compaction to keep things lean.
Duplicate memories combined into one
Superseded memories moved to history
Low-relevance memories deprioritized
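A compaction pass could look roughly like this sketch; the duplicate check, threshold, and relevance handling are simplified stand-ins for whatever Reader actually does:

```typescript
// Illustrative compaction pass over stored memories; thresholds and helpers are assumptions.
interface StoredMemory {
  id: string;
  content: string;
  status: "active" | "superseded" | "archived";
  relevance: number; // e.g. 0..1, decaying as a memory goes unused
}

export function compact(memories: StoredMemory[]): StoredMemory[] {
  const seen = new Set<string>();
  const result: StoredMemory[] = [];

  for (const m of memories) {
    // Archive superseded memories instead of deleting them, so history is preserved.
    if (m.status === "superseded") {
      result.push({ ...m, status: "archived" });
      continue;
    }
    // Merge exact duplicates by normalized content (a real pass would use semantic similarity).
    const key = m.content.trim().toLowerCase();
    if (seen.has(key)) continue;
    seen.add(key);
    // Deprioritize low-relevance memories by further reducing their relevance score.
    result.push(m.relevance < 0.2 ? { ...m, relevance: m.relevance / 2 } : m);
  }
  return result;
}
```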
Reader transforms raw information into structured, searchable, evolving knowledge.