Agent Development

Build background services that read and write data via the Onelist API. Agents automate tasks, enrich content, and extend your knowledge management workflow.

What Are Agents?

Agents are background services that interact with your Onelist data through the API. They run independently, performing automated tasks like:

Content Processing

Extract text from URLs, generate summaries, create representations

Organization

Auto-tag entries, detect duplicates, suggest connections

Retrieval

Generate embeddings, enable semantic search, find related content

Maintenance

Archive old content, compact memories, update metadata

Key Principle

Agents are decoupled from the core application. They communicate exclusively through the REST API, making them portable, scalable, and easy to develop in any programming language.

Agent Architecture

Agents follow a simple API client pattern with a poll-process-write loop:

+------------------+      REST API       +------------------+
|                  | ------------------> |                  |
|     Agent        |   GET /entries      |     Onelist      |
|   (Background    |   POST /entries     |     Server       |
|    Service)      |   PATCH /entries    |                  |
|                  | <------------------ |                  |
+------------------+      JSON           +------------------+
        |
        v
+------------------+
|  External APIs   |
|  (LLMs, Search,  |
|   Embeddings)    |
+------------------+

Agent Loop:
1. POLL: Query for entries needing processing
2. PROCESS: Transform, enrich, or analyze content
3. WRITE: Update entries or create new ones
4. SLEEP: Wait before next iteration
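
In code, this loop is only a few lines. A minimal sketch of the pattern, assuming a placeholder work-queue tag and a process_entry function supplied by the specific agent:

import os
import time
from onelist import Client

client = Client(api_key=os.environ["ONELIST_API_KEY"])

while True:
    # 1. POLL: pull a small batch of entries tagged for this agent
    batch = client.entries.list(tags=["needs:processing"], limit=10)

    # 2-3. PROCESS and WRITE: each agent supplies its own process_entry
    for entry in batch:
        process_entry(entry)  # placeholder for the agent's actual work

    # 4. SLEEP: wait before polling again
    time.sleep(60)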

Authentication Flow

Agents authenticate using API keys passed in the Authorization header:

GET /api/v1/entries HTTP/1.1
Host: api.onelist.my
Authorization: Bearer ol_live_abc123xyz789
Content-Type: application/json
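
If you call the REST API directly instead of using the SDK, the same header works with any HTTP client. A minimal sketch with the requests library (the limit query parameter is an assumption based on the SDK examples below):

import os
import requests

API_KEY = os.environ["ONELIST_API_KEY"]
BASE_URL = "https://api.onelist.my"

response = requests.get(
    f"{BASE_URL}/api/v1/entries",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    params={"limit": 10},  # assumed pagination parameter
    timeout=30,
)
response.raise_for_status()
entries = response.json()  # response shape depends on the API version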

Getting Started

Step 1: Create an API Key

Generate a dedicated API key for your agent from the Onelist dashboard:

  1. Navigate to Settings > API Keys
  2. Click Create New Key
  3. Give it a descriptive name (e.g., "Reader Agent - Production")
  4. Select appropriate permissions (read-only or read-write)
  5. Copy and securely store the key

Security Note

Never commit API keys to version control. Use environment variables or a secrets manager. Create separate keys for each agent to enable fine-grained access control and easy revocation.

Step 2: Install the SDK

Install the official Python SDK or use the REST API directly:

# Python SDK
pip install onelist-sdk
# Or with uv
uv add onelist-sdk

Step 3: Initialize the Client

import os
from onelist import Client

# Initialize with API key from environment
client = Client(
    api_key=os.environ["ONELIST_API_KEY"],
    base_url="https://api.onelist.my"  # or your self-hosted URL
)

# Test the connection
me = client.whoami()
print(f"Connected as: {me.email}")

Python SDK Example

Here's a complete example of an agent that processes bookmarked URLs; the two helper functions it calls (fetch_url_content and generate_summary) are sketched after the listing:

import os
import time
from onelist import Client
from onelist.types import Entry, Representation

client = Client(api_key=os.environ["ONELIST_API_KEY"])

def process_url_entries():
    """Find URL entries without summaries and generate them."""

    # Query for entries that need processing
    entries = client.entries.list(
        entry_type="bookmark",
        tags=["needs:summary"],
        limit=10
    )

    for entry in entries:
        try:
            # Fetch the URL content
            content = fetch_url_content(entry.url)

            # Generate a summary (using your preferred LLM)
            summary = generate_summary(content)

            # Create a summary representation
            client.representations.create(
                entry_id=entry.id,
                mime_type="text/plain",
                role="summary",
                content=summary,
                metadata={
                    "generated_by": "reader-agent",
                    "model": "claude-3-haiku",
                    "generated_at": time.strftime("%Y-%m-%dT%H:%M:%SZ")
                }
            )

            # Update tags to mark as processed
            client.entries.update(
                entry_id=entry.id,
                tags_remove=["needs:summary"],
                tags_add=["has:summary"]
            )

            print(f"Processed: {entry.title}")

        except Exception as e:
            print(f"Error processing {entry.id}: {e}")
            # Optionally add an error tag
            client.entries.update(
                entry_id=entry.id,
                tags_add=["error:summary-failed"]
            )

def main():
    """Main agent loop."""
    print("Reader Agent starting...")

    while True:
        process_url_entries()
        time.sleep(60)  # Poll every minute

if __name__ == "__main__":
    main()
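
The fetch_url_content and generate_summary helpers above are left to you. A minimal sketch of both, assuming plain requests for fetching and the Anthropic SDK (with ANTHROPIC_API_KEY in the environment) for summarization:

import requests
from anthropic import Anthropic

anthropic_client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def fetch_url_content(url: str) -> str:
    """Download a page and return its raw text."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.text

def generate_summary(content: str) -> str:
    """Ask the model for a short summary of the page content."""
    message = anthropic_client.messages.create(
        model="claude-3-haiku-20240307",
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": f"Summarize this page in one short paragraph:\n\n{content[:8000]}",
        }],
    )
    return message.content[0].text

In production you would likely add HTML-to-text extraction (for example with trafilatura or readability) rather than summarizing raw HTML.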

Entry Type Conventions for Agents

When agents create or modify entries, follow these conventions to ensure consistency:

Metadata: source_agent

Always include source_agent in metadata to track which agent created or modified an entry:

client.entries.create(
    entry_type="derived_insight",
    title="Weekly reading summary",
    content="This week you read 12 articles...",
    metadata={
        "source_agent": "librarian",
        "agent_version": "1.2.0",
        "generated_at": "2026-01-28T10:00:00Z",
        "source_entries": ["entry_123", "entry_456", "entry_789"]
    }
)

Common Entry Types

Entry Type         Created By        Purpose
----------         ----------        -------
bookmark           User / Reader     Saved URL with extracted content
note               User              User-authored content
memory             Moltbot / User    Conversation context or learned fact
derived_insight    Librarian         Agent-synthesized content
digest             Librarian         Periodic summary of activity
search_result      Searcher          Cached external search results
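
For example, a Librarian-style agent might create a digest entry following these conventions (a sketch reusing the create call shown above; the field values are illustrative):

client.entries.create(
    entry_type="digest",
    title="Daily digest - 2026-01-28",
    content="Today you saved 5 bookmarks and 3 notes...",
    metadata={
        "source_agent": "librarian",
        "agent_version": "1.2.0",
        "generated_at": "2026-01-28T18:00:00Z"
    }
)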

Processing Tags

Use tags to coordinate work between agents (a small helper sketch follows the two lists below):

Work Queue Tags

  • needs:summary
  • needs:embedding
  • needs:classification
  • needs:compaction

Completion Tags

  • has:summary
  • has:embedding
  • processed:reader
  • error:summary-failed
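
A small helper keeps this tag lifecycle consistent across agents. A sketch built on the update call used throughout this page:

def mark_done(client, entry_id: str, work_tag: str, done_tag: str) -> None:
    """Swap a work-queue tag for its completion tag on one entry."""
    client.entries.update(
        entry_id=entry_id,
        tags_remove=[work_tag],
        tags_add=[done_tag],
    )

# Example: an embedding agent finishing one entry
# mark_done(client, entry.id, "needs:embedding", "has:embedding")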

Best Practices

1. Idempotency

Design operations to be safely retryable. Use unique identifiers and check for existing work before processing:

# Check if already processed before doing work
existing = client.representations.list(
    entry_id=entry.id,
    role="summary"
)

if not existing:
    # Safe to create new representation
    client.representations.create(...)

2. Error Handling

Implement graceful error handling with exponential backoff:

import time
from onelist.exceptions import RateLimitError, APIError

def process_with_retry(entry, max_retries=3):
    for attempt in range(max_retries):
        try:
            return process_entry(entry)
        except RateLimitError:
            wait = 2 ** attempt  # 1, 2, 4 seconds
            time.sleep(wait)
        except APIError as e:
            if e.status_code >= 500:
                time.sleep(5)  # Server error, wait and retry
            else:
                raise  # Client error, don't retry

    # Mark as failed after all retries exhausted
    client.entries.update(
        entry_id=entry.id,
        tags_add=["error:processing-failed"]
    )

3. Rate Limiting

Respect API rate limits and implement throttling:

Plan           Rate Limit      Recommendation
----           ----------      --------------
Free           60 req/min      1 request per second
Pro            600 req/min     10 requests per second
Self-Hosted    Configurable    Based on your infrastructure

For example, throttling client-side with the ratelimit package:

from ratelimit import limits, sleep_and_retry

@sleep_and_retry
@limits(calls=10, period=1)  # 10 calls per second
def make_api_call(client, *args, **kwargs):
    return client.entries.list(*args, **kwargs)

4. Observability

Log all operations for debugging and monitoring:

import time
import structlog

logger = structlog.get_logger()

def process_entry(entry):
    start = time.perf_counter()
    logger.info("processing_entry",
        entry_id=entry.id,
        entry_type=entry.entry_type
    )

    # ... processing logic ...

    elapsed_ms = int((time.perf_counter() - start) * 1000)
    logger.info("entry_processed",
        entry_id=entry.id,
        representations_created=1,
        duration_ms=elapsed_ms
    )

Available Agents

Onelist provides several pre-built agents you can deploy:

Reader Agent

Processes URLs to extract content, generate summaries, and create searchable representations.

Capabilities: URL extraction, summarization, Markdown conversion

Librarian Agent

Organizes your knowledge base by auto-tagging, detecting duplicates, and compacting old memories.

Capabilities: auto-tagging, deduplication, compaction, decay management

Searcher Agent

Generates embeddings for semantic search and maintains the vector index.

Capabilities: embedding generation, vector indexing, similarity search

Notifier Agent

Sends notifications and digests based on entry activity and user preferences.

Capabilities: email digests, webhook triggers, Slack integration

Sync Agent

Synchronizes entries with external services like Notion, Obsidian, and Roam.

Capabilities: Notion sync, Obsidian export, bidirectional sync

Building Your Own Agent

Follow this tutorial to build a custom agent that automatically tags entries based on their content.

Project Structure

my-tagger-agent/
  pyproject.toml
  src/
    tagger/
      __init__.py
      agent.py
      classifier.py
      config.py
  tests/
    test_classifier.py
  Dockerfile
  .env.example

Step 1: Configuration

# config.py
import os
from pydantic_settings import BaseSettings

class Settings(BaseSettings):
    onelist_api_key: str
    onelist_base_url: str = "https://api.onelist.my"
    poll_interval_seconds: int = 60
    batch_size: int = 10
    openai_api_key: str  # For classification

    class Config:
        env_file = ".env"

settings = Settings()

Step 2: Classification Logic

# classifier.py
from openai import OpenAI
from .config import settings

openai_client = OpenAI(api_key=settings.openai_api_key)

TAXONOMY = [
    "work", "personal", "research", "reading",
    "project", "reference", "idea", "task"
]

def classify_content(title: str, content: str) -> list[str]:
    """Use LLM to classify content into taxonomy tags."""

    response = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": f"""Classify the following content into one or more
categories from this list: {', '.join(TAXONOMY)}.
Return only the matching categories as a comma-separated list.
If none match well, return 'uncategorized'."""
            },
            {
                "role": "user",
                "content": f"Title: {title}\n\nContent: {content[:1000]}"
            }
        ],
        temperature=0
    )

    tags_str = response.choices[0].message.content.strip()
    return [f"category:{t.strip()}" for t in tags_str.split(",")]

Step 3: Main Agent Loop

# agent.py
import time
import logging
from onelist import Client
from .config import settings
from .classifier import classify_content

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

client = Client(
    api_key=settings.onelist_api_key,
    base_url=settings.onelist_base_url
)

def process_batch():
    """Process a batch of untagged entries."""

    # Find entries that need classification
    entries = client.entries.list(
        tags=["needs:classification"],
        limit=settings.batch_size
    )

    processed = 0

    for entry in entries:
        try:
            # Get the primary content
            content = entry.content or ""
            if not content and entry.representations:
                content = entry.representations[0].content

            # Classify the content
            suggested_tags = classify_content(entry.title, content)

            # Update the entry
            client.entries.update(
                entry_id=entry.id,
                tags_add=suggested_tags,
                tags_remove=["needs:classification"],
                metadata_merge={
                    "classified_by": "tagger-agent",
                    "classified_at": time.strftime("%Y-%m-%dT%H:%M:%SZ")
                }
            )

            logger.info(f"Tagged entry {entry.id}: {suggested_tags}")
            processed += 1

        except Exception as e:
            logger.error(f"Error processing {entry.id}: {e}")
            client.entries.update(
                entry_id=entry.id,
                tags_add=["error:classification-failed"]
            )

    return processed

def main():
    """Main agent loop."""
    logger.info("Tagger Agent starting...")
    logger.info(f"Polling every {settings.poll_interval_seconds}s")

    while True:
        try:
            processed = process_batch()
            if processed > 0:
                logger.info(f"Processed {processed} entries")
        except Exception as e:
            logger.error(f"Batch error: {e}")

        time.sleep(settings.poll_interval_seconds)

if __name__ == "__main__":
    main()

Step 4: Dockerfile

FROM python:3.12-slim

WORKDIR /app

# Copy the project metadata and source before installing,
# so pip can build the tagger package
COPY pyproject.toml .
COPY src/ src/

RUN pip install .

CMD ["python", "-m", "tagger.agent"]

Step 5: Deploy

# Build and run locally
docker build -t my-tagger-agent .
docker run -d \
  --name tagger-agent \
  -e ONELIST_API_KEY=$ONELIST_API_KEY \
  -e OPENAI_API_KEY=$OPENAI_API_KEY \
  my-tagger-agent

# Or deploy to a cloud service
# Railway, Fly.io, or any container platform

Congratulations!

You've built a custom agent. It will run continuously, polling for entries tagged with needs:classification and automatically categorizing them.

Next Steps