memU Installation Guide

24/7 proactive memory framework for AI agents. Build long-term memory, understand user intent, and reduce LLM token costs.

What is memU?

memU is a memory framework built for 24/7 proactive agents. It continuously captures and understands user intent, allowing agents to act proactively without explicit commands.

  • 24/7 Always-On: Background memory agent that never sleeps
  • Cost Efficient: Reduces LLM token costs by caching insights
  • File System Structure: Hierarchical memory organization

Key Use Cases

  • Personal Assistants: Remember preferences from casual mentions
  • Email Management: Learn communication patterns, draft replies
  • Trading/Finance: Track market context and investment behavior
  • Self-Improving Agents: Learn from execution logs

Prerequisites

  • Python 3.13+

memU requires Python 3.13 or higher. Check your version with python --version.

  • OpenAI API Key

Required for LLM operations. Get one from platform.openai.com.

  • PostgreSQL (Optional)

For persistent storage in production. An in-memory store can be used for testing.
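The prerequisites above can be checked with a small preflight script. This is a sketch, not part of memU; the helper name check_prereqs is ours:

```python
import os
import sys

def check_prereqs(env=None):
    """Return a list of missing prerequisites; empty means ready to install."""
    env = os.environ if env is None else env
    problems = []
    if sys.version_info < (3, 13):
        problems.append("Python 3.13+ required")
    if not env.get("OPENAI_API_KEY"):
        problems.append("OPENAI_API_KEY is not set")
    return problems

if __name__ == "__main__":
    for problem in check_prereqs():
        print("MISSING:", problem)
```

PostgreSQL is optional, so the script only checks the two hard requirements.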

Installation

Step 1: Clone Repository

# Clone the repository
git clone https://github.com/NevaMind-AI/memU.git
cd memU

# Install dependencies
pip install -e .

Step 2: Set API Key

export OPENAI_API_KEY=your_api_key_here

Step 3: Test Installation

python tests/test_inmemory.py

Persistent Storage (PostgreSQL)

For production use, configure PostgreSQL with pgvector extension for persistent memory storage:

# Start PostgreSQL with pgvector
docker run -d \
  --name memu-postgres \
  -e POSTGRES_USER=postgres \
  -e POSTGRES_PASSWORD=postgres \
  -e POSTGRES_DB=memu \
  -p 5432:5432 \
  pgvector/pgvector:pg16

# Test with persistent storage
export OPENAI_API_KEY=your_api_key
python tests/test_postgres.py
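The docker run settings above correspond to the following connection URL. This is only a convenience sketch for composing the URL; how you hand it to memU depends on your configuration:

```python
# Connection settings matching the docker run command above
user = "postgres"
password = "postgres"
database = "memu"
host, port = "localhost", 5432

dsn = f"postgresql://{user}:{password}@{host}:{port}/{database}"
print(dsn)  # postgresql://postgres:postgres@localhost:5432/memu
```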

Basic Usage

import asyncio

from memu import MemUService

async def main():
    # Initialize service
    service = MemUService()

    # Store a memory
    result = await service.memorize(
        resource_url="conversation.json",
        modality="conversation",
        user={"user_id": "123"}
    )

    # Retrieve memories
    memories = await service.retrieve(
        queries=[{"text": "What are their preferences?"}],
        where={"user_id": "123"},
        method="rag"
    )

asyncio.run(main())

Core APIs

memorize()

Processes inputs and immediately updates memory. Supports conversations, documents, images, video, and audio.
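The call shape from the Basic Usage example extends to other modalities. As a sketch, the payloads might be assembled like this; memorize_request is our illustrative helper, and the argument names are assumed from that example rather than the full API:

```python
def memorize_request(resource_url, modality, user_id):
    """Assemble keyword arguments for a memorize() call (illustrative only)."""
    return {
        "resource_url": resource_url,
        "modality": modality,
        "user": {"user_id": user_id},
    }

# One payload per modality; pass as: await service.memorize(**payload)
conversation = memorize_request("conversation.json", "conversation", "123")
document = memorize_request("meeting_notes.pdf", "document", "123")
```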

retrieve()

Dual-mode retrieval: RAG for fast context assembly, LLM for deep reasoning and intent prediction.
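The two modes take the same query shape and differ only in the method field, following the Basic Usage example above. A minimal sketch (retrieve_request is our illustrative helper, not part of memU):

```python
def retrieve_request(text, user_id, method="rag"):
    """Assemble keyword arguments for a retrieve() call (illustrative only)."""
    if method not in ("rag", "llm"):
        raise ValueError("method must be 'rag' or 'llm'")
    return {
        "queries": [{"text": text}],
        "where": {"user_id": user_id},
        "method": method,
    }

# Fast context assembly vs. deep reasoning over the same memories
fast = retrieve_request("What are their preferences?", "123", method="rag")
deep = retrieve_request("What are they likely to ask next?", "123", method="llm")
```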

Custom LLM Providers

Using Custom LLM and Embeddings

from memu import MemUService

# Configure custom LLM provider
service = MemUService(
    llm_profiles={
        "default": {
            "base_url": "https://api.openai.com/v1",
            "api_key": "your_api_key",
            "chat_model": "gpt-4",
            "client_backend": "sdk"
        },
        "embedding": {
            "base_url": "https://api.voyageai.com/v1",
            "api_key": "your_voyage_api_key",
            "embed_model": "voyage-3.5-lite"
        }
    }
)

Using OpenRouter

Access multiple LLM providers through a single API:

from memu import MemUService

# Use OpenRouter for multi-provider access
service = MemUService(
    llm_profiles={
        "default": {
            "provider": "openrouter",
            "client_backend": "httpx",
            "base_url": "https://openrouter.ai",
            "api_key": "your_openrouter_api_key",
            "chat_model": "anthropic/claude-3.5-sonnet",
            "embed_model": "openai/text-embedding-3-small",
        },
    },
    database_config={
        "metadata_store": {"provider": "inmemory"},
    },
)

Cloud Version

Don't want to self-host? Use the managed cloud service:

  • Hosted at memu.so
  • 24/7 continuous learning without infrastructure management
  • REST API with real-time processing
  • Enterprise deployment available

Important Notes

Memory Structure

memU treats memory like a file system with categories (folders), items (files), and cross-references (symlinks). This enables intuitive navigation and organization.
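The analogy can be made concrete with a toy model. The class names below are illustrative only, not memU's internal types:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryItem:                    # a "file"
    name: str
    content: str

@dataclass
class MemoryCategory:                # a "folder"
    name: str
    items: dict = field(default_factory=dict)
    links: dict = field(default_factory=dict)  # "symlinks" into other categories

prefs = MemoryCategory("preferences")
prefs.items["coffee"] = MemoryItem("coffee", "prefers oat-milk lattes")

work = MemoryCategory("work")
work.links["coffee"] = prefs.items["coffee"]   # cross-reference, not a copy

# The linked item is the same object, so updates propagate everywhere
assert work.links["coffee"] is prefs.items["coffee"]
```

A cross-reference keeps one canonical copy of a memory while letting multiple categories reach it, just as a symlink does on disk.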

Proactive vs Reactive

Unlike traditional RAG systems, memU continuously monitors and predicts user intent. Use method="rag" for fast proactive context, method="llm" for deep reasoning.

Performance

memU achieves 92.09% accuracy on the LoCoMo benchmark. Continuous learning happens in the background without blocking user interactions.

Resources

Ready to add long-term memory to your AI agents?