
Memori

Memori is an open-source, SQL-native memory engine for LLMs, AI agents, and multi-agent systems. It provides persistent, queryable memory on standard SQL databases with a one-line integration.

Introduction

Overview

Memori revolutionizes how large language models (LLMs) handle memory by offering a seamless, open-source solution that integrates persistent storage directly into SQL databases. Unlike traditional approaches that rely on costly vector databases, Memori leverages standard SQL (SQLite, PostgreSQL, MySQL) for storing conversations, extracting entities, and maintaining context across sessions. This enables AI agents to 'remember' interactions, learn from user preferences, and provide more coherent responses without vendor lock-in.

Key Features
  • One-Line Integration: Simply call memori.enable() to hook into any LLM framework, including OpenAI, Anthropic, LangChain, and LiteLLM. No complex setup required.
  • SQL-Native Storage: Memory is stored in user-controlled databases, making it portable, auditable, and queryable via standard SQL. Export as SQLite for easy migration.
  • Cost Efficiency: Achieves 80-90% savings by eliminating the need for specialized vector stores, using intelligent entity extraction and relationship mapping instead.
  • Intelligent Memory Management: Automatically categorizes memories (facts, preferences, skills, rules), prioritizes context, and runs background agents to analyze patterns every 6 hours.
  • Multi-User Support: Handles isolated memory per user in production apps, with examples for FastAPI multi-user setups.
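
Because memory lives in an ordinary SQL database, it can be inspected with plain SQL. The snippet below is an illustrative sketch only: the table name, columns, and data are hypothetical and do not reflect Memori's actual schema.

```python
import sqlite3

# Hypothetical schema for illustration; Memori's real tables differ.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE memories (
        user_id  TEXT,
        category TEXT,   -- e.g. fact, preference, skill, rule
        content  TEXT
    )
""")
conn.execute(
    "INSERT INTO memories VALUES (?, ?, ?)",
    ("alice", "preference", "prefers concise answers"),
)
conn.commit()

# Audit one user's stored preferences directly with standard SQL.
rows = conn.execute(
    "SELECT content FROM memories WHERE user_id = ? AND category = ?",
    ("alice", "preference"),
).fetchall()
print(rows)
```

The same query works unchanged against PostgreSQL or MySQL drivers, which is what makes the stored memory portable and auditable.
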
How It Works

Memori intercepts LLM calls transparently:

  1. Pre-Call Injection: Retrieves relevant memories from the SQL database using a Retrieval Agent (auto mode) or Conscious Agent (one-shot working memory).
  2. LLM Interaction: Injects context into the prompt before forwarding to the provider.
  3. Post-Call Recording: Extracts entities from responses and stores them with full-text search indexes.
  4. Background Processing: Promotes essential memories to short-term storage for faster access.

This architecture supports modes like Conscious (immediate context), Auto (dynamic search), and Combined for optimal performance.
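
The intercept-inject-record loop above can be sketched conceptually as a wrapper around an LLM call. Everything here is a stand-in for illustration: the in-memory list, the keyword-overlap retrieval, and `fake_llm` are not Memori internals.

```python
# Conceptual sketch of the pre-call injection / post-call recording loop.
memory_store = []  # stands in for the SQL-backed memory table

def retrieve_context(prompt):
    """Pre-call injection: pick stored memories that share a word with the prompt."""
    return [m for m in memory_store if any(w in prompt for w in m.split())]

def record_memory(response):
    """Post-call recording: persist the response for future retrieval."""
    memory_store.append(response)

def with_memory(llm_call):
    """Wrap an LLM call so context is injected before and recorded after."""
    def wrapper(prompt):
        context = retrieve_context(prompt)
        augmented = "\n".join(context + [prompt])  # inject context into the prompt
        response = llm_call(augmented)
        record_memory(response)
        return response
    return wrapper

@with_memory
def fake_llm(prompt):
    # Stand-in provider: echoes the last line of the augmented prompt.
    return "echo: " + prompt.splitlines()[-1]

print(fake_llm("hello"))        # no stored context on the first call
print(fake_llm("hello again"))  # second call sees the recorded first response
```

In Memori the retrieval step is handled by the Retrieval or Conscious Agent and the store is a real SQL database, but the control flow is the same: retrieve, inject, forward, record.
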

Supported Integrations
  • Databases: SQLite, PostgreSQL, MySQL, Neon, Supabase.
  • Frameworks: OpenAI, Anthropic, LiteLLM, LangChain, AutoGen, CrewAI, and over 100 models via LiteLLM.
  • Examples: Personal assistants, multi-agent chats, research agents, and demos like a mood-tracking diary or web-search researcher.
Getting Started

Install via pip install memorisdk, initialize with Memori(conscious_ingest=True), and call memori.enable(). Configuration is flexible via environment variables or ConfigManager for production namespaces.
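
Putting those steps together, a minimal quickstart might look like the following. The import path is an assumption based on the calls named above; consult the documentation for the exact API.

```python
# Hedged quickstart sketch: assumes the package installed by
# `pip install memorisdk` exposes Memori as described above.
try:
    from memori import Memori  # import path is an assumption
except ImportError:
    Memori = None  # SDK not installed in this environment

if Memori is not None:
    memori = Memori(conscious_ingest=True)  # initialization flag named above
    memori.enable()  # one-line hook into subsequent LLM calls
```

After enable() is called, subsequent LLM calls through supported frameworks are recorded and augmented automatically.
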

For more, check the documentation, join the Discord, or explore community contributions under the Apache 2.0 license. With over 6,000 GitHub stars, Memori is rapidly becoming a go-to for building stateful AI applications.

Information

  • Website: github.com
  • Authors: GibsonAI
  • Published date: 2025/08/07
