
MemOS is an open-source system focused on providing memory enhancements for large language models (LLMs). Through its memory management and scheduling mechanisms, it helps models store, retrieve, and use contextual information more effectively. MemOS excels at tasks such as multi-hop reasoning, open-domain Q&A, and temporal reasoning, delivering significant performance improvements over traditional models, including a 159% increase in temporal reasoning accuracy. It supports the Linux platform and is easy for developers to integrate and extend, making it suitable for building smarter AI applications. The project is active on GitHub with extensive community support, and developers can contribute via GitHub Issues, Discussions, or Discord.


 

Function List

  • Memory-Augmented Generation (MAG): provides a unified API interface that lets models chat and reason in combination with contextual memories.
  • Modular memory architecture (MemCube): flexibly manages multiple memory types, which developers can customize and extend.
  • Text memory management: supports storage and retrieval of structured or unstructured text knowledge, suitable for rapid knowledge updates.
  • Memory scheduling mechanism: dynamically allocates memory resources to optimize model performance on long-context tasks.
  • Version control and governance: provides access control, traceability, and interpretability of memories to ensure security and compliance.
  • Support for multiple LLM integrations: compatible with mainstream large language models, enhancing their memory capabilities.
  • Community collaboration tools: supports developer contributions and communication through GitHub Issues, Pull Requests, and Discord.
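The modular MemCube idea above can be pictured as a container that keeps several memory types side by side. The class and field names below are purely illustrative (they are not the library's actual API), a minimal sketch of the concept:

```python
from dataclasses import dataclass, field

@dataclass
class MemCube:
    """Illustrative container holding several memory types side by side."""
    text_memories: list = field(default_factory=list)        # plaintext knowledge
    activation_memories: list = field(default_factory=list)  # cached intermediate state
    parametric_memories: list = field(default_factory=list)  # model-weight-level memories

    def add_text(self, content: str) -> None:
        """Store one piece of textual knowledge."""
        self.text_memories.append(content)

cube = MemCube()
cube.add_text("User prefers tech news")
print(len(cube.text_memories))  # 1
```

Because each memory type lives in its own slot, a developer can extend one type (say, adding tags to text memories) without touching the others.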

 

Usage Guide

Installation Process

MemOS currently supports the Linux platform; Windows and macOS may have compatibility issues. The detailed installation steps are as follows:

  1. Clone the repository
    Use Git to clone the MemOS repository locally:

    git clone https://github.com/MemTensor/MemOS.git
    cd MemOS
    
  2. Install dependencies
    Run the following command to install MemOS (editable mode is recommended):

    make install
    

    Ensure that Python and the related dependencies are installed on your system. If you plan to use features based on the transformers library, PyTorch must be installed; a CUDA-enabled build is recommended for GPU acceleration:

    pip install torch
    
  3. Install Ollama (optional)
    If you want to integrate with Ollama, install the Ollama CLI first:

    curl -fsSL https://ollama.com/install.sh | sh
    
  4. Verify installation
    Once installation is complete, verify it by running the sample code or reviewing the documentation at
    docs/api/info or online at https://memos.openmem.net/docs/api/info
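As a quick sanity check (a generic Python sketch, not an official MemOS command), you can confirm that the package is importable in the environment where you ran `make install`:

```python
import importlib.util

def is_installed(package: str) -> bool:
    """Return True if `package` can be found by the import system."""
    return importlib.util.find_spec(package) is not None

# After `make install`, this should print True in the same environment:
print(is_installed("memos"))
```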

Usage

The core functions of MemOS are memory-augmented generation and memory management via API calls. The main functions are described in detail below:

1. Memory-Augmented Generation (MAG)

MemOS provides a unified API interface that allows developers to implement memory-enhanced chat or reasoning by following the steps below:

  • Initializing MemOS
    Import and initialize the MemOS library in your Python environment:

    from memos import MemoryAugmentedGeneration
    mag = MemoryAugmentedGeneration(model="gpt-4o-mini")  # example model
    
  • Add Memory
    Stores user input or contextual information as memory:

    mag.add_memory(user_id="user1", content="User likes tech news")
    
  • Generate a reply
    Call the API to generate a response that combines memories:

    response = mag.generate(query="Recommend some tech news", user_id="user1")
    print(response)
    

    MemOS generates more personalized responses based on stored memories, such as "User likes tech news".
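The add-memory/generate flow above can be mimicked with a minimal, self-contained stand-in (this toy class only illustrates the idea of combining stored memories with a query; it is not the real MemOS implementation, which passes the assembled context to an LLM):

```python
class TinyMAG:
    """Toy memory-augmented generator: prepends stored memories to the query."""

    def __init__(self):
        self.memories = {}  # user_id -> list of memory strings

    def add_memory(self, user_id: str, content: str) -> None:
        """Store one memory for a user."""
        self.memories.setdefault(user_id, []).append(content)

    def generate(self, query: str, user_id: str) -> str:
        """Assemble the user's memories as context for answering the query."""
        context = "; ".join(self.memories.get(user_id, []))
        # A real system would send this context plus the query to an LLM.
        return f"[context: {context}] answer to: {query}"

mag = TinyMAG()
mag.add_memory(user_id="user1", content="User likes tech news")
print(mag.generate(query="Recommend some tech news", user_id="user1"))
```

The key point is that `generate` never sees the raw chat history; it sees whatever memories were stored for that `user_id`, which is what makes responses personalized.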

2. Text memory management

MemOS supports storing and retrieving textual knowledge, which is suitable for rapid updating of knowledge bases:

  • Stored Text Memory
    Use the API to store structured or unstructured text:

    mag.store_text_memory(content="2025 AI trends: memory-augmentation technology", tags=["AI", "trends"])
    
  • Retrieve memories
    Retrieve relevant memories based on a query:

    results = mag.search_memory(query="AI trends", limit=3)
    for result in results:
        print(result["content"])
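A retrieval call like `search_memory` can be approximated with simple keyword scoring. This is a hedged sketch of the concept only; the real MemOS search is more sophisticated than substring matching:

```python
def search_memory(memories, query: str, limit: int = 3):
    """Rank stored entries by how many query words their content contains."""
    words = query.lower().split()
    scored = []
    for entry in memories:
        score = sum(w in entry["content"].lower() for w in words)
        if score > 0:
            scored.append((score, entry))
    # Highest-scoring memories first, truncated to `limit` results.
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [entry for _, entry in scored[:limit]]

store = [
    {"content": "2025 AI trends: memory augmentation", "tags": ["AI", "trends"]},
    {"content": "Cooking recipes for spring", "tags": ["food"]},
]
for hit in search_memory(store, query="AI trends"):
    print(hit["content"])  # only the AI-trends entry matches
```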
    

3. Memory scheduling and optimization

MemOS's MemScheduler manages memory resources dynamically, eliminating the need for developers to manually configure them. By default, the system automatically allocates memory resources based on the type of task (e.g. multi-hop reasoning or temporal reasoning). If you need to customize the scheduling, you can adjust it through the configuration file:

config/scheduler.yaml
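The idea of allocating memory resources by task type can be illustrated with a toy scheduler. The task names and budget fractions below are invented for illustration; consult `config/scheduler.yaml` and the documentation for the real options:

```python
# Hypothetical per-task memory budgets (fraction of the context window).
DEFAULT_BUDGETS = {
    "multi_hop": 0.6,  # multi-hop reasoning needs many linked memories
    "temporal": 0.5,   # temporal reasoning needs ordered history
    "chat": 0.3,       # casual chat needs only recent context
}

def allocate(task_type: str, context_tokens: int) -> int:
    """Return how many tokens of the context to reserve for memories."""
    fraction = DEFAULT_BUDGETS.get(task_type, 0.3)  # fall back to the chat budget
    return int(context_tokens * fraction)

print(allocate("multi_hop", 8000))  # 4800
```

A scheduler along these lines is what lets the system favor memory-heavy tasks without the developer tuning anything by hand.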

4. Community collaboration

MemOS encourages developers to participate in community contributions:

  • Submit a question or feature request: Report bugs or suggest new features on the GitHub Issues page.
  • Contribute code: Submit code improvements via GitHub Pull Requests.
  • Join the discussion: Communicate with developers on GitHub Discussions or the Discord server.

Featured Functions

MemOS excels in temporal reasoning and multi-hop reasoning tasks. For example, in temporal reasoning, developers can test the following:

mag.add_memory(user_id="test", content="October 2024: AI conference held")
mag.add_memory(user_id="test", content="January 2025: new model released")
response = mag.generate(query="What happened after the AI conference?", user_id="test")
print(response)  # Output: new model released

This is achieved through the MemCube architecture, which ensures that memories are accurately retrieved in chronological order.
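Chronological retrieval of this kind can be sketched by timestamping each memory and sorting before answering. This is a toy illustration of the ordering idea, not MemCube's actual mechanism:

```python
from datetime import date

memories = [
    {"when": date(2024, 10, 1), "content": "AI conference held"},
    {"when": date(2025, 1, 1), "content": "New model released"},
]

def events_after(memories, cutoff: date):
    """Return memory contents dated strictly after `cutoff`, oldest first."""
    later = [m for m in memories if m["when"] > cutoff]
    return [m["content"] for m in sorted(later, key=lambda m: m["when"])]

# What happened after the AI conference?
print(events_after(memories, date(2024, 10, 1)))  # ['New model released']
```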

 

Application Scenarios

  1. Personalized AI Assistant
    MemOS can add long-term memory capabilities to AI assistants, remembering user preferences (such as favorite types of news or shopping habits) to provide more accurate recommendations and answers.
    For example, developers can build a chatbot for an e-commerce platform that remembers a user's purchase history to enhance the user experience.
  2. Knowledge Management System
    MemOS' text memory management capabilities are ideal for organizations building internal knowledge bases to quickly store and retrieve documents, reports or technical information.
    For example, technical teams can use MemOS to manage project documents and facilitate cross-departmental collaboration.
  3. Education and Research
    MemOS helps researchers store and analyze experimental data or literature records and supports multi-hop reasoning to answer complex questions.
    For example, students can use MemOS to organize their course notes and quickly retrieve relevant knowledge.

 

FAQ

  1. What platforms does MemOS support?
    It mainly supports Linux at present; Windows and macOS may have compatibility issues, so a Linux environment is recommended.
  2. Is a specific LLM required?
    MemOS supports a variety of large language models and uses OpenAI's gpt-4o-mini by default; other models can be configured by following the documentation.
  3. How can I get involved in MemOS development?
    You can report issues via GitHub Issues, submit Pull Requests to contribute code, or join the Discord community discussion.
  4. What are the performance benefits of MemOS?
    Compared with traditional models, MemOS improves temporal reasoning accuracy by 159%, raises overall accuracy by 38.98%, and reduces token consumption by 60.95%.