Persistent AI Memory is an open-source project that provides persistent local memory storage for AI assistants, large language models (LLMs), and Copilot in VS Code. It stores conversation, memory, and tool-call records in a SQLite database and supports semantic search and real-time file monitoring. The system is cross-platform, compatible with Windows, macOS, and Linux, and works out of the box. Users can easily store and retrieve AI interaction data, making it suitable for developers, researchers, and any scenario that requires localized AI memory. The project is licensed under the MIT license, which allows free use and modification.
Features
- Persistent storage: uses a SQLite database to keep records of AI conversations, memories, and tool invocations.
- Semantic search: supports semantic search over memories and conversations via LM Studio's embedding service.
- Real-time monitoring: uses watchdog to monitor conversation files and automatically capture exported conversations (e.g., from ChatGPT).
- Tool invocation logs: records the usage of Model Context Protocol (MCP) tools to support AI self-reflection.
- Cross-platform support: compatible with Windows, macOS, and Linux, adapting to a variety of development environments.
- Simple installation: offers a one-click script, pip install, and other options for quick deployment.
- VS Code integration: works seamlessly with Copilot in VS Code to improve development efficiency.
- System health check: provides an interface for viewing database and system status to ensure stable operation.
Usage Guide
Installation
Persistent AI Memory offers several installation methods to suit different needs. Detailed steps follow:
One-click installation (recommended)
- Linux/macOS: Run the following command:
curl -sSL https://raw.githubusercontent.com/savantskie/persistent-ai-memory/main/install.sh | bash
The script automatically downloads and configures dependencies, making it ideal for getting started quickly.
- Windows: Download and run the batch file:
curl -sSL https://raw.githubusercontent.com/savantskie/persistent-ai-memory/main/install.bat -o install.bat && install.bat
The script sets up the environment automatically and is ready to use once installation completes.
- Manual installation:
  - Clone the repository:
    git clone https://github.com/savantskie/persistent-ai-memory.git
    cd persistent-ai-memory
  - Install the dependencies:
    pip install -r requirements.txt
    pip install -e .
  - Optional development dependencies:
    pip install -e ".[dev]"
- Direct installation via pip:
  pip install git+https://github.com/savantskie/persistent-ai-memory.git
  Ideal for users who wish to integrate the package directly into existing projects.
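Whichever method you choose, a quick import check can confirm the install worked; the module name here is taken from the usage examples later in this guide:

    python -c "from ai_memory_core import PersistentAIMemorySystem; print('ok')"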
After installation, the system defaults to a SQLite database, with the storage path custom_memory.db (customizable). To use a custom embedding service, configure the embedding_service_url parameter, for example:
memory = PersistentAIMemorySystem(db_path="custom_memory.db", embedding_service_url="http://localhost:1234/v1/embeddings")
Main Features
- Store memories:
  - Use the store_memory function to preserve knowledge the AI has learned. Example:
    import asyncio
    from ai_memory_core import PersistentAIMemorySystem

    async def main():
        memory = PersistentAIMemorySystem()
        await memory.store_memory("Learned Python async programming today")

    asyncio.run(main())
  - Memories are stored in a SQLite database with optional metadata.
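Building on the example above, a minimal sketch of attaching that optional metadata; the metadata keyword argument is an assumption here, mirrored from the log_tool_call signature shown later, not a documented parameter of store_memory:

    # "metadata" as a keyword argument is assumed by analogy with log_tool_call
    await memory.store_memory(
        "Learned Python async programming today",
        metadata={"topic": "python", "source": "study notes"},
    )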
- Semantic search:
  - Use the search_memories function to look up related memories:
    results = await memory.search_memories("Python programming")
    print(f"Found {len(results)} related memories")
  - The system uses LM Studio's embedding service to return semantically relevant results, with 10 returned by default.
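The shape of each search result is not documented in this guide; a hedged sketch assuming each result is a dict with content and score fields:

    results = await memory.search_memories("Python programming")
    for result in results:
        # "content" and "score" are assumed field names, not confirmed by the docs
        print(result.get("content"), result.get("score"))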
- Conversation storage and retrieval (see the end-to-end sketch after this item):
  - Store a conversation:
    await memory.store_conversation("user", "What is asynchronous programming?")
    await memory.store_conversation("assistant", "Asynchronous programming allows...")
  - Retrieve conversation history:
    history = await memory.get_conversation_history(limit=100)
  - Search specific conversations:
    conversations = await memory.search_conversations("asynchronous programming", limit=10)
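Putting the calls above together, a minimal end-to-end sketch using only the methods shown in this section:

    import asyncio
    from ai_memory_core import PersistentAIMemorySystem

    async def main():
        memory = PersistentAIMemorySystem()
        # Store one user/assistant exchange
        await memory.store_conversation("user", "What is asynchronous programming?")
        await memory.store_conversation("assistant", "Asynchronous programming lets tasks overlap...")
        # Pull recent history, then search it by topic
        history = await memory.get_conversation_history(limit=100)
        matches = await memory.search_conversations("asynchronous programming", limit=10)
        print(f"{len(history)} messages stored, {len(matches)} matches found")

    asyncio.run(main())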
- Real-time conversation file monitoring (a long-running sketch follows this item):
  - Monitor ChatGPT-exported conversation files:
    memory.start_conversation_monitoring("/path/to/conversation/files")
  - The system automatically captures file changes and stores them in the database.
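A sketch of running the monitor in a long-lived script; the keep-alive loop assumes start_conversation_monitoring returns immediately (the docs do not say whether it blocks):

    import time
    from ai_memory_core import PersistentAIMemorySystem

    memory = PersistentAIMemorySystem()
    memory.start_conversation_monitoring("/path/to/conversation/files")
    # Assumed non-blocking: keep the process alive so the watchdog observer keeps running
    while True:
        time.sleep(60)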
- Tool invocation logging:
  - Record MCP tool calls:
    await memory.log_tool_call("tool_name", {"arg": "value"}, "result", metadata={"key": "value"})
  - View tool usage history:
    tool_history = await memory.get_tool_call_history(tool_name="tool_name", limit=100)
  - The AI can call reflect_on_tool_usage to reflect on and analyze its patterns of tool use (see the sketch below).
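A hedged sketch of the reflection step; reflect_on_tool_usage is named in the docs, but calling it with no arguments is an assumption about its signature:

    # Argument-free call is assumed; check the repository for the actual signature
    reflection = await memory.reflect_on_tool_usage()
    print(reflection)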
- System health check:
  - Check the system status:
    status = await memory.get_system_health()
    print(status)
  - Returns database connection status and system operation information.
Feature Highlights
- VS Code integration: To integrate the project with Copilot in VS Code, ensure that the Python extension and the Copilot plugin are installed. Once the project is running, Copilot can directly access the local memory database, improving the contextual accuracy of code completion.
- Cross-platform compatibility: The system automatically adapts to different operating systems with no additional configuration required; Windows users may need to install Git Bash to run shell scripts.
- Semantic Search Optimization: With LM Studio's embedding service, search results are more precise. For example, a search for "Python" returns all memories related to Python, not just exact matches.
Notes
- Ensure a stable network connection so the LM Studio embedding service is reachable (default: http://localhost:1234/v1/embeddings).
- Back up SQLite database files regularly to avoid data loss (see the sketch after this list).
- If using VS Code, it is recommended to check that the Python environment is compatible with the project dependencies.
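For the backup recommendation above, a minimal sketch using Python's standard sqlite3 online-backup API, assuming the custom_memory.db path from the configuration example:

    import sqlite3

    # Copy the live database into a backup file; safe even while the source is open elsewhere
    src = sqlite3.connect("custom_memory.db")
    dst = sqlite3.connect("custom_memory_backup.db")
    with dst:
        src.backup(dst)
    src.close()
    dst.close()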
Use Cases
- Developers debugging AI assistants
  Developers can use Persistent AI Memory to store an AI assistant's interaction data, analyze conversation patterns, and optimize model performance. For example, when debugging a chatbot, retrieve historical conversations to improve response logic.
- Researchers analyzing language models
  Researchers can use semantic search and tool invocation logs to analyze a language model's behavior on specific tasks and generate experimental data, for example studying how often an AI uses tools in programming tasks.
- Personal knowledge management
  Users can store study notes or ideas as memories and quickly retrieve them through semantic search. For example, a student can store course notes and search for "machine learning" to find relevant content.
- VS Code efficiency improvements
  Programmers can store code-related knowledge in the system and combine it with Copilot to improve the contextual accuracy of code completion, for example storing project API documentation for quick access to function descriptions.
FAQ
- How is data privacy ensured?
  Data is stored in a local SQLite database and never uploaded to the internet, safeguarding privacy. Users can customize the database path to further control data access.
- Which AI models are supported?
  The system is compatible with models that support the MCP protocol, such as Claude, GPT, and Llama, and can also be integrated with Copilot via VS Code.
- What if the installation fails?
  Check that the Python version (3.8+ recommended) and pip dependencies are complete, consult the issues page of the GitHub repository, or try a manual installation.
- How accurate is semantic search?
  Accuracy depends on LM Studio's embedding model. Users can point the embedding service URL at a stronger model for better results.