ZenMux is an AI orchestration development tool with the Zen MCP (Model Context Protocol) server at its core. It allows developers to connect a primary AI (e.g., Claude) with multiple other leading AI models (including Gemini, OpenAI o3, etc.) and have them collaborate. The tool enhances code analysis, complex problem solving, and collaborative development by intelligently assigning tasks to the most appropriate AI models. It lets users have different AI models review and analyze the same problem from their own professional perspectives, catching details and errors that a single model might miss. The tool not only supports mainstream large models in the cloud, but is also compatible with local models running on platforms such as Ollama, giving developers the flexibility to balance performance, privacy, and cost-effectiveness.
Function List
- Multi-model AI orchestration: Supports co-scheduling multiple AI models such as Claude, Gemini, and OpenAI in the same session, letting the primary model (e.g., Claude) drive the control flow and invoke the other models for feedback and solutions.
- Intelligent task assignment: The system automatically selects the most appropriate AI model for a given sub-task, such as code analysis, performance optimization, or security review, based on the nature of the task.
- Contextual continuity: Maintains conversation and task context when switching between multiple models, ensuring a coherent collaborative process.
- Professional development toolset: A wide range of specialized developer tools is built in, including collaborative thinking (`chat`), deep reasoning (`thinkdeep`), code review (`codereview`), and advanced debugging (`debug`).
- Local model support: Allows connection to and use of AI models running locally through services such as Ollama and vLLM, meeting user needs for data privacy and cost control.
- Intelligent document processing: Automatically handles files and directories in the codebase and intelligently manages token limits based on the context window sizes of different AI models.
- Extensibility: The platform is designed to be extensible, allowing users to create and integrate custom tools based on their workflows.
Usage Guide
The core of ZenMux is its backend service, the Zen MCP Server, which acts as a bridge connecting different AI models and must be installed and configured first. The detailed installation and usage steps are as follows:
Environment Preparation
Before you begin, make sure you have the following software installed on your computer:
- Python 3.11 or later.
- Git version control tool.
- Docker (recommended, simplifies environment deployment).
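A quick way to confirm the prerequisites above is a short terminal check; this is a generic sketch, not a ZenMux-specific command:

```shell
# Verify the prerequisites listed above.
# Docker is recommended but optional, so a missing Docker
# prints a notice instead of failing.
python3 --version   # should report Python 3.11 or later
git --version
docker --version 2>/dev/null || echo "Docker not found (optional)"
```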
Installation Process
Method 1: NPX Wrapper (recommended, easiest)
The NPX method automates most of the setup steps and is ideal for first-time users.
- First run:
Open your terminal (command-line tool) and execute the following command:

```bash
npx zen-mcp-server
```

- Automatic installation:
When run for the first time, the command automatically performs the following actions:
  - Checks that the Python version meets the requirements.
  - Clones the Zen MCP server source code from GitHub to a local user directory (usually `~/.zen-mcp-server`).
  - Creates a `.env` configuration file and prompts you to enter the API keys for each AI service provider (e.g., OpenAI API Key, Gemini API Key). This is a required step, as ZenMux needs these keys to invoke the corresponding AI models.
  - Sets up a Python virtual environment and installs all required dependency libraries.
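If you prefer to skip the interactive prompts, the same `.env` file can be written by hand in the install directory (usually `~/.zen-mcp-server`). The key names below are the ones used elsewhere in this guide; the values are placeholders you must replace with real keys:

```shell
# Sketch: create a minimal .env by hand. Only the providers you actually
# intend to use need a real key; the values below are placeholders.
cat > .env <<'EOF'
OPENAI_API_KEY="sk-..."
GEMINI_API_KEY="..."
OPENROUTER_API_KEY="..."
EOF

grep -c "_API_KEY" .env   # prints 3: three provider keys configured
```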
Method 2: Manual Installation
If you wish to have more control over the installation process, you can choose to install manually.
- Clone the codebase:
Open a terminal and use Git to clone the project source code to your computer:

```bash
git clone https://github.com/BeehiveInnovations/zen-mcp-server.git
```

- Configure the API keys:
Go to the project directory and copy or rename the `.env.example` file to `.env`. Then open the `.env` file in a text editor and fill in the API keys you obtained from OpenAI, Google, etc.:

```bash
OPENAI_API_KEY="sk-..."
GEMINI_API_KEY="..."
OPENROUTER_API_KEY="..."
```

- Install dependencies:
Installing inside a Python virtual environment is recommended, to avoid conflicts with other projects:

```bash
cd zen-mcp-server
python3 -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`
pip install -r requirements.txt
```

- Run the server:
After the installation is complete, execute the following command to start the server:

```bash
python main.py
```
Configuring the Client
Once the server is started, you need to add ZenMux as an available MCP server in your AI programming client (e.g. Claude Code).
- In the client's configuration, add a new MCP server.
- The server address is usually a local address, for example `http://127.0.0.1:8000`.
- Once added, the client recognizes and connects to your Zen MCP server, gaining the ability to invoke multiple AI models.
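As a concrete illustration, many MCP clients (Claude Desktop and Claude Code among them) register servers through a JSON configuration file with an `mcpServers` section. The exact file location, and whether the client launches the server itself via a `command` entry or connects to an already-running address, vary by client and version, so treat this as a hedged sketch rather than the definitive format:

```json
{
  "mcpServers": {
    "zen": {
      "command": "npx",
      "args": ["zen-mcp-server"]
    }
  }
}
```

After saving the configuration and restarting the client, a `zen` entry should appear in the client's list of available MCP servers.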
Core Feature Usage
After a successful connection, you can use the various features of ZenMux via natural-language commands in the AI programming client. ZenMux will break down your task and call the most appropriate model to complete it.
- Conduct code reviews:
You can send commands to the AI assistant, such as: "Use the `codereview` tool to review my currently open files and check for potential security vulnerabilities and performance issues." ZenMux invokes the appropriate model to perform professional code analysis.
- Multi-model discussion:
When you have a complex problem, you can have multiple AI models "brainstorm" together. For example: "Help me analyze the refactoring options for this module; have Gemini provide suggestions for performance optimization, and have Claude give feedback from a code readability perspective."
- Debug code:
When encountering a bug that is difficult to locate, you can use the `debug` tool. Example command: "This piece of my code fails when handling boundary conditions. Use the `debug` tool to help me analyze the cause and find a solution."
- Deep thinking and planning:
For architectural design or project planning tasks that require deep thinking, you can use the `thinkdeep` or `planner` tools. For example: "I need to design the database architecture for a new e-commerce website. Use the `thinkdeep` tool, balance scalability against cost, and give me a detailed plan."
Application Scenarios
- Complex codebase analysis
Developers can take advantage of ZenMux's multi-model capabilities when they need to understand a large and complex codebase. For example, letting one model (e.g., Gemini 1.5 Pro) use its long context window to read through the entire codebase while another model (e.g., Claude 3 Opus) handles the logical reasoning and summarizes architectural patterns quickly builds a comprehensive understanding of the project.
- Professional code review and refactoring
Before a team commits code, a multi-dimensional automated code review can be performed with ZenMux: one AI model focuses on finding security vulnerabilities, another checks compliance with coding conventions, and a third provides performance optimization recommendations. This can greatly improve code quality and reduce the burden of manual review.
- Cross-domain problem solving
When developing projects that involve multiple areas of expertise (e.g., applications that combine data science, back-end development, and front-end visualization), developers can instruct ZenMux to invoke the models that perform best in each domain, handle the problems of each domain separately, and finally integrate the results into a comprehensive solution.
- Cross-validation of key decisions
When faced with an important technology selection or architectural decision, developers can ask multiple AI models to make separate recommendations with rationales, and then compare and challenge these different "opinions". This approach helps developers identify blind spots in their thinking and make more robust decisions.
FAQ
- Is ZenMux free?
The Zen MCP Server itself is an open-source project that is free to download and use. However, it calls third-party commercial AI models when it works (e.g., OpenAI's GPT series or Google's Gemini), and these services are usually charged on a per-use basis. You therefore need to pay for the use of those third-party APIs.
- Do I need an API key for every AI model?
Yes. In the `.env` configuration file you must provide a valid API key for each AI platform you wish to use. If you only want to use some of the models, you can configure only the corresponding keys. ZenMux also supports connecting to locally running open-source models, in which case commercial API keys are not required.
- What kind of developer is this tool for?
ZenMux is best suited to developers who want to use AI deeply in their day-to-day programming work to improve efficiency. The tool is particularly valuable for developers who work on complex projects, perform code reviews, debug difficult problems, or want to leverage multiple AI perspectives to optimize their solutions.
- How is ZenMux different from using ChatGPT or Claude directly?
When using a single AI model directly, you only get that one model's "view". The core advantage of ZenMux is orchestration and collaboration: it lets multiple AI models work together as a team, complementing and validating each other's outputs. This produces more comprehensive and reliable results when dealing with complex problems.