
AIClient-2-API is a lightweight API proxy tool for developers. Its core approach is to simulate the authorization and request flows of multiple AI clients: large-model services that were originally restricted to client-side use, such as Google's Gemini CLI, Alibaba's Qwen Code Plus, and the Claude model built into Kiro, are unified behind a local interface compatible with the OpenAI API format. The purpose of this encapsulation is to provide a single point of access: developers can connect applications or tools that were built against the OpenAI interface (such as LobeChat, NextChat, etc.) directly to AIClient-2-API and seamlessly use the many different large models supported behind it, without any complex code adaptation. The project is built with Node.js, supports rapid deployment via Docker, and provides advanced features such as account pool management, failover, and logging to improve service stability and flexibility.

Function List

  • Simulated client authorization and requests: the core feature. By simulating the OAuth authorization flow and request format of clients such as Gemini CLI and Kiro, it calls the model services behind them, bypassing the limitations of the official APIs.
  • Unified API interface: wraps all supported models in the standard OpenAI API format, so multiple models can be called through a single interface (see the example request after this list).
  • Compatible with the OpenAI ecosystem: any client or toolchain that supports the OpenAI API can connect at no adaptation cost by pointing at the interface address this project provides.
  • Breaks through usage limits: by simulating client-side authorization, you can obtain a higher request quota and call frequency than the ordinary free official API.
  • Free model access: supports free calls to the Claude Sonnet 4 model built into Kiro by emulating its client API.
  • Account pool management: supports configuring multiple accounts per model provider, with automatic polling, failover, and graceful degradation of requests to improve service stability.
  • Request monitoring and auditing: a built-in logging feature records the contents of all requests and responses passing through the proxy, for debugging, auditing, or building private datasets.
  • Dynamic prompt management: lets users flexibly control system-prompt behavior through configuration files, either forcing an override of the client-side prompt or appending content to the end.
  • Easy to extend: the project is modular, giving developers a clear and simple path for adding new model providers.
  • Containerized deployment: full Docker support lets users deploy quickly and isolate the runtime environment via Docker images.
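
As a quick illustration of the unified interface, the request below shows what a call could look like once the service is running locally. This is a sketch, not documented behavior: it assumes the proxy exposes the standard OpenAI /v1/chat/completions path, uses the default key 123456 described later in this guide, and the model name is a placeholder.

    curl http://localhost:3000/v1/chat/completions \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer 123456" \
      -d '{"model": "your-model-name", "messages": [{"role": "user", "content": "Hello"}]}'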

Usage Guide

The AIClient-2-API project aims to simplify development by letting you invoke different large models through one unified interface. Detailed installation and usage instructions follow.

Environment Preparation

Before you begin, make sure you have the following software installed on your computer:

  • Node.js: The project runs on Node.js.
  • Git: Used to clone project code from GitHub.
  • Docker (recommended): Docker is not required, but it is officially recommended for deployment as it simplifies environment configuration and dependency management.

Installation and startup

Method 1: Deploy with Docker (recommended)

Docker is the officially recommended deployment method; it provides a clean, isolated runtime environment.

  1. Pulling a Docker image:
    Open your terminal (CMD or PowerShell on Windows, Terminal on macOS or Linux) and execute the following command to pull the latest image from Docker Hub:

    docker pull justlovemaki/aiclient-2-api:latest
    
  2. Running the container:
    Execute the following command to start the container. It maps the container's port 3000 to port 3000 on your local machine.

    docker run -d -p 3000:3000 --name aiclient2api justlovemaki/aiclient-2-api:latest
    

    You can change the local port by modifying the -p parameter; for example, -p 8080:3000 exposes the service on local port 8080.
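
    To confirm the container came up correctly, you can check its status and logs with standard Docker commands:

    docker ps --filter name=aiclient2api
    docker logs aiclient2api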

Method 2: Manually download and run

If you don't want to use Docker, you can also run it directly on your computer.

  1. Cloning the repository:
    Open a terminal, go to the folder where you wish to store your project, and execute the following command:

    git clone https://github.com/justlovemaki/AIClient-2-API.git
    
  2. Entering the project directory:
    cd AIClient-2-API
    
  3. Installing dependencies:
    Use the npm package manager to install the dependency libraries the project requires.

    npm install
    
  4. Starting the service:
    After the installation is complete, execute the following command to start the service.

    node src/api-server.js
    

    By default, the service listens on port 3000 of localhost.
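
    To verify the service is responding, you can send a test request from another terminal. The /v1/models path below is an assumption based on OpenAI API compatibility rather than a documented endpoint of this project, and 123456 is the default key mentioned in the configuration section:

    curl http://localhost:3000/v1/models -H "Authorization: Bearer 123456"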

Core Configuration and Usage

After starting the service, you need to configure it in your AI client application.

  1. Getting the interface address:
    • If you are running via Docker or locally and have not changed the port, your API endpoint address is http://localhost:3000.
    • If you are deploying the service on another server, replace localhost with that server's IP address.
  2. Configuring the client:
    Open your usual AI client (such as LobeChat, NextChat, AMA, GPT-S, etc.) and find where the API endpoint address is set. This setting is usually called API Endpoint, Base URL, or API base address.

    • Fill in the endpoint address as http://localhost:3000.
    • In the API Key field, fill in the key you set in the startup parameters; the default is 123456.
    • After saving the settings, all requests from the client are sent to the AIClient-2-API proxy service.
  3. Switching between models:
    AIClient-2-API routes to the corresponding model service through different paths. Examples:

    • Calling the Kiro-authorized Claude model: http://localhost:3000/claude-kiro-oauth
    • Calling the Gemini model: http://localhost:3000/gemini-cli-oauth
    • Calling custom OpenAI-compatible models: http://localhost:3000/openai-custom
      You can select the model directly in the client, or specify the full URL in a tool that supports path switching.
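
    For example, a chat request routed to the Kiro-authorized Claude model might look like the sketch below. It assumes the OpenAI-style /v1/chat/completions suffix is appended to the path above and that claude-sonnet-4 is an accepted model identifier; check the project's README for the exact paths and model names.

    curl http://localhost:3000/claude-kiro-oauth/v1/chat/completions \
      -H "Content-Type: application/json" \
      -H "Authorization: Bearer 123456" \
      -d '{"model": "claude-sonnet-4", "messages": [{"role": "user", "content": "Summarize this project in one sentence."}]}'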

Authorization File Configuration (Key Step for Simulating Clients)

To use models that require OAuth authorization, such as Gemini, Kiro, and Qwen, you first need to obtain the corresponding authorization file. This is the key to the project's client-simulation feature.

  1. Obtaining the authorization files:
    • Gemini: first run the official Gemini CLI tool and complete its authorization. Afterwards it generates an oauth_creds.json file in the ~/.gemini/ folder of your home directory.
    • Kiro: download its client and log in to authorize; it then generates kiro-auth-token.json in the ~/.aws/sso/cache/ directory.
    • Qwen: a browser opens automatically for authorization on first use; once finished, oauth_creds.json is generated in the ~/.qwen/ directory.
  2. Providing them to the proxy service:
    After obtaining the authorization files, you need to tell the proxy service where to find them. You can specify the path through a startup parameter; for example, when using Gemini you can start the service like this:

    node src/api-server.js --model-provider gemini-cli-oauth --gemini-oauth-creds-file /path/to/your/oauth_creds.json
    
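
    Other providers follow the same pattern of pairing a --model-provider value with the path to their credentials file. The Kiro flag below is a hypothetical illustration inferred by analogy with the Gemini example, not a documented option; consult the project's README for the real parameter names.

    # Hypothetical flag name, shown only to illustrate the pattern.
    node src/api-server.js --model-provider claude-kiro-oauth --kiro-oauth-creds-file /path/to/your/kiro-auth-token.json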

Proxy Settings

If your runtime environment cannot directly reach services such as Google or OpenAI, you will need to set an HTTP proxy in the terminal where you run the project.

  • Linux / macOS:
    export HTTP_PROXY="http://your_proxy_address:port"
    
  • Windows (CMD):
    set HTTP_PROXY=http://your_proxy_address:port
    

Please note that only the HTTP proxy should be set, not the HTTPS proxy.
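
On Windows PowerShell (as opposed to CMD), the equivalent is:

    $env:HTTP_PROXY = "http://your_proxy_address:port"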

Application Scenarios

  1. Unified development and testing environment
    AIClient-2-API is a strong fit for developers who need to test, compare, and choose among multiple models. Thanks to simulated requests and the unified interface, developers no longer need to maintain separate API-calling code for each model: point all requests at the local proxy and switch back-end models with a simple parameter change, which greatly simplifies development and debugging.
  2. Empowering existing applications
    If you already have an application built on the OpenAI API but want to bring in more diverse models (for example, to take advantage of Gemini's free quota or Claude's text-processing strengths), AIClient-2-API can slot in seamlessly as a middle layer. Instead of modifying large amounts of existing code, simply point the API request address at this proxy to immediately use all supported models (see the sketch after this list).
  3. Personal study and research
    For AI enthusiasts and researchers, the cost and call limits of official APIs are a barrier. By simulating client-side authorization, the project offers a low-cost platform for individual learning and small-scale experiments, letting users tap free call quotas for models such as Gemini, or use the free Claude model built into Kiro.
  4. Building auditable AI applications
    The project's logging feature can completely record all requests (prompts) and model responses that pass through the proxy. This is critical for scenarios requiring content auditing, data analysis, or behavior monitoring. Enterprises can use it to ensure AI applications follow internal policies, or to collect data for later fine-tuning and optimization.

FAQ

  1. Is this program secure? Will my authorization files and keys be leaked?
    AIClient-2-API is a proxy server that runs locally; it does not collect or upload any of your data or keys. However, you should keep your authorization files and the API key set in the startup parameters safe, and avoid using the service on untrusted networks.
  2. Why can't I access Google or OpenAI services?
    If your server or local environment is located in a region where these services cannot be accessed directly, you will need to set up an HTTP proxy in the terminal where you are running this project. Please refer to the "Proxy Settings" section of the documentation for details.
  3. Does Gemini CLI mode mean I can make unlimited calls per day?
    By emulating Gemini CLI authorization, the project bypasses the rate limits of the ordinary free API and obtains a higher request quota, but that does not mean there are no limits at all. Officially, CLI-tool usage is still subject to frequency and volume monitoring, and excessive abuse may still lead to account restrictions.
  4. Do I need to install the emulated client software beforehand?
    You do not need to keep the full client software installed, but you must run each client's authorization flow once to obtain the OAuth authorization files it generates locally (e.g. oauth_creds.json or kiro-auth-token.json). AIClient-2-API completes the simulated client's authorization step by reading these files.
  5. Is there a fee for this program?
    The AIClient-2-API project itself is open source and free, released under the GPLv3 license. It does not provide any AI model services; it is only an API proxy tool. Any costs incurred by calling back-end models (beyond their free quotas) are billed by the corresponding third-party provider.