
Gemini-CLI-2-API is an open-source project that wraps the core functionality of Google's Gemini CLI tool as a local API service compatible with the OpenAI API. Based on the Gemini 2.5 Pro model, it lets developers invoke Gemini's AI capabilities through the standard OpenAI interface without modifying their existing toolchain. The project provides 1000 free requests per day and supports streaming responses, multiple authentication methods, and detailed logging. The code is open source under the GNU General Public License v3 and is easily extensible for developers who need local AI services or high-frequency calls.

Function List

  • Wraps the Gemini CLI in an OpenAI API-compatible interface, supporting the /v1/models and /v1/chat/completions endpoints.
  • Automatically converts OpenAI-formatted requests and responses to Gemini format.
  • Supports the Gemini 2.5 Pro model with 1000 free requests per day.
  • Provides streaming output, so responses appear in real time with a typewriter effect.
  • Includes a logging system that records request prompts, timestamps, and token expiry times.
  • Supports multiple authentication methods, including Bearer tokens, URL query parameters, and the x-goog-api-key header.
  • The listening address, port, API key and logging mode can be configured from the command line.
  • Supports automatic renewal of OAuth tokens to simplify the authentication process.
  • Modular code structure to support secondary development, such as adding caching or filtering features.
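The format conversion at the heart of the project can be illustrated with a small sketch. The mapping below is conceptual: the field names follow the public Gemini REST API (`contents`, `parts`, `systemInstruction`), but the project itself is written in Node.js and its internal conversion may differ in detail.

```python
def openai_to_gemini(req: dict) -> dict:
    """Map an OpenAI-style chat request to a Gemini-style payload (illustrative)."""
    contents = []
    system_parts = []
    for msg in req.get("messages", []):
        if msg["role"] == "system":
            # Gemini carries system prompts separately, not in the turn list.
            system_parts.append({"text": msg["content"]})
        else:
            # OpenAI's "assistant" role corresponds to Gemini's "model" role.
            role = "model" if msg["role"] == "assistant" else "user"
            contents.append({"role": role, "parts": [{"text": msg["content"]}]})
    payload = {"contents": contents}
    if system_parts:
        payload["systemInstruction"] = {"parts": system_parts}
    return payload
```

The response travels the opposite direction: Gemini's candidate parts are reassembled into an OpenAI-style `choices` array before being returned to the client.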

Usage Guide

Installation process

Gemini-CLI-2-API needs to run in a Node.js environment. The following are the detailed installation and configuration steps:

  1. Installing Node.js
    Make sure Node.js is installed on your system (the latest LTS version is recommended). Download and install Node.js from the official Node.js website. Verify the installation:
node -v
npm -v
  2. Clone the project repository
    Use Git to clone the repository locally:
git clone https://github.com/justlovemaki/Gemini-CLI-2-API.git
cd Gemini-CLI-2-API
  3. Install dependencies
    Run the following command in the project root directory:
npm install
  4. Configure API keys
    The project supports Google Gemini API keys or OAuth authentication:
  • OAuth authentication: the first run opens a browser and prompts you to sign in to your Google account; the generated OAuth token is stored automatically and renewed when it expires.
  • API key: obtain a key from Google Cloud or Google AI Studio and set the environment variables:
    export GOOGLE_API_KEY="YOUR_API_KEY"
    export GOOGLE_GENAI_USE_VERTEXAI=true
    

    Replace YOUR_API_KEY with your actual key.

  5. Start the service
    Run the following command to start the local API service, which listens on port 8000 by default:
node openai-api-server.js --port 8000 --api-key sk-your-key

Use the --port and --api-key parameters to customize the port and key. Example:

node openai-api-server.js --port 8080 --api-key sk-your-key

Usage

Once the service is started, you can interact with Gemini-CLI-2-API via an OpenAI-compatible API endpoint. The following are detailed instructions for doing so:

  1. Send a chat request
    The project supports the /v1/chat/completions endpoint, compatible with OpenAI's request format. Use curl or another HTTP client to send requests:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-your-key" \
  -d '{
    "model": "gemini-2.5-pro",
    "messages": [
      {"role": "system", "content": "You are a coding assistant."},
      {"role": "user", "content": "Write a Python function for me."}
    ]
  }'

The response is in JSON format and the content is generated by the Gemini 2.5 Pro model.
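Extracting the reply from that JSON is straightforward. The sketch below uses a hand-written sample response in the standard OpenAI chat-completions shape; the `id` and other metadata fields are illustrative, and the exact values returned by the server may differ.

```python
import json

# Sample response in the OpenAI chat-completions shape (illustrative values).
sample = json.loads('''{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "model": "gemini-2.5-pro",
  "choices": [
    {"index": 0,
     "message": {"role": "assistant", "content": "def add(a, b):\\n    return a + b"},
     "finish_reason": "stop"}
  ]
}''')

def extract_reply(resp: dict) -> str:
    """Pull the assistant text out of an OpenAI-style chat response."""
    return resp["choices"][0]["message"]["content"]

print(extract_reply(sample))
```

Because the shape matches OpenAI's, any existing client code that reads `choices[0].message.content` works unchanged.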

  2. Use streaming output
    To enable streaming responses, set "stream": true; the results are displayed in real time, word by word:
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer sk-your-key" \
  -d '{
    "model": "gemini-2.5-pro",
    "stream": true,
    "messages": [
      {"role": "user", "content": "Tell me a story about AI."}
    ]
  }'
  3. Query available models
    Use the /v1/models endpoint to view supported models:
curl http://localhost:8000/v1/models \
  -H "Authorization: Bearer sk-your-key"

Returns a list of currently supported models, such as gemini-2.5-pro.

  4. View logs
    The logging system records the prompts and timestamps of all requests for easy debugging. To write logs to a file:
node openai-api-server.js --port 8000 --api-key sk-your-key --log file

Log files are stored in the project directory and contain request details and token status.

  5. Integrate into existing tools
    Since the API is compatible with the OpenAI format, you can configure the service address (e.g. http://localhost:8000/v1) in any tool that supports the OpenAI API (such as LobeChat). Simply set the tool's API base URL to that of Gemini-CLI-2-API and keep the request format unchanged.
  6. Extend the project
    The project's modular design makes it easy to extend. Examples:
  • Add caching: modify gemini-core.js to add Redis or file caching and reduce the number of API calls.
  • Content filtering: add keyword-filtering logic in openai-api-server.js to review request or response content.
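The filtering idea is simple to sketch. The project itself is Node.js, so this Python fragment only illustrates the logic, with a hypothetical blocklist; an actual implementation would sit in the request handler in openai-api-server.js.

```python
# Hypothetical blocklist; a real deployment would load this from config.
BLOCKLIST = {"password", "secret"}

def violates(messages):
    """Return True if any chat message contains a blocked keyword (case-insensitive)."""
    for msg in messages:
        content = msg.get("content", "").lower()
        if any(word in content for word in BLOCKLIST):
            return True
    return False
```

A server would run such a check before forwarding the request to Gemini and return an error response when it fires, rather than spending an API call on a disallowed prompt.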

Notes

  • Ensure your network connection is stable; OAuth authentication requires access to Google's servers.
  • Multi-modal inputs (e.g. images) are not supported at this time; support may be added in a future update.
  • Free for up to 1000 requests per day, subject to Google's terms of use.
  • If authentication fails, check the GOOGLE_API_KEY or rerun the OAuth process.

Application Scenarios

  1. Seamless integration of existing tools
    Developers can plug Gemini-CLI-2-API into tools based on the OpenAI API (e.g., LangChain, AutoGPT) and directly invoke the AI capabilities of Gemini 2.5 Pro without modifying the code.
  2. Local AI Service Deployment
    Enterprises can deploy local API services for privatized AI tasks such as code generation and document summarization, reducing their dependence on cloud services.
  3. Prompt debugging and optimization
    The logging system helps developers record and analyze prompts to optimize interaction design or build custom datasets.
  4. Learning and experimentation
    Students or researchers can learn API integrations, experiment with the performance of Gemini models, or develop new features through open source code.

QA

  1. Why do I need to be compatible with the OpenAI API?
    The OpenAI API is a standard interface for many AI tools, and the Gemini-CLI-2-API lets developers work with Gemini models without modifying existing code by being compatible with this format.
  2. Is there a fee?
    The program is free and relies on the Gemini CLI's 1000 free requests per day. Higher quotas can be purchased through Google AI Studio.
  3. How do I handle API key failures?
    Check environment variables or re-run the program to trigger OAuth authentication. Ensure that the Google account has permission to access the Gemini API.
  4. What models are supported?
    Gemini 2.5 Pro is currently supported, with possible extensions for other Gemini models in the future.