
geminicli2api is an open-source FastAPI-based proxy server hosted on GitHub. It converts the functionality of the Google Gemini CLI into an OpenAI-compatible API interface, while also supporting native Gemini API endpoints. Developers can call Google Gemini models through the familiar OpenAI API format or the direct Gemini API, free of charge, using the API quota that Google provides. The project supports text generation, multimodal inputs (e.g., text and images), real-time streaming responses, and more. It offers a variety of authentication methods and is suitable for rapid deployment locally or in the cloud, and it is widely used in development, testing, and production environments. geminicli2api is designed to be lightweight and easy to configure, making it particularly suitable for developers who need to integrate Gemini capabilities into existing workflows.

Function List

  • Provides OpenAI-compatible /v1/chat/completions and /v1/models endpoints, so existing OpenAI tools work without changes.
  • Supports native Gemini API endpoints such as /v1beta/models/{model}:generateContent, allowing the Gemini API to be called directly.
  • Supports real-time streaming responses, suitable for interactive dialog or long text generation.
  • Supports multimodal input, handling mixed content such as text and images.
  • Provides Google Search grounding through -search model variants for more accurate responses.
  • Controls Gemini's reasoning process through the -nothinking and -maxthinking model variants, which adjust the depth of reasoning.
  • Supports a variety of authentication methods, including Bearer tokens, Basic authentication, and API keys.
  • Supports Docker containerized deployments and is compatible with Hugging Face Spaces.

Using Help

Installation process

geminicli2api is simple to install and configure, and supports both local and containerized deployment. The detailed steps are below:

  1. Clone the repository
    Clone the geminicli2api repository locally using the following command:

    git clone https://github.com/gzzhongqi/geminicli2api
    cd geminicli2api
    
  2. Install dependencies
    The project is based on Python and FastAPI, with dependencies listed in requirements.txt. Run the following command to install them:

    pip install -r requirements.txt
    
  3. Configure environment variables
    geminicli2api requires authentication-related environment variables. Create a .env file and add the following:

    GEMINI_AUTH_PASSWORD=your_auth_password
    GEMINI_CREDENTIALS={"client_id":"your_client_id","client_secret":"your_client_secret","token":"your_access_token","refresh_token":"your_refresh_token","scopes":["https://www.googleapis.com/auth/cloud-platform"],"token_uri":"https://oauth2.googleapis.com/token"}
    PORT=8888
    
    • GEMINI_AUTH_PASSWORD: Authentication password for API access, required.
    • GEMINI_CREDENTIALS: A JSON string of Google OAuth credentials, containing client_id, client_secret, and related fields.
    • Optional Variables:
      • GOOGLE_APPLICATION_CREDENTIALS: Path to the Google OAuth credentials file.
      • GOOGLE_CLOUD_PROJECT or GEMINI_PROJECT_ID: Google Cloud project ID.
    • If you use a credentials file instead, create a credentials directory, place your Google Cloud service account .json file in it, and set GOOGLE_APPLICATION_CREDENTIALS to the file path.
  4. Run locally
    After the configuration is complete, run the following command to start the service:

    python -m uvicorn app.main:app --host 0.0.0.0 --port 8888
    

    The service listens on http://localhost:8888 by default.

  5. Docker Deployment
    geminicli2api supports Docker containerized deployment and simplifies environment configuration.

    • Build the image:
      docker build -t geminicli2api .
      
    • Run the container (default port 8888):
      docker run -p 8888:8888 \
      -e GEMINI_AUTH_PASSWORD=your_password \
      -e GEMINI_CREDENTIALS='{"client_id":"...","token":"..."}' \
      -e PORT=8888 \
      geminicli2api
      
    • Use Docker Compose:
      docker-compose up -d
      

      For Hugging Face Spaces deployments, use port 7860:

      docker-compose --profile hf up -d geminicli2api-hf
      
  6. Hugging Face Spaces Deployment
    • Log in to Hugging Face and create a new Docker Space.
    • Upload the contents of the repository to Space.
    • In the Space settings, add the environment variables GEMINI_AUTH_PASSWORD and GEMINI_CREDENTIALS (or another credentials source).
    • The Space builds and deploys the service automatically, listening at http://<space-url>:7860.
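Before starting the service or building a container, it can help to sanity-check the variables from step 3. A minimal illustrative sketch (check_config is a hypothetical helper, not part of the project):

```python
import json
import os

def check_config(env=None):
    """Return a list of configuration problems; an empty list means the setup looks usable."""
    env = os.environ if env is None else env
    problems = []
    # GEMINI_AUTH_PASSWORD is required for API access
    if not env.get("GEMINI_AUTH_PASSWORD"):
        problems.append("GEMINI_AUTH_PASSWORD is not set")
    creds = env.get("GEMINI_CREDENTIALS")
    if creds:
        # GEMINI_CREDENTIALS must be a JSON string with the OAuth fields
        try:
            parsed = json.loads(creds)
        except ValueError:
            problems.append("GEMINI_CREDENTIALS is not valid JSON")
        else:
            for field in ("client_id", "client_secret", "refresh_token", "token_uri"):
                if field not in parsed:
                    problems.append(f"GEMINI_CREDENTIALS is missing {field}")
    elif not env.get("GOOGLE_APPLICATION_CREDENTIALS"):
        problems.append("set GEMINI_CREDENTIALS or GOOGLE_APPLICATION_CREDENTIALS")
    return problems

if __name__ == "__main__":
    for problem in check_config():
        print("config problem:", problem)
```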

Using the API

geminicli2api provides OpenAI-compatible and native Gemini API endpoints that developers can choose from according to their needs.

OpenAI Compatible API

Call geminicli2api with the OpenAI client library; the endpoints are consistent with the OpenAI API.
Example (Python):

import openai

client = openai.OpenAI(
    base_url="http://localhost:8888/v1",
    api_key="your_password"  # GEMINI_AUTH_PASSWORD
)
response = client.chat.completions.create(
    model="gemini-2.5-pro-maxthinking",
    messages=[{"role": "user", "content": "Explain relativity in simple terms"}],
    stream=True
)
for chunk in response:
    delta = chunk.choices[0].delta
    # reasoning_content is only present when the model exposes its thinking
    if getattr(delta, "reasoning_content", None):
        print(f"Reasoning: {delta.reasoning_content}")
    if delta.content:
        print(delta.content, end="")

Native Gemini API

Calling the Gemini API endpoints directly supports more flexible configuration.
Example (Python):

import requests

headers = {
    "Authorization": "Bearer your_password",
    "Content-Type": "application/json"
}
data = {
    "contents": [
        {"role": "user", "parts": [{"text": "Explain relativity in simple terms"}]}
    ],
    "thinkingConfig": {"thinkingBudget": 32768, "includeThoughts": True}
}
response = requests.post(
    "http://localhost:8888/v1beta/models/gemini-2.5-pro:generateContent",
    headers=headers,
    json=data
)
print(response.json())

Multi-modal inputs

Text and images can be sent together to /v1/chat/completions or /v1beta/models/{model}:generateContent. In the OpenAI-compatible format, images are passed as base64-encoded data URLs inside the message content rather than as file paths.
Example (image and text with curl):

curl -X POST http://localhost:8888/v1/chat/completions \
  -H "Authorization: Bearer your_password" \
  -H "Content-Type: application/json" \
  -d '{"model": "gemini-2.5-pro", "messages": [{"role": "user", "content": [{"type": "text", "text": "Analyze this image and describe its content"}, {"type": "image_url", "image_url": {"url": "data:image/jpeg;base64,<base64-encoded image data>"}}]}]}'
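Because the server cannot read files from the client's disk, local images must be read and base64-encoded on the client side before being placed in the request body. A minimal sketch, assuming the standard OpenAI data-URL convention (image_part is an illustrative name, not part of the project):

```python
import base64

def image_part(path, mime="image/jpeg"):
    """Read a local image and wrap it as an OpenAI-style image_url content part."""
    with open(path, "rb") as f:
        data = base64.b64encode(f.read()).decode()
    return {"type": "image_url", "image_url": {"url": f"data:{mime};base64,{data}"}}

# The part can then be combined with text in a single user message:
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "Analyze this image and describe its content"},
        # image_part("./image.jpg"),  # uncomment with a real file on disk
    ],
}
```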

Authentication Methods

The following authentication methods are supported:

  • Bearer token: Authorization: Bearer your_password
  • Basic authentication: Authorization: Basic base64(username:your_password)
  • Query parameter: ?key=your_password
  • Google header: x-goog-api-key: your_password
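All four forms carry the same GEMINI_AUTH_PASSWORD value; a quick sketch of how a client would construct each one (the localhost URL follows the local-run default from the installation steps):

```python
import base64

PASSWORD = "your_password"  # the value of GEMINI_AUTH_PASSWORD

# 1. Bearer token header
bearer = {"Authorization": f"Bearer {PASSWORD}"}

# 2. Basic authentication header (username is arbitrary, the password is what matters)
basic = {"Authorization": "Basic " + base64.b64encode(f"user:{PASSWORD}".encode()).decode()}

# 3. Query parameter appended to the endpoint URL
query_url = f"http://localhost:8888/v1/models?key={PASSWORD}"

# 4. Google-style API key header
goog = {"x-goog-api-key": PASSWORD}
```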

Supported Models

  • Base models: gemini-2.5-pro, gemini-2.5-flash, gemini-1.5-pro, gemini-1.5-flash, gemini-1.0-pro
  • Variants:
    • -search: Enables Google Search grounding (e.g. gemini-2.5-pro-search).
    • -nothinking: Reduces the reasoning budget (e.g. gemini-2.5-flash-nothinking).
    • -maxthinking: Increases the reasoning budget (e.g. gemini-2.5-pro-maxthinking).
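A variant name is simply the base model name with one suffix appended. A tiny illustrative helper (not part of the project) that splits a name back into its parts:

```python
SUFFIXES = ("-search", "-nothinking", "-maxthinking")

def split_model(name):
    """Split a model name into (base, variant); variant is None for a base model."""
    for suffix in SUFFIXES:
        if name.endswith(suffix):
            return name[: -len(suffix)], suffix
    return name, None
```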

Notes

  • Make sure GEMINI_AUTH_PASSWORD is set, otherwise API requests will fail.
  • Google OAuth credentials must be valid; obtaining them from the Google Cloud console is recommended.
  • Streaming responses require client support for chunked data processing.
  • Check Google Cloud project quotas to avoid exceeding API call limits.

Application Scenarios

  1. Integration with existing OpenAI tools
    Developers use geminicli2api to plug Gemini models into tools based on the OpenAI API (such as LangChain) and quickly switch to Gemini's free quota without modifying code.
  2. Multimodal Content Generation
    Content creators upload images and text to generate descriptive, analytical, or creative content suitable for advertising design or educational material production.
  3. Automated workflows
    Organizations automate the processing of documents, generate reports, or answer customer inquiries with geminicli2api to improve operational efficiency.

QA

  1. What authentication methods does geminicli2api support?
    Supports Bearer tokens, Basic authentication, query parameters, and Google header authentication; all of them require GEMINI_AUTH_PASSWORD to be set.
  2. How do I get Google OAuth credentials?
    Create a service account in the Google Cloud console, download the JSON key file, and fill its contents into GEMINI_CREDENTIALS or set GOOGLE_APPLICATION_CREDENTIALS to its path.
  3. What model variants are supported?
    Supports the -search (Google Search grounding), -nothinking (reduced reasoning), and -maxthinking (increased reasoning) variants for gemini-2.5-pro and gemini-2.5-flash.
  4. How to deploy in Hugging Face Spaces?
    Fork the repository, create a Docker Space, and set GEMINI_AUTH_PASSWORD and GEMINI_CREDENTIALS; the Space deploys automatically.