
ProxyLLM is an open-source Electron application that runs on the user's computer. Its core logic is straightforward: it opens the official web pages of ChatGPT, Claude, Gemini, etc. in a built-in browser window and listens in real time for the network packets (e.g., cookies, session IDs, and authorization headers) generated by the user's interactions with those pages. It then starts a local server that converts the communication protocols of these web pages into the standard OpenAI-compatible API format in real time. This means that any third-party software that supports the OpenAI interface (e.g., code-editor plugins, immersive-translation plugins, or command-line tools) can “borrow” the privileges of the web account you are logged into and invoke large-model capabilities directly through ProxyLLM, at no additional API cost. The tool runs entirely locally, and data travels directly between your computer and the model provider, preserving privacy and security.

Function List

  • Web Session Transfer API: Automatically captures the LLM web session in the browser and exposes it through a standardized POST /v1/chat/completions interface.
  • Multi-model support: Supports intercepting and converting the protocols of many major AI websites, including but not limited to OpenAI (ChatGPT), Anthropic (Claude), Google Gemini, and Qwen (Tongyi Qianwen).
  • Visualization Control Panel: Provides a graphical interface to manage different AI sites, supporting one-click opening and refreshing of browser windows, as well as intuitive viewing and selection of captured credentials.
  • Claude Code Deep Integration: Optimized specifically for the Claude Code command-line tool; supports taking over and restoring the Claude CLI's proxy settings, so you can drive the CLI directly with the capabilities of the web version of Claude.
  • Request Sniffing and Debugging: A built-in request inspector allows users to view details of captured HTTP and WebSocket requests, making it easy for developers to debug or confirm credential validity.
  • Local Privacy Protection: All site credentials, logs, and interaction data are stored only on the user's local computer; logs are sanitized of sensitive fields, and no data is uploaded to third-party servers.
  • Customized Adapters: Provides an adapter system that allows developers to write conversion rules for proprietary model protocols that do not follow the OpenAI standard (see the sketch below).
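
To make the idea of a conversion rule concrete, here is a hypothetical adapter sketch in TypeScript. The names (match, toSite, fromSite) and data shapes are illustrative assumptions, not ProxyLLM's actual adapter API; consult the project source for the real extension points.

// Hypothetical adapter sketch -- the names and shapes below are
// illustrative assumptions, not ProxyLLM's actual adapter API.
interface OpenAIMessage { role: "system" | "user" | "assistant"; content: string }
interface OpenAIChatRequest { model: string; messages: OpenAIMessage[] }

const exampleAdapter = {
  // Which site this adapter handles (fictional hostname).
  match: (url: string) => url.includes("example-ai.com"),

  // OpenAI-format request -> the site's private request body.
  toSite: (req: OpenAIChatRequest) => ({
    model_id: req.model,
    prompt: req.messages.map((m) => `${m.role}: ${m.content}`).join("\n"),
  }),

  // The site's private response -> OpenAI chat-completion response.
  fromSite: (siteResponse: { text: string }) => ({
    id: "chatcmpl-local",
    object: "chat.completion",
    choices: [{
      index: 0,
      message: { role: "assistant", content: siteResponse.text },
      finish_reason: "stop",
    }],
  }),
};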

Usage Guide

ProxyLLM is a developer tool that currently must be built and run from source. The detailed installation and usage steps below will help you set up your own AI API gateway on your local computer.

1. Environment preparation

Before you begin, make sure a Node.js environment is installed on your computer (v16 or higher recommended). You can verify the installation by running node -v and npm -v in a terminal (Terminal or CMD).

2. Obtaining source code and installing dependencies

First, you need to download the project's source code locally.
Open a terminal and run the following command to clone the repository:

git clone https://github.com/zhalice2011/ProxyLLM.git

Enter the project directory:

cd ProxyLLM

Next, install the dependency packages required by the project. Since the project contains a front-end renderer and a main process, it is recommended to install them separately:

# Install root-level dependencies
npm install
# Install renderer-process dependencies
npm --prefix renderer install

3. Building and launching the application

Once the dependencies are installed, you need to compile the front-end UI and start the Electron application:

# Build the UI
npm --prefix renderer run build
# Build the main process
npm run build
# Start the application
npm run start

After a successful launch, an application window named ProxyLLM will pop up, and the terminal will report that the local API service is listening on the default address (usually 127.0.0.1:8080).
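
To quickly confirm the gateway port is reachable, here is a minimal Node.js (v16+) sketch, assuming the default 127.0.0.1:8080 address mentioned above:

import net from "node:net";

// Try to open a TCP connection to the local ProxyLLM gateway.
const socket = net.createConnection({ host: "127.0.0.1", port: 8080 }, () => {
  console.log("ProxyLLM gateway is listening on 127.0.0.1:8080");
  socket.end();
});
socket.on("error", (err) => console.error("Gateway not reachable:", err.message));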

4. Configuring and capturing sessions

  1. Add a site: In the ProxyLLM application interface, click “Add Site” and enter the URL of the AI service you want to use (e.g. https://claude.ai or https://chatgpt.com).
  2. Log in: Click the “Open” button in the list and the app will bring up a separate browser window. In this window, log into your AI account as usual.
  3. Capture credentials: After logging in, send any test message (e.g. “Hello”) on the web page. ProxyLLM will automatically capture the request headers and authentication information of this interaction in the background.
  4. Select credentials: Back in the main ProxyLLM control panel, click the “Requests” or “Credentials” option under the site and select one of the most recently captured valid request records as the current API credentials.

5. Calling the API

You now have an OpenAI-compatible API service running on your local computer. You can configure any third-party tool to connect to this address.

Configuration example (using curl as a test):

curl http://127.0.0.1:8080/v1/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer dummy-token" \
-d '{
"model": "claude-3-opus-20240229",
"messages": [{"role": "user", "content": "你好,请介绍一下你自己"}]
}'

Note: Since authentication is handled by ProxyLLM's internal proxy, the Authorization bearer token here can usually be any placeholder value, unless you have enabled specific security settings.
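
The same endpoint works with any OpenAI-compatible SDK. Here is a minimal sketch using the official openai Node.js package; the model name mirrors the curl example above, and which model identifiers are actually accepted depends on the credential you selected:

import OpenAI from "openai";

// Point the official OpenAI SDK at the local ProxyLLM gateway.
const client = new OpenAI({
  baseURL: "http://127.0.0.1:8080/v1",
  apiKey: "dummy-token", // placeholder; ProxyLLM handles real authentication
});

async function main() {
  const completion = await client.chat.completions.create({
    model: "claude-3-opus-20240229",
    messages: [{ role: "user", content: "Hello, please introduce yourself" }],
  });
  console.log(completion.choices[0].message.content);
}

main();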

6. Claude Code integration (special feature)

If you are a developer using Anthropic's claude command-line tool, ProxyLLM provides a one-click takeover:

  1. Find the “Claude Code” setting in the ProxyLLM interface.
  2. Click “Takeover” and the tool will automatically modify the local configuration to direct traffic from the Claude CLI to ProxyLLM.
  3. Now when you run the claude command in a terminal, it consumes your Claude web-session quota rather than your paid API quota.

Application Scenarios

  1. Zero-cost use of programming aids
    Many IDE plugins (e.g. Cursor, or the various AI plugins for VS Code) require an OpenAI API key. With ProxyLLM, you can point the API address at the local address (http://127.0.0.1:8080/v1) and drive these plugins with your paid ChatGPT Plus or free Claude web account, without paying extra per token.
  2. Bypassing API Access Restrictions
    Some corporate or regional network environments restrict direct access to the OpenAI API while still allowing access to the web version through a browser. ProxyLLM acts as local middleware, letting legacy software or scripts that have no web-login support work by routing their requests through browser traffic.
  3. Developing and testing AI applications
    When building LLM-based applications, developers consume a lot of tokens during development and testing. By forwarding requests through ProxyLLM to the web version (which usually has a more generous usage quota), API costs in this phase can be significantly reduced.
  4. Unified management of multi-model dialogs
    For users with accounts on multiple platforms (e.g. both Gemini Advanced and ChatGPT Plus), a unified chat client that supports the OpenAI format (e.g. Chatbox) can manage and converse with all of these web services through ProxyLLM behind a single interface.

FAQ

  1. Will using this tool result in an account ban?
    There is some risk. Although ProxyLLM tries to mimic the request behavior of a real browser, high-frequency automated API calls (especially concurrent requests that exceed normal human reading speed) may trigger the service provider's risk-control mechanisms. It is recommended to use ProxyLLM only for personal assistance, not for large-scale commercial services.
  2. Does it support all AI sites?
    Not all sites are supported. ProxyLLM mainly supports websites with built-in adapters (e.g. OpenAI, Anthropic, Gemini). For unadapted websites, it cannot automatically parse their specific communication protocols, and users may need to write their own adapter scripts.
  3. Are my chats safe?
    ProxyLLM is locally run software; all traffic capture and forwarding happens inside your computer (localhost) and does not pass through the author's servers. Note, however, that your chats are still sent to the servers of the corresponding AI service provider (e.g. OpenAI).
  4. Why can't I capture the request?
    Make sure you complete a full “Send Message - Receive Reply” cycle in the dedicated browser window that ProxyLLM opens. Simply logging in is not enough; the software needs to analyze the actual conversation's WebSocket or HTTP packets to extract the context it needs.