Solution: Unified management of multi-model APIs using easy-llm-cli
Developers calling different LLMs in local environments usually face these pain points: memorizing each platform's differing API formats, changing code every time they switch models, handling multimodal inputs case by case, and so on.
The problem can be solved in three steps with easy-llm-cli:
- Unified installation management: install globally via npm
  npm install -g easy-llm-cli
  After that, all model calls go through the standardized elc command.
- Environment variable configuration: set the four core variables in your shell configuration file (.bashrc/.zshrc):
  export CUSTOM_LLM_PROVIDER=XXX    # e.g. openai or claude
  export CUSTOM_LLM_API_KEY=XXX
  export CUSTOM_LLM_ENDPOINT=XXX
  export CUSTOM_LLM_MODEL_NAME=XXX
- Dynamic model switching:
  - Temporary switch: prefix the command with the variable definition
    CUSTOM_LLM_PROVIDER=openai elc "analyze the code"
  - Persistent configuration: modify the environment variables, then restart the terminal
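Putting the steps together, a persistent configuration might look like the sketch below. The endpoint and model name are illustrative placeholders (not values from the article); substitute your provider's actual values. The last line demonstrates the shell mechanism behind temporary switching: a leading VAR=value assignment applies only to that one command and leaves the surrounding environment untouched.

```shell
# Example persistent configuration for an OpenAI-compatible provider.
# Endpoint and model name are illustrative placeholders -- replace them
# with your provider's values and a real API key.
export CUSTOM_LLM_PROVIDER=openai
export CUSTOM_LLM_API_KEY=your-api-key
export CUSTOM_LLM_ENDPOINT=https://api.openai.com/v1
export CUSTOM_LLM_MODEL_NAME=gpt-4o

# Temporary, per-command override: the assignment is visible only to this
# single command; afterwards CUSTOM_LLM_PROVIDER is still "openai".
CUSTOM_LLM_PROVIDER=claude printenv CUSTOM_LLM_PROVIDER   # prints: claude
```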
This approach has three major advantages over native API calls: business logic needs no rewriting, command-line pipeline operations are supported, and differences in return formats across platforms are handled automatically. In testing, it reduced model-switching cost by 80%.
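The pipeline advantage can be sketched as below. Since this excerpt does not document exactly how elc handles piped stdin, elc is stubbed with a placeholder function so the pattern itself is runnable; with easy-llm-cli installed, drop the stub and the real command takes its place. The prompts are made up for illustration.

```shell
# Placeholder stub standing in for the real elc binary, so the pipeline
# pattern is demonstrable without the tool installed. It consumes stdin
# (the piped context) and echoes the prompt it was given.
elc() { cat > /dev/null; echo "response to: $1"; }

# Pipe command output into the model as context:
printf 'diff --git a/app.js b/app.js\n' | elc "Review this change"
# prints: response to: Review this change

# Combine a pipe with a temporary, per-command provider switch:
printf 'TypeError: x is undefined\n' | CUSTOM_LLM_PROVIDER=claude elc "Explain this error"
# prints: response to: Explain this error
```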
This answer comes from the article "easy-llm-cli: enable Gemini CLI support for calling multiple large language models".