LLM Configuration Methods
OxyGent requires an HTTP API connection to a Large Language Model (LLM) to run its agent system, and it supports two configuration methods:
Environment Variable Configuration
- Configured via a .env file:
echo 'DEFAULT_LLM_API_KEY="your_api_key"' > .env
echo 'DEFAULT_LLM_BASE_URL="your_base_url"' >> .env
echo 'DEFAULT_LLM_MODEL_NAME="your_model_name"' >> .env
- Or set the environment variables directly:
export DEFAULT_LLM_API_KEY="your_api_key"
export DEFAULT_LLM_BASE_URL="your_base_url"
export DEFAULT_LLM_MODEL_NAME="your_model_name"
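For reference, the snippet below is a minimal Python sketch of how these variables can be read back at runtime. It assumes the python-dotenv package for loading the .env file; OxyGent's own example code may handle this step for you.
import os
from dotenv import load_dotenv

load_dotenv()  # pick up DEFAULT_LLM_* values from a local .env file, if present

llm_settings = {
    "api_key": os.getenv("DEFAULT_LLM_API_KEY"),
    "base_url": os.getenv("DEFAULT_LLM_BASE_URL"),
    "model_name": os.getenv("DEFAULT_LLM_MODEL_NAME"),
}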
Runtime Configuration
Specified in the Python script via the Config object:
Config.set_agent_llm_model("default_llm")
- Fine-grained control of request parameters via llm_params (e.g., temperature), as shown in the sketch below.
- Supports setting a concurrency semaphore to control the request rate.
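The sketch below combines these pieces. The oxy.HttpLLM component and its api_key/base_url/model_name/llm_params/semaphore parameters follow the published OxyGent examples, but treat them as assumptions and check the API of your installed version.
import os

from oxygent import Config, oxy

# Route agents to the LLM registered under the name "default_llm".
Config.set_agent_llm_model("default_llm")

# Assumed component and parameter names, per the OxyGent examples.
default_llm = oxy.HttpLLM(
    name="default_llm",
    api_key=os.getenv("DEFAULT_LLM_API_KEY"),
    base_url=os.getenv("DEFAULT_LLM_BASE_URL"),
    model_name=os.getenv("DEFAULT_LLM_MODEL_NAME"),
    llm_params={"temperature": 0.2},  # fine-grained sampling control
    semaphore=4,                      # cap on concurrent requests to the LLM service
)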
Scope of Model Support
The framework is compatible with any LLM service that provides an HTTP API, including but not limited to:
- OpenAI family of models (GPT, etc.)
- Claude series models
- API services for open-source models such as LLaMA (see the illustrative sketch below)
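Because only the HTTP endpoint changes, switching providers usually amounts to repointing the same three variables. The values below are hypothetical placeholders (e.g., a local OpenAI-compatible LLaMA server), not verified endpoints.
import os

# Hypothetical self-hosted, OpenAI-compatible endpoint -- replace with the
# base URL, model name, and key issued by your actual provider.
os.environ["DEFAULT_LLM_BASE_URL"] = "http://localhost:8000/v1"
os.environ["DEFAULT_LLM_MODEL_NAME"] = "llama-3-8b-instruct"
os.environ["DEFAULT_LLM_API_KEY"] = "EMPTY"  # some local servers ignore the key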
This answer comes from the article "OxyGent: Python open source framework for rapidly building intelligent systems".