Coze Studio uses an open model-integration architecture and currently officially supports two types of model services:
- Cloud API models: commercial models, including the Volcano Engine Ark series (e.g. doubao-seed-1.6), the OpenAI GPT series, etc.
- Local models: privately deployed large-model services can be accessed through custom configuration
The specific process for configuring a Volcano Engine model:
- Create an application in the Volcano Engine console to obtain an API Key and Endpoint ID (real-name account authentication is required)
- Copy the project's `model_template_ark_doubao-seed-1.6.yaml` template file
- Modify the key fields:
  - `id`: set to a globally unique numeric identifier (cannot be changed after deployment)
  - `meta.conn_config.api_key`: fill in the Volcano API Key
  - `meta.conn_config.model`: fill in the Endpoint ID
- Place the configuration file in the `backend/conf/model/` directory for it to take effect
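The three fields above might look like the following in the template. This is an illustrative fragment, not the full template: the values are placeholders, and any fields not named in the steps above are omitted.

```yaml
id: 100001                        # globally unique numeric identifier; do not change after deployment
meta:
  conn_config:
    api_key: "your-ark-api-key"   # API Key obtained from the Volcano Engine console
    model: "your-endpoint-id"     # Endpoint ID of the deployed Ark model
```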
Caveats:
- Different models are billed differently; Volcano Engine bills per call
- In production environments, it is recommended to configure QPS rate limiting and circuit-breaking mechanisms
- Model response latency can be monitored via the `latency` field in the logs
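The QPS rate-limiting recommendation above can be sketched with a simple token bucket placed in front of model calls. This is a minimal illustration, not part of Coze Studio itself; the class name and parameters are hypothetical.

```python
import threading
import time


class TokenBucket:
    """Simple QPS limiter: refills `rate` tokens per second, up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()
        self.lock = threading.Lock()

    def allow(self) -> bool:
        """Return True if a call may proceed, consuming one token."""
        with self.lock:
            now = time.monotonic()
            # Refill tokens for the time elapsed since the last check.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False


# Allow at most 5 queued calls, refilling 1 token per second.
bucket = TokenBucket(rate=1, capacity=5)
results = [bucket.allow() for _ in range(10)]
# The first 5 back-to-back calls pass; the rest are rejected until tokens refill.
```

Requests that `allow()` rejects can be queued, retried with backoff, or failed fast, depending on how strict the upstream quota is.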
This answer comes from the article *Coze Studio (Coze Open Source Edition): an open-source low-code platform for rapidly building AI agents*.