This article details how to use Docker and Docker Compose to deploy, update, and migrate the open-source LLM application development platform Dify (v0.6.14). It covers everything from rapid deployment through data backup to building images from source, aiming to give developers and O&M personnel a clear, practical operations guide.
Rapid Deployment of Dify with Docker Compose
For most users, the officially provided Docker Compose files are the quickest and most recommended way to deploy Dify.
Step 1: Obtain the Dify Deployment Files
First, get the docker deployment directory from Dify's official repository.
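A minimal sketch of this step, assuming the official GitHub repository (langgenius/dify) and a release tag named 0.6.14:

# Clone the Dify source code
git clone https://github.com/langgenius/dify.git
cd dify
# Check out the version this guide targets (tag name is an assumption)
git checkout 0.6.14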
Step 2: Configure Environment Variables
All of Dify's configuration is managed through environment variables. Before deploying, copy the configuration template .env.example to a new .env file, then modify it to suit your actual needs.
# Enter the docker deployment directory
cd dify/docker
# Copy the environment variable template
cp .env.example .env
After that, open the .env file in a text editor and, following the comments in it, change the database password, SECRET_KEY, and other key settings.
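As an illustration, one common way to produce a sufficiently random SECRET_KEY is openssl; the sed invocation below assumes GNU sed and the template's SECRET_KEY= line:

# Generate a 42-byte random secret, base64-encoded
SECRET=$(openssl rand -base64 42)
# Replace the SECRET_KEY line in .env
sed -i "s|^SECRET_KEY=.*|SECRET_KEY=${SECRET}|" .env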
Step 3: Start the Services
Once configuration is complete, run the following command to start all of Dify's services in the background.
docker-compose up -d
This command automatically pulls the required Docker images and starts all containers in the correct order.
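To confirm that everything came up, you can check container status and tail a service's logs; a quick sketch (the service name api is an assumption and may vary by version):

# List the status of all Dify containers
docker-compose ps
# Follow the logs of the API service
docker-compose logs -f api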
Updating Dify Services
When Dify releases a new version, you can follow the steps below to update safely (a consolidated sketch follows the list).
- Stop and remove the old containers:
cd dify/docker
docker-compose down
- Back up your data (important):
It is strongly recommended to back up the docker directory, especially its volumes subdirectory, before performing any update. This is the key to keeping your data safe.
- Get the new version's files:
Download or pull the latest Dify source code and replace the old docker directory with the new one.
- Import the new version's images:
If you are working in an offline environment, import the new version's image files in advance.
- Update the configuration file:
Compare the old and new .env files and migrate your customized settings into the new .env file.
- Start the new version's services:
docker-compose up -d
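Putting these steps together, a minimal update sketch might look as follows (the backup path is an assumption; adapt it to your environment):

cd dify/docker
# 1. Stop and remove the old containers
docker-compose down
# 2. Back up the whole deployment directory, especially ./volumes
tar -czvf ~/dify-backup-$(date +%F).tar.gz .
# 3. Replace the docker directory with the new version and merge your
#    customized settings into the new .env by hand
# 4. Start the new version
docker-compose up -d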
Data Migration and Backup
Data migration is a common O&M requirement. Dify persists data in two different ways, local directory mappings and Docker named data volumes, and each is migrated differently.
Migrating Locally Mapped Volume Directories
In Dify's docker-compose.yaml file, some services persist data through "local directory mapping": a directory inside the container corresponds directly to the host's ./volumes directory.
Migrating data this way is very straightforward; it is essentially just file copying.
- Stop the Dify services
All containers must be stopped before the files are touched, to prevent data-write conflicts.
docker-compose down
- Pack the volumes directory
Pack the volumes directory into an archive.
tar -czvf dify-volumes.tar.gz ./volumes
- Unpack on the new server
After transferring the archive to the target location on the new server (see the transfer sketch below), unpack it.
tar -xzvf dify-volumes.tar.gz
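For the transfer itself, scp or rsync will do; a sketch assuming SSH access to the new machine (hostname and destination path are placeholders):

# Copy the archive to the new server
scp dify-volumes.tar.gz user@new-server:/opt/dify/docker/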
Migrating Docker Named Data Volumes
For PostgreSQL and other key services, Dify persists data in Docker "named data volumes". Such volumes are managed by Docker itself, with their physical files stored under the /var/lib/docker/volumes/ directory, so copying them directly is relatively cumbersome and can run into permission problems.
Taking the oradata and dify_es01_data volumes as examples, the recommended migration methods are described below.
Note: if you deployed with docker-compose -p dify to specify a project name, Docker automatically prefixes the volume names, e.g. dify_oradata.
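Before backing anything up, it is worth confirming the actual volume names on your host, since docker volume ls supports a name filter:

# List volumes whose names contain "dify" to see whether a project prefix was applied
docker volume ls --filter name=dify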
Method 1: Backup and Restore with Temporary Containers (Recommended)
This method requires no root permissions and no knowledge of the volume's physical path on the host; it is a safe, standard practice.
- Back up the data volumes
Create a temporary alpine container, mount both the data volume to be backed up and a local backup directory into it, then run the pack command inside the container.
# Make sure a backup subdirectory exists in the current directory
mkdir -p backup
# Back up the oradata volume
docker run --rm -v oradata:/source -v $(pwd)/backup:/backup alpine sh -c "cd /source && tar czf /backup/oradata.tar.gz ."
# Back up the dify_es01_data volume
docker run --rm -v dify_es01_data:/source -v $(pwd)/backup:/backup alpine sh -c "cd /source && tar czf /backup/es_data.tar.gz ."
- Transfer the backup files
Transfer oradata.tar.gz and es_data.tar.gz from the backup directory to the new server.
- Restore the data on the new server
First create data volumes with the same names on the new server, then restore the data into them in the same fashion.
# Create empty named data volumes on the new server
docker volume create oradata
docker volume create dify_es01_data
# Restore the backup into the oradata volume
docker run --rm -v oradata:/target -v /path/to/backup:/backup alpine sh -c "cd /target && tar xzf /backup/oradata.tar.gz"
# Restore the backup into the dify_es01_data volume
docker run --rm -v dify_es01_data:/target -v /path/to/backup:/backup alpine sh -c "cd /target && tar xzf /backup/es_data.tar.gz"
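After restoring, you can sanity-check a volume's contents with another throwaway container, for example:

# List the restored files inside the oradata volume
docker run --rm -v oradata:/target alpine ls -la /target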
Method 2: Directly Manipulate the Host Volume Directory (Requires root Privileges)
If you have root permissions on the server, you can also back up the volume's physical directory directly.
- Find the volume's physical path
docker volume inspect oradata dify_es01_data
This command outputs each volume's Mountpoint, i.e. its physical storage path.
- Pack the directories directly
# Get the paths and pack them
ORADATA_PATH=$(docker volume inspect -f '{{.Mountpoint}}' oradata)
ES_DATA_PATH=$(docker volume inspect -f '{{.Mountpoint}}' dify_es01_data)
sudo tar -czf oradata_backup.tar.gz -C $ORADATA_PATH .
sudo tar -czf es_data_backup.tar.gz -C $ES_DATA_PATH .
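The restore side of this method is symmetric: create the volume on the new server, look up its Mountpoint, and unpack the archive into it. A sketch, assuming the same volume name:

# Recreate the volume, then unpack the backup into its physical path
docker volume create oradata
ORADATA_PATH=$(docker volume inspect -f '{{.Mountpoint}}' oradata)
sudo tar -xzf oradata_backup.tar.gz -C $ORADATA_PATH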
Advanced Operations: Building and Managing Images from Source
In scenarios involving secondary development, security patching, or offline deployment, you will need to build Dify's Docker images from source yourself.
Building the API and Web Images
Dify's core services are split into two parts, api and web, which must be built separately.
- Build the API image (dify/dify-api)
cd api && docker build . -t dify/dify-api:0.6.14
- Build the web image (dify/dify-web)
cd web && docker build . -t dify/dify-web:0.6.14
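After both builds finish, you can confirm that the images exist with the expected tags:

# Verify the freshly built images
docker images | grep dify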
Exporting and Importing Images (for offline environments)
After building or pulling an image, you can export it as a .tar file and import it on servers that have no access to the external network.
- Export the images
Use the docker save command to package one or more images.
# Export the Dify core images
docker save -o dify_dify_api_0.6.14.tar dify/dify-api:0.6.14
docker save -o dify_dify_web_0.6.14.tar dify/dify-web:0.6.14
# Export the other dependency images
docker save -o postgres_15_alpine.tar postgres:15-alpine
docker save -o redis_6_alpine.tar redis:6-alpine
- Import the images
On the new server, use the docker load command to load the images from the .tar files.
docker load -i dify_dify_api_0.6.14.tar
docker load -i dify_dify_web_0.6.14.tar
docker load -i postgres_15_alpine.tar
docker load -i redis_6_alpine.tar
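If there are many .tar archives to import, a simple loop saves typing (a sketch that assumes all archives sit in the current directory):

# Load every image archive in the current directory
for f in *.tar; do
  docker load -i "$f"
done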
Core Concept Analysis
Understanding the following key concepts will help you maintain and use Dify more effectively.
The docker and docker-legacy directories
The Dify source code contains two deployment directories. docker-legacy is the old deployment method, while the docker directory is the currently recommended and better-structured option. New users should always use the docker directory.
The role of SECRET_KEY
In the dify-api service's .env configuration file, SECRET_KEY is a critical security setting. It is a long random string used to encrypt and sign users' session cookies, preventing sessions from being tampered with. Be sure to set it to a complex value that cannot be guessed.
The storage directory is ignored at build time
When Dify's Dockerfile builds the image, the storage directory is explicitly excluded via the .dockerignore file. This is because the storage directory holds tenant-uploaded files, key pairs, and other sensitive or private data that should not be baked into a general-purpose Docker image; instead, it should be mounted dynamically at runtime via a data volume.
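You can verify this arrangement in the source tree; a sketch (the file locations assume the v0.6.x source layout):

# Check that the storage directory is excluded from the API build context
grep -n "storage" api/.dockerignore
# And that docker-compose mounts storage back in at runtime via a volume
grep -n "storage" docker/docker-compose.yaml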
Anatomy of the docker run --rm backup command
Each component of the backup command template recommended in the data migration section has the following meaning:
docker run --rm -v <volume_to_backup>:/source -v <host_backup_dir>:/backup <image> sh -c "<commands>"
Component | Meaning and Purpose |
---|---|
docker run | Starts a new container. |
--rm | Automatically removes the container after it exits; ideal for one-off tasks, since no useless temporary containers are left behind. |
-v <volume_to_backup>:/source | Mounts the Docker data volume being backed up at /source inside the container. |
-v <host_backup_dir>:/backup | Mounts the host directory that holds the backup files at /backup inside the container. |
<image> | Specifies a lightweight base image such as alpine, which has tar and other common tools built in. |
sh -c "..." | Runs shell commands inside the container, e.g. cd /source && tar czf /backup/backup.tar.gz . , which enters the source data directory and packs its entire contents into the backup directory. |