The installation and deployment process is divided into three main steps:
- Environment preparation: first clone the official repository via git:

  ```bash
  git clone https://github.com/Wan-Video/Wan2.2.git
  ```

  Then install the dependencies:

  ```bash
  pip install -r requirements.txt
  ```
  Special attention: PyTorch version ≥ 2.4.0 is required, and the flash_attn package may need to be installed separately; a quick check is sketched below.
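  As a sanity check for these requirements, the snippet below verifies the installed PyTorch version and installs flash_attn on its own. The `--no-build-isolation` flag follows the flash-attn project's own install notes; treat it as an assumption for your environment.

  ```bash
  # Confirm the installed PyTorch meets the >= 2.4.0 requirement
  python -c "import torch; print(torch.__version__)"

  # flash_attn is compiled at install time and often fails inside pip's
  # isolated build environment; --no-build-isolation is the commonly
  # recommended workaround in flash-attn's install notes
  pip install flash-attn --no-build-isolation
  ```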
- Model download: two official download channels are available:
  - Hugging Face:

    ```bash
    huggingface-cli download Wan-AI/Wan2.2-S2V-14B
    ```

  - ModelScope:

    ```bash
    modelscope download Wan-AI/Wan2.2-S2V-14B
    ```
  The downloaded model files should be stored in the specified directory (the default is ./Wan2.2-S2V-14B); the sketch below shows downloading directly into it.
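  If you want the files to land directly in that default directory, both CLIs accept a local-directory option. A minimal sketch follows; the exact flag spellings are assumptions to verify against each tool's own documentation.

  ```bash
  # Hugging Face CLI: download straight into the default checkpoint directory
  huggingface-cli download Wan-AI/Wan2.2-S2V-14B --local-dir ./Wan2.2-S2V-14B

  # ModelScope CLI equivalent
  modelscope download Wan-AI/Wan2.2-S2V-14B --local_dir ./Wan2.2-S2V-14B
  ```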
- Operational verification: after the installation completes, test whether the model runs properly with the basic generation command (see the sketch after this list). If you run into insufficient video memory, try the `--offload_model True` parameter, which offloads some model components to the CPU.
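As a concrete illustration of the verification step, here is a minimal single-GPU sketch. The `generate.py` entry point and the `--task`, `--ckpt_dir`, `--image`, and `--audio` flags follow the pattern in the Wan2.2 repository's README, but treat the exact names and the placeholder asset paths as assumptions to check against the repo.

```bash
# Minimal smoke test for speech-to-video generation.
# Flag names and example paths are modeled on the Wan2.2 README; verify locally.
# --offload_model True moves some model components to the CPU when VRAM is tight.
python generate.py \
  --task s2v-14B \
  --ckpt_dir ./Wan2.2-S2V-14B \
  --offload_model True \
  --prompt "a person speaking to the camera" \
  --image examples/reference.jpg \
  --audio examples/speech.wav
```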
For professional users, there is also official support for multi-GPU distributed inference, which requires launching with the torchrun command and the FSDP (Fully Sharded Data Parallel) parameters; a hedged sketch follows.
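A sketch of such a multi-GPU launch is shown below. The `--dit_fsdp` and `--t5_fsdp` flags (sharding the DiT backbone and the T5 text encoder, respectively) follow the convention in the Wan project's README; confirm the flag names and the GPU count against your setup.

```bash
# Distributed inference across 8 GPUs on one node
# (flag names assumed from the Wan README; verify locally)
torchrun --nproc_per_node=8 generate.py \
  --task s2v-14B \
  --ckpt_dir ./Wan2.2-S2V-14B \
  --dit_fsdp \
  --t5_fsdp \
  --prompt "a person speaking to the camera" \
  --image examples/reference.jpg \
  --audio examples/speech.wav
```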
This answer comes from the article "Wan2.2-S2V-14B: Video Generation Model for Speech-Driven Character Mouth Synchronization".