Installing Lumina-mGPT-2.0 involves the following steps to set up a working environment:
- Get the code: clone the project repository via git and enter the project directory
git clone https://github.com/Alpha-VLLM/Lumina-mGPT-2.0.git
- Create a virtual environment: use Conda to set up an isolated Python 3.10 environment and avoid dependency conflicts
conda create -n lumina_mgpt_2 python=3.10 -y
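The cloning and environment steps above can be sketched as a single shell session. The commands are taken from the text; activating the environment before installing anything into it is an assumed (standard Conda) step:

```shell
# Clone the repository and enter the project directory
git clone https://github.com/Alpha-VLLM/Lumina-mGPT-2.0.git
cd Lumina-mGPT-2.0

# Create and activate an isolated Python 3.10 environment
conda create -n lumina_mgpt_2 python=3.10 -y
conda activate lumina_mgpt_2
```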
- Install dependencies: run
pip install -r requirements.txt
to install the basic dependencies, in particular the Flash Attention acceleration module
- Download model weights: the 270M MoVQGAN pre-trained weights file must be downloaded separately and placed in the specified directory
- Verify the installation: run the test generation script to confirm the environment is configured correctly; note that generation requires a GPU with more than 40 GB of memory
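Before launching the test script, it can help to confirm the GPU actually meets the 40 GB memory requirement stated above. A small sketch: the helper is pure arithmetic, and the commented usage shows the assumed way to read the device total with torch's CUDA API:

```python
# The 40 GB threshold comes from the text above; the helper only compares
# a byte count against it.
def has_enough_vram(total_bytes: int, required_gb: int = 40) -> bool:
    """Return True if the device's total memory meets the requirement."""
    return total_bytes >= required_gb * 1024**3

# Usage (requires a CUDA build of torch and a visible GPU):
#   import torch
#   total = torch.cuda.get_device_properties(0).total_memory
#   print(has_enough_vram(total))
```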
Important: during installation, pay special attention to compatibility between the CUDA version and torch; we recommend the officially tested combination of CUDA 12 with torch 2.3. If you run into a dependency conflict, you can try installing key components individually with the --no-deps flag.
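The compatibility check described above can be sketched as a small version test: confirm the installed torch/CUDA pair matches the officially tested combination (CUDA 12.x with torch 2.3.x). The function is pure string parsing and needs no GPU; the commented usage shows how it would be fed from a real torch install:

```python
def is_tested_combo(torch_version: str, cuda_version: str) -> bool:
    """Return True for torch 2.3.x built against CUDA 12.x."""
    # torch versions look like "2.3.1" or "2.3.1+cu121"
    torch_major, torch_minor = torch_version.split(".")[:2]
    cuda_major = cuda_version.split(".")[0]
    return (torch_major, torch_minor) == ("2", "3") and cuda_major == "12"

# Usage (requires torch to be installed):
#   import torch
#   print(is_tested_combo(torch.__version__, torch.version.cuda or "0"))
```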
This answer comes from the article "Lumina-mGPT-2.0: An Autoregressive Image Generation Model for Handling Multiple Image Generation Tasks".