The complete workflow for generating a video is divided into five steps:
- Prepare the instruction file: write explicit instructions (character + action + scene elements) in `./game_demo/instructions.txt`
- Run MLLM reasoning: execute `python inference_MLLM.py --instruction "specific instruction"` to generate an action representation
- Decode the video: run `python inference_Decoder.py` to convert the intermediate representations into video files
- View the output: the generated results are saved in the `./outputs` directory
- Update the status: character status changes are synchronized and recorded in the `state.json` file (a short sketch that chains these five steps appears after this list)
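A minimal sketch of chaining these five steps from Python, using only the script names, the `--instruction` flag, and the paths named above; anything beyond that (for example, that `inference_Decoder.py` picks up the MLLM output automatically and that `./outputs` holds the finished videos) is an assumption, not confirmed behavior:

```python
# Sketch: chain the five workflow steps described above.
# Script names, the --instruction flag, and the ./game_demo, ./outputs,
# and state.json paths come from the article; the decoder hand-off and
# the state.json schema are assumptions.
import json
import subprocess
from pathlib import Path

INSTRUCTION = "Kiki flies over the harbor at sunset on her broom"

def run_workflow(instruction: str) -> None:
    # Step 1: write the explicit instruction to the instruction file.
    Path("./game_demo").mkdir(exist_ok=True)
    Path("./game_demo/instructions.txt").write_text(instruction, encoding="utf-8")

    # Step 2: MLLM reasoning turns the instruction into an action representation.
    subprocess.run(
        ["python", "inference_MLLM.py", "--instruction", instruction],
        check=True,
    )

    # Step 3: decode the intermediate representation into video files.
    subprocess.run(["python", "inference_Decoder.py"], check=True)

    # Step 4: list whatever was written to the output directory.
    for video in sorted(Path("./outputs").glob("*")):
        print("generated:", video)

    # Step 5: inspect the synchronized character status (schema assumed).
    state_file = Path("state.json")
    if state_file.exists():
        state = json.loads(state_file.read_text(encoding="utf-8"))
        print(json.dumps(state, indent=2, ensure_ascii=False))

if __name__ == "__main__":
    run_workflow(INSTRUCTION)
```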
Special Tips:
1. The more detailed the instruction, the more precise the generated result (e.g., "Sousuke drives a purple antique car on the beach at dusk").
2. Character interaction instructions need to specify the relationship between the two parties (e.g., "Kiki patiently teaches Pazu to control the broom"); a short illustrative sketch follows these tips.
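As a purely illustrative sketch of these two tips, the helper below assembles a detailed instruction from character / action / scene parts; the `build_instruction` helper and its field names are hypothetical and not part of AnimeGamer, which only expects the final instruction string:

```python
# Illustrative only: compose instructions that follow the two tips above.
# build_instruction and its parameters are hypothetical; AnimeGamer itself
# just receives the resulting string (e.g. via --instruction).
def build_instruction(character: str, action: str, scene: str = "") -> str:
    """Combine character + action + scene elements into one detailed instruction."""
    parts = [character, action]
    if scene:
        parts.append(scene)
    return " ".join(parts)

# Tip 1: more detail (scene, time of day, object attributes) -> more precise output.
detailed = build_instruction(
    character="Sousuke",
    action="drives a purple antique car",
    scene="on the beach at dusk",
)

# Tip 2: interaction instructions should state the relationship between both parties.
interaction = "Kiki patiently teaches Pazu to control the broom"

print(detailed)
print(interaction)
```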
This answer comes from the article "AnimeGamer: An Open Source Tool for Generating Anime Videos and Character Interactions with Language Commands".