Fast Wan is an online platform that uses artificial intelligence to generate videos. At its core is a series of AI models developed by Alibaba under the name "Tongyi Wanxiang" (Wan), specifically Wan 2.1 and Wan 2.2. These models are open source, meaning anyone can use them for free, including for commercial purposes. The Fast Wan website provides a relatively simple interface for working directly with these models: users enter a text description or upload an image and let the AI create a new video from it. Wan 2.2 is the newer generation and has been upgraded to a "Mixture of Experts" (MoE) architecture, which handles both fine detail and the overall picture better, producing higher-quality video content. It offers better control over the cinematic feel of the image, handles complex object movement, and understands the user's textual requirements more accurately. The platform's goal is to make video creation easier and more efficient, even when running on an average consumer-grade graphics card.
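The "Mixture of Experts" idea mentioned above can be sketched in a few lines: a gate scores several specialist sub-networks for each input, and only the best-scoring ones actually run. The sketch below is purely illustrative, with made-up experts and scoring rules; it is not how Wan's internals actually look.

```python
# Toy Mixture-of-Experts (MoE) sketch: a gate picks which experts run,
# so total model capacity grows without every expert doing work.
# All names here are invented for illustration; they are not Wan internals.

SCORERS = {
    "layout_expert": lambda x: x.count("scene"),    # coarse composition
    "detail_expert": lambda x: x.count("texture"),  # fine detail
}

EXPERTS = {
    "layout_expert": lambda x: f"[layout pass over: {x}]",
    "detail_expert": lambda x: f"[detail pass over: {x}]",
}

def gate_scores(x):
    # Real MoE gates are learned networks; these keyword counts are a stand-in.
    return {name: fn(x) for name, fn in SCORERS.items()}

def moe_forward(x, top_k=1):
    scores = gate_scores(x)
    # Keep only the top_k highest-scoring experts; the rest stay idle.
    chosen = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return [EXPERTS[name](x) for name in chosen]
```

The design point is that adding experts increases what the model *can* do without increasing what it *must* compute per input, which is how Wan 2.2 reportedly improves detail and composition handling at once.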
Function List
- Text-to-Video: Enter a descriptive text and the AI model automatically generates a video from it.
- Image-to-Video: Upload an image as the initial frame and combine it with a text description; the AI uses the image as the basis for a dynamic video that preserves the subject and style of the original.
- Dual Model Selection: The platform provides both the Wan 2.2 and Wan 2.1 models. Users can choose according to their needs; Wan 2.2 is the latest version and performs better in picture quality, motion handling, and detail.
- Cinematic Style Control: The Wan 2.2 model gives users finer control over the visual style of the video, such as adjusting lighting, color, and composition to achieve a film-like look.
- High-Quality Video Output: Supports generating videos at up to 720P or 1080P resolution with high picture clarity.
- Rapid processing: The website is optimized to process user requests quickly and reduce the waiting time required to generate videos.
- Creative Control: Provides a variety of adjustable parameters, giving users more room for customization during video generation.
How to Use
Fast Wan's workflow is very straightforward. Users do not need to install any software; everything runs in the browser on the official website. The whole process has three main steps: select a model, enter a prompt, and generate the video.
Step 1: Visit the website and select a model
First, open https://fast-wan.com/ in your browser.
On the homepage you will see a prominent "Start Creating with Fast Wan" button; click it to enter the creation interface.
At the heart of the creation interface, you choose which AI model to use. The platform offers two main options:
- Wan 2.2: The latest and most capable model, recommended by default. Its videos surpass the older version in smoothness, detail, and understanding of textual instructions.
- Wan 2.1: A classic alternative that may generate slightly faster but does not match version 2.2 in output quality.
If you are after high-quality video, simply pick Wan 2.2 and move on.
Step 2: Select the generation mode and enter your prompt
After selecting the model, you need to determine how the video will be generated. The platform supports two main modes of creation:
Mode 1: Text-to-Video
This is the most basic and commonly used feature. You will see a text input box, usually labeled "Prompt" or similar, where you describe in words exactly what you want the video to show. To help the AI accurately understand your intent, follow these tips:
- Be concrete: Avoid vague words. For example, instead of just writing "a car", write "a red sports car speeding down a rainy city street, neon lights reflecting off the wet ground".
- Describe the dynamics: The video is dynamic, so please clearly describe the movement in the frame. For example, "An astronaut walks on the surface of the moon, slowly raising dust from his feet."
- Specify Style: You can add style words to the description, for example "cinematic quality, close-up, anime style, black-and-white film".
- Composition and Lighting: You can also describe the camera angle and lighting. For example, "A low-angle shot of a magnificent Gothic church, the setting sun's rays filtering through its stained-glass windows."
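The tips above can be folded into a small helper that assembles a prompt from its parts. This is only a convenience sketch: Fast Wan just needs the final text string, and the function name and parameters here are invented for illustration.

```python
# Assemble a prompt from the pieces recommended above:
# concrete subject, explicit motion, style words, and camera/lighting.

def build_prompt(subject, motion, style=None, camera=None):
    """Join the non-empty prompt parts into one comma-separated string."""
    parts = [subject, motion]
    if style:
        parts.append(style)
    if camera:
        parts.append(camera)
    return ", ".join(parts)

prompt = build_prompt(
    subject="a red sports car on a rainy city street",
    motion="speeding past, neon lights reflecting off the wet ground",
    style="cinematic quality, film grain",
    camera="low-angle tracking shot",
)
```

Writing prompts this way keeps each of the four aspects visible, so it is easy to notice when one (usually motion) is missing.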
Mode 2: Image-to-Video
If you wish to create a video based on an existing picture, you can choose this mode. There will be a button to upload an image on the interface.
- Upload a picture: Click on the upload button and select an image from your computer. This image will be used as the starting frame or core reference for the video.
- Enter auxiliary text: As with text-to-video, there is a text input box where you describe how you want the image to "move". The text here mainly directs how the elements in the image will animate. For example, if you upload a picture of a calm lake, you can write: "The lake begins to ripple, a breeze blows, and the leaves gently sway." The AI will try to keep the original style and subject of the picture and add dynamic effects on top of it.
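Conceptually, an image-to-video request pairs a starting image with a motion prompt. Fast Wan does not document a public client API, so the request shape below, including every field name, is an assumption made purely to illustrate the two inputs.

```python
# Hypothetical sketch of an image-to-video request object.
# Field names (model, image_path, prompt) are assumptions, not a real API.

def make_i2v_request(image_path, motion_prompt, model="wan-2.2"):
    return {
        "model": model,
        "image_path": image_path,  # starting frame / core visual reference
        "prompt": motion_prompt,   # describes how the image should move
    }

req = make_i2v_request(
    "lake.jpg",
    "the lake begins to ripple, a breeze blows, and the leaves gently sway",
)
```

The split matters: the image fixes *what* is in the frame, while the prompt should spend its words on *how* things move, not on re-describing the scene.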
Step 3: Adjust parameters and generate
After entering your prompt, the right side or bottom of the interface usually offers some advanced parameter settings that give you finer control over the generated video. Common parameters include:
- Video Length: Set how many seconds you want the generated video to be.
- Aspect Ratio: Select whether the video is landscape (16:9), portrait (9:16) or square (1:1).
- Motion: Adjusts the overall motion intensity of the frame; the higher the value, the more pronounced the dynamic effect.
- Random Seed: A number that determines the randomized result. If you are satisfied with a particular output, note its seed so you can generate future videos in a similar style.
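The parameters above map naturally onto a single request object. Again, Fast Wan publishes no API schema, so the field names and defaults below are hypothetical; the sketch only shows how the settings fit together.

```python
# Hypothetical text-to-video request with the common parameters listed above.
# Field names and defaults are invented for illustration.

def make_t2v_request(prompt, duration_s=5, aspect_ratio="16:9",
                     motion_strength=0.5, seed=None):
    # Validate the aspect ratio against the three options the UI offers.
    if aspect_ratio not in ("16:9", "9:16", "1:1"):
        raise ValueError("unsupported aspect ratio")
    return {
        "model": "wan-2.2",
        "prompt": prompt,
        "duration_s": duration_s,      # video length in seconds
        "aspect_ratio": aspect_ratio,  # landscape / portrait / square
        "motion": motion_strength,     # higher = more pronounced movement
        "seed": seed,                  # fix this value to reproduce a style
    }
```

Calling this twice with the same `seed` and a similar prompt is exactly the workflow the Random Seed parameter enables: reproducibly similar results across generations.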
Once all the settings are complete, click the "Generate" button. The server will start processing your request. Depending on the complexity of the video and the current load on the server, the waiting time can vary from a few seconds to a few minutes. After the video is generated, you can preview it directly on the web page, and if you are satisfied with it, you can download and save it locally.
Application Scenarios
- Social Media Content Creation
Quickly generate eye-catching short videos for platforms such as Douyin, Kuaishou, Instagram, and more. Users just need to input a creative idea or upload an image to produce dynamic visual content, without complicated filming and editing.
- Advertising and Marketing Materials
Small and medium-sized enterprises or individual sellers can produce product promotional videos or commercials at low cost. For example, type in "a sneaker traveling through a futuristic tunnel surrounded by light" to generate a tech-style product demonstration video.
- Art and Creative Expression
Artists, designers, and creative enthusiasts can use this tool to turn abstract concepts or fantasy images in their minds into dynamic video, a new medium for experimental short films or dynamic illustrations.
- Education and Presentation
Use it to create motion graphics for instructional videos. For example, generate a simplified simulation of cell division or of the planets orbiting the sun to visualize complex concepts.
FAQ
- Is Fast Wan free?
The core technology behind Fast Wan, the Wan 2.2 and Wan 2.1 models, is open-sourced by Alibaba under the Apache 2.0 license and is free to use, including for commercial purposes. The Fast Wan website, as a convenience layer on top of the models, may offer free trial credits but may charge in the future for higher-frequency or higher-quality generation.
- What is the difference between the Wan 2.2 and Wan 2.1 models?
Wan 2.2 is an updated version of Wan 2.1 that uses a more advanced "Mixture of Experts" (MoE) architecture, giving it significantly better detail, complex-motion simulation, and aesthetic control than version 2.1. Simply put, it is easier to create high-quality, cinematic videos with Wan 2.2.
- Who owns the copyright to the generated videos?
Because the underlying models are released under an open-source license, commercial rights to videos created through the platform usually belong to the creators themselves. However, it is recommended to read the Fast Wan website's Terms of Service (TOS) for the most accurate copyright information.
- What kind of computer do I need to use Fast Wan?
Fast Wan is an online platform where all video generation calculations are done on cloud servers. Therefore, users do not need a high-performance computer or a professional graphics card, just an ordinary computer or mobile device capable of running modern browsers smoothly and a stable internet connection.