
Ai-movie-clip is an open-source intelligent video editing system that uses artificial intelligence to automate the editing process. It performs deep analysis of video frames and content and, according to the user's specific requirements, automatically completes post-production work such as cutting, adding special effects, and transition animations. The project targets developers and content creators who need to batch-process videos or want to integrate AI editing capabilities into their existing workflows. It integrates AI model capabilities from AliCloud DashScope and OpenAI and exposes them through a flexible API. It can be driven directly from the command line or deployed as a web service for easy invocation in different application scenarios, significantly improving the efficiency of video production.

Function List

  • Automatic video analysis: Analyze video content using computer vision (CV) and machine learning (ML) models to identify key frames and subjects.
  • Versatile templates: A variety of built-in video style templates for different scenarios such as social media, commercials, education, etc.
  • AI content generation: Integrated text generation, image generation and speech synthesis (text-to-speech) functions, which can automatically generate voice-overs and text descriptions for videos.
  • Special effects and transitions: Provides a rich library of video effects and transition animations to give edited videos a more professional look.
  • API Services: Provides a FastAPI-based interface, supporting secondary development and batch-processing tasks.
  • MCP Integration: Supports the Model Context Protocol (MCP), giving developers more flexibility to extend and integrate different AI models.
  • Highlight Clips: Automatically recognizes and extracts highlight moments from a video based on viewing data (e.g., an Excel file).

Using Help

Ai-movie-clip is a powerful AI video editing framework. Before using it, you need to complete some environment preparation and configuration; after that, you can invoke its functions via the command line or the API.

Step 1: Environment requirements

Before starting the installation, make sure your system meets the following basic requirements:

  • Python: version 3.8 or later.
  • FFmpeg: the basic tool for handling audio and video; the system calls it to decode, encode, and splice video. You must install it on your operating system and make sure its path is added to the system environment variables, so that the ffmpeg command can be invoked from anywhere.
  • CUDA: if your computer has an NVIDIA graphics card, installing the CUDA toolkit is highly recommended. It lets the program use the GPU for computational acceleration, greatly increasing the speed of video analysis and processing. This is optional; without a GPU, the program falls back to the CPU.
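Before moving on, you can sanity-check these requirements with a short standalone script (a sketch, not part of the project; it only verifies the Python version and that ffmpeg is reachable on PATH):

```python
import shutil
import sys

# Check the Python version requirement (3.8 or later).
version_ok = sys.version_info >= (3, 8)
print("Python version OK:", version_ok)

# Check that the ffmpeg executable is reachable on PATH.
ffmpeg_path = shutil.which("ffmpeg")
print("ffmpeg on PATH:", ffmpeg_path is not None)
```

If the second line prints `False`, revisit your FFmpeg installation and environment variables before continuing.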

Step 2: Installation and Configuration

  1. Cloning Project Code
    Open a terminal or command-line tool and use the git command to clone the project code from GitHub to your local computer.

    git clone https://github.com/LumingMelody/Ai-movie-clip.git
    

    Then go to the project directory:

    cd Ai-movie-clip
    
  2. Installation of dependent libraries
    All of the project's Python dependencies are listed in the requirements.txt file. Use pip to install them in one step.

    pip install -r requirements.txt
    
  3. Configuring Environment Variables
    This is the most critical step. The project needs to call external AI services and cloud storage, so you must provide the relevant API keys and access credentials.
    First, copy the environment variable template file .env.example and rename it to .env:

    cp .env.example .env
    

    Next, open the newly created .env file in a text editor. You will see the following entries, which you need to fill in manually:

    # AI model API keys
    DASHSCOPE_API_KEY=your_dashscope_api_key_here
    OPENAI_API_KEY=your_openai_api_key_here
    # AliCloud Object Storage Service (OSS) configuration
    OSS_ACCESS_KEY_ID=your_oss_access_key_id_here
    OSS_ACCESS_KEY_SECRET=your_oss_access_key_secret_here
    OSS_BUCKET_NAME=your_bucket_name_here
    

    How do I get these keys?

    • DASHSCOPE_API_KEY: This key comes from AliCloud's DashScope service. Visit the AliCloud website, enable the DashScope service, and create an API-KEY in the console. This service drives core AI functions such as video analysis and content generation.
    • OPENAI_API_KEY: This key comes from the OpenAI platform and is mainly used for language-model functions such as text generation. You will need an OpenAI account to create the API key.
    • OSS Configuration: The project uses AliCloud Object Storage Service (OSS) to store video clips during processing and the final generated files. You need to enable the AliCloud OSS service, create a storage bucket, obtain the bucket's access ID (OSS_ACCESS_KEY_ID) and secret (OSS_ACCESS_KEY_SECRET), and fill in the bucket name as OSS_BUCKET_NAME.
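To check that all five variables are filled in before running the program, you can use a small standard-library-only sketch. The key names come from the .env template above; the load_env helper is illustrative only and is not part of the project (which may load the file differently, e.g. via python-dotenv):

```python
from pathlib import Path

def load_env(path=".env"):
    """Minimal .env reader: KEY=value lines; '#' comment lines are skipped."""
    env = {}
    p = Path(path)
    if not p.exists():
        return env
    for line in p.read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

# Keys required by the .env template shown above.
REQUIRED = ["DASHSCOPE_API_KEY", "OPENAI_API_KEY",
            "OSS_ACCESS_KEY_ID", "OSS_ACCESS_KEY_SECRET", "OSS_BUCKET_NAME"]

env = load_env()
missing = [k for k in REQUIRED if not env.get(k)]
if missing:
    print("Missing keys:", ", ".join(missing))
else:
    print("All required keys are set.")
```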

Step 3: Learn how to use

Ai-movie-clip provides two main ways to use it: command line tools and Web API services.

1. Command-line tools (for quick tests and local tasks)
The program provides a main.py script that lets you invoke functions directly from the command line.

  • Analyze the video: Let AI analyze a video file and output the results in JSON format.
    python main.py analyze video.mp4 --output analysis.json
    
  • Automatic video editing: Automatically clip a video to a specified duration using a style template (here "抖音风", i.e. "Douyin style").
    python main.py edit video.mp4 --duration 30 --style "抖音风"
    
  • View all available commands: Use the --help parameter to see all supported commands and options.
    python main.py --help
    
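The commands above can also be scripted for batch work. The sketch below loops the edit command over a folder of clips via subprocess; the videos/ folder name and 30-second duration are example values, and the actual invocation is left commented out so the loop is safe to dry-run:

```python
import subprocess
from pathlib import Path

folder = Path("videos")  # example folder of source clips
videos = sorted(folder.glob("*.mp4")) if folder.is_dir() else []

for video in videos:
    # Build the same CLI invocation shown above, one video at a time.
    cmd = ["python", "main.py", "edit", str(video), "--duration", "30"]
    print("Would run:", " ".join(cmd))
    # Uncomment to actually invoke the CLI; check=True raises on failure.
    # subprocess.run(cmd, check=True)
```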

2. Web API services (for integration and production environments)
If you want to integrate the AI clips feature into your own website or app, you can start the project's built-in API service.

  • Starting the API server: Run app.py to start a FastAPI-based web server. For development and debugging, it is recommended to start it with the uvicorn command so the server restarts automatically when code changes.
    uvicorn app:app --reload
    
  • Access the API documentation: After the server starts, open http://localhost:8000/docs in your browser. You will see an interactive API documentation page (generated by Swagger UI) that details all available API endpoints, parameters, and return formats. You can even test the endpoints directly from this page.
  • Example of calling the API: You can use any programming language to call these APIs. Here is an example using Python's requests library.
    import requests

    # Assume the server is running locally
    base_url = "http://localhost:8000"

    # Example 1: analyze a video
    # Upload a local video file
    with open("video.mp4", "rb") as f:
        response = requests.post(
            f"{base_url}/analyze",
            files={"file": f},
            data={"duration": 30},
        )
    print("Analysis result:", response.json())

    # Example 2: generate an edited video
    # Submit a JSON request specifying the video path and editing parameters
    edit_payload = {
        "video_path": "path/to/video.mp4",  # note: a file path accessible to the server
        "template": "douyin",
        "duration": 30,
    }
    response = requests.post(f"{base_url}/edit", json=edit_payload)
    print("Edit task status:", response.json())
    

Application Scenarios

  1. Social Media Content Automation
    For social media operation teams that publish many short videos daily, Ai-movie-clip can automatically edit long live-stream footage, event recordings, or product introduction videos into short videos matching the style of platforms such as Douyin (TikTok) and Kuaishou, with automatic subtitles and background music, greatly shortening the content production cycle.
  2. Batch first cut of video clips
    When facing a massive amount of raw footage, professional video editors can use this tool for a first round of rough cuts. The AI can quickly filter out complete, stable shots, or generate a basic cut according to a preset script; the editor then refines and creatively reworks it, saving a great deal of repetitive labor.
  3. Developer-integrated video processing capabilities
    For developers who wish to provide video processing capabilities in their own applications (e.g., online education platforms, marketing tools, or cloud photo album services), they can call Ai-movie-clip's services directly through the API. Developers don't need to care about the underlying complex video processing and AI modeling details, they just need to send the video file path and editing requirements to the API to get the final video product.

FAQ

  1. How to handle very large video files?
    The system has a built-in automatic segmentation mechanism. When processing large video files, the program first cuts them into smaller segments for analysis and processing, then merges the results. You can adjust the segment size in the config.yaml configuration file to balance processing speed and memory consumption.
  2. What video formats does the system support?
    The underlying system relies on FFmpeg for video encoding and decoding, so it theoretically supports all common video formats supported by FFmpeg, such as MP4, AVI, MOV, and MKV.
  3. What can I do to make video processing faster?
    The most effective way is GPU acceleration. If your machine has an NVIDIA graphics card and the CUDA environment is properly configured, the system will automatically use the GPU for computationally intensive tasks. Alternatively, you can adjust the number of threads or processes for concurrent processing in the config.yaml configuration file to better utilize multi-core CPU resources.