
Story2Board is a training-free AI framework that automatically converts a story written in natural language into a coherent, expressive visual storyboard. Traditional AI image tools often struggle to keep a character's appearance and the scene style consistent across a sequence of images; Story2Board addresses this by keeping the protagonist visually identical from panel to panel while still varying composition, background, and narrative pacing, producing visual stories with a strong cinematic feel. It uses a technique called Latent Panel Anchoring to lock in the character's features and Reciprocal Attention Value Mixing to blend visual elements across panels, which markedly improves the coherence and narrative quality of the storyboard without modifying the underlying AI model. For filmmakers, screenwriters, and content creators, it is a practical tool for quickly visualizing ideas written as text.

Feature List

  • Text-to-storyboard conversion: A large language model (LLM) automatically parses the natural-language story entered by the user into specific prompts for each panel.
  • Character consistency: Latent Panel Anchoring ensures that the same character keeps a consistent appearance and identity across all consecutive panels.
  • Improved scene coherence: Reciprocal Attention Value Mixing (RAVM) gently blends visual features across panels, producing more natural scene transitions and more coherent storytelling (see the illustrative sketch after this list).
  • No model training required: As a training-free framework, Story2Board needs no retraining or fine-tuning of any AI model and can be used directly on top of existing state-of-the-art text-to-image models such as FLUX.1-dev.
  • Flexible scene descriptions: The reference panel and each subsequent panel are described independently, giving the user precise control over the content of every panel, including the character's actions, expressions, and background environment.
  • Reproducible results: The generated images are saved in the output directory together with the exact prompts used to generate them, making it easy to review and reproduce the results.
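
To build some intuition for what Reciprocal Attention Value Mixing does, the sketch below blends attention value vectors between a reference panel and a later panel so that strongly matched regions inherit the reference appearance. This is a simplified, illustrative approximation using made-up tensors (the function name, the blending weight alpha, and the nearest-match rule are assumptions for exposition), not Story2Board's actual implementation.

    import torch

    def mix_attention_values(v_ref, v_cur, attn_cur_to_ref, alpha=0.5):
        # v_ref:           (n_ref, d)    value vectors from the reference panel
        # v_cur:           (n_cur, d)    value vectors from the current panel
        # attn_cur_to_ref: (n_cur, n_ref) attention from current-panel tokens to reference tokens
        best_match = attn_cur_to_ref.argmax(dim=-1)   # most-attended reference token per current token
        matched_ref = v_ref[best_match]               # gather the matching reference value vectors
        # Blend values so regions that attend strongly to the reference keep its appearance
        return (1 - alpha) * v_cur + alpha * matched_ref

    # Toy example with random tensors
    v_ref = torch.randn(16, 64)
    v_cur = torch.randn(20, 64)
    attn = torch.softmax(torch.randn(20, 16), dim=-1)
    print(mix_attention_values(v_ref, v_cur, attn).shape)   # torch.Size([20, 64])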

Usage Guide

Story2Board is a command-line tool: you provide text describing a story, and it generates a series of storyboard panels. The detailed installation and usage procedure follows.

Environment preparation

Before using it, you need to set up the runtime environment. The official recommendation is to use Conda to create a separate Python environment and avoid dependency conflicts with other projects.

  1. Installing Conda
    If you don't already have Conda installed, you can head over to the official Anaconda website to download and install it.
  2. Clone the project repository
    Open a terminal and use the git command to clone Story2Board's code locally.

    git clone https://github.com/DavidDinkevich/Story2Board.git
    
  3. Go to the project directory
    cd Story2Board
    
  4. Create and activate a Conda environment
    Use the following command to create an environment named story2board with Python version 3.12.

    conda create -n story2board python=3.12
    

    After the environment has been created successfully, activate the environment:

    conda activate story2board
    
  5. Install the dependencies
    The dependencies required by the project are listed in the requirements.txt file. Install them with pip.

    pip install -r requirements.txt
    

    Note: If you have an NVIDIA graphics card and want CUDA acceleration, it is recommended to first install a PyTorch build that matches your graphics driver by following the instructions on the PyTorch website, and then run the pip install command above. This ensures that PyTorch's CUDA version is matched correctly.
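
For example, once PyTorch is installed you can quickly confirm that the CUDA build is active before running anything heavy (a minimal check; the exact install command for your CUDA version is listed on the PyTorch website):

    import torch

    # Quick sanity check that PyTorch can see your GPU
    print("PyTorch version:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("GPU:", torch.cuda.get_device_name(0))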

How to use

At the heart of Story2Board is a Python script called main.py. You run it from the command line with a few required arguments describing the story you want to generate.

Description of core parameters

  • --subject: Specifies the main character of the story. This description is critical because it is used to keep the character consistent across all panels. For example, "a smiling boy" or "a fox with shiny fur and beady eyes".
  • --ref_panel_prompt: A description of the reference panel. This is the opening panel of the story and the reference point for the character's appearance in all subsequent panels. The description should include the scene and the character's action.
  • --panel_prompts: Descriptions of the subsequent panels. You can provide one or more descriptions, each corresponding to a new panel. In these descriptions you do not need to repeat the protagonist's detailed characteristics; just describe the new action and the new scene.
  • --output_dir: Specifies the path where the generated images and logs are saved.

Procedure for use

  1. Conceptualize your story
    First, think of a simple story and decide on your main character's appearance. Break the story down into a few key scenes.
  2. Write the command
    Open a terminal and make sure the story2board environment is activated. Then write the command in the following format:

    python main.py --subject "your protagonist description" \
    --ref_panel_prompt "description of the reference panel" \
    --panel_prompts "description of follow-up panel 1" "description of follow-up panel 2" "description of follow-up panel 3" \
    --output_dir "path to the folder where results are saved"
    

concrete example

Let's walk through an officially provided example: a story about a magical fox.

Protagonist: a fox with shimmering fur and glowing eyes

Storyboards:

  1. Reference panel: The fox steps onto a mossy stone path under twilight trees.
  2. Panel 2: The fox bounds across a fallen tree over a mist-covered ravine.
  3. Panel 3: The fox perches atop a broken archway of ancient stone, with vines and silver moss hanging down and the twilight sky glowing behind it.
  4. Panel 4: The fox watches a meteor shower from the edge of a luminous lake that perfectly reflects the stars.

Based on these panels, you can write the following command:

python main.py \
--subject "fox with shimmering fur and glowing eyes" \
--ref_panel_prompt "stepping onto a mossy stone path under twilight trees" \
--panel_prompts "bounding across a fallen tree over a mist-covered ravine glowing faintly with constellations" "perched atop a broken archway of ancient stone, vines and silver moss hanging down, the twilight sky glowing behind him" "watching a meteor shower from the edge of a luminous lake that reflects the stars perfectly" \
--output_dir outputs/magical_fox_story

  3. View the results
    After running the command, the program automatically downloads the required AI models and starts generating images. Depending on your hardware, this may take a while.
    When it finishes, you will find the generated storyboard panels in the outputs/magical_fox_story folder. The first image is the reference panel, and the subsequent images keep the protagonist's appearance while showing different scenes and actions. The folder also contains a log of the exact prompts used to generate each image, making it easy to analyze and reproduce the results (see the inspection sketch below).
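
If you prefer to inspect the results from Python rather than a file browser, a minimal sketch like the following works (it assumes the panels are saved as PNG files in the example's output folder; exact file names and formats may differ between versions):

    from pathlib import Path
    from PIL import Image  # Pillow, commonly available alongside image pipelines

    out_dir = Path("outputs/magical_fox_story")

    # List everything Story2Board wrote, including the prompt log
    for f in sorted(out_dir.iterdir()):
        print(f.name)

    # Open the generated panels for a quick look (assumes PNG output)
    for img_path in sorted(out_dir.glob("*.png")):
        Image.open(img_path).show()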

With this process, you can use Story2Board to quickly visualize any written story as a coherent and expressive storyboard.

Application scenarios

  1. Film and animation pre-production
    Directors and screenwriters can use Story2Board to quickly convert key scenes in a script into visual storyboard panels. This helps the team align early on composition, atmosphere, and character movement, greatly reducing the time and cost of traditional hand-drawn storyboards.
  2. Advertising and marketing content creation
    Advertising creatives can use this tool to quickly generate a series of visuals from an ad script or marketing story for internal proposals or client communication, making it easier to demonstrate the creative concept visually.
  3. Novel & Game Concept Design
    Novel authors or game designers can input a textual description of the storyline to generate concept art of characters or illustrations of key scenes to help readers or development teams better visualize the world in the story.
  4. Education & Presentation
    Teachers and speakers can turn complex narrative content or historical stories into vivid storyboards, making lessons and presentations more engaging and easier to understand.

FAQ

  1. What AI model does Story2Board use?
    Story2Board itself is a training-free framework that can be paired with advanced text-to-image models. According to its official documentation, it currently uses FLUX.1-dev as the default base model (for reference, an illustrative loading sketch appears after this FAQ).
  2. Is there an additional cost to use this tool?
    The Story2Board project itself is open source and free. However, it relies on a large text-to-image model, which requires high-performance hardware (especially the GPU and memory) when run locally. Running it on a cloud platform may incur corresponding compute costs.
  3. Is the character consistency of generated images always guaranteed to be 100%?
    The tool greatly improves character consistency through techniques such as Latent Panel Anchoring, and is far more reliable than ordinary text-to-image tools. However, in very complex or drastically changing scenes, minor inconsistencies may still occur occasionally. Providing a clear, concrete --subject description is key to ensuring consistency.
  4. Do I need programming knowledge to use it?
    You will need some basic command-line knowledge to get it running. The process involves cloning the code repository, installing dependencies, and running a Python script. You do not need to understand the underlying code or algorithms; just follow the steps in this guide.
  5. About how long does it take to generate a split image?
    Generation time depends on your hardware (mainly GPU performance), the image resolution, and the complexity of the story. On a well-equipped consumer graphics card, generating a storyboard of 4-5 panels may take a few minutes.
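
For reference, this is roughly how the default base model, FLUX.1-dev, can be loaded on its own with Hugging Face diffusers. This is an illustrative sketch of the underlying model only; Story2Board drives the model internally, so you normally never need to write this yourself:

    import torch
    from diffusers import FluxPipeline

    # Load FLUX.1-dev (requires accepting the model license on Hugging Face)
    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev",
        torch_dtype=torch.bfloat16,
    )
    pipe.enable_model_cpu_offload()  # lowers VRAM usage on consumer GPUs

    image = pipe(
        "a fox with shimmering fur and glowing eyes on a mossy stone path",
        guidance_scale=3.5,
        num_inference_steps=50,
    ).images[0]
    image.save("fox_panel.png")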