AI Toolkit's layer-specific training feature lets users target and optimize individual parts of the model's structure. The workflow is as follows:
- Edit the configuration file: add the `only_if_contains` parameter under the `network` section, for example:

```yaml
network:
  type: "lora"
  linear: 128
  linear_alpha: 128
  network_kwargs:
    only_if_contains:
      - "transformer.single_transformer_blocks.7.proj_out"
      - "transformer.single_transformer_blocks.20.proj_out"
```

- Select the target layers: the layer names must be known exactly, usually from the model's architecture documentation (see the sketch after these steps for one way to list them). In the example above, the `proj_out` layers of the 7th and 20th single transformer blocks are targeted.
- Start training: run AI Toolkit with the modified configuration file:

```bash
python run.py config/my_config.yml
```

The tool will then update only the weights of the specified layers.
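Since `only_if_contains` matches patterns against module names, it helps to enumerate the candidate names before writing the config. Below is a minimal sketch, assuming the `diffusers` library and access to the FLUX.1-dev weights; the repo id and dtype here are illustrative choices, not AI Toolkit requirements:

```python
import torch
from diffusers import FluxTransformer2DModel

# Load only the transformer component of FLUX.1-dev.
# The repo id is an assumption; it requires accepting the model license.
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
)

# Print every module whose qualified name contains "proj_out",
# the pattern used in the example configuration above.
# Note: AI Toolkit's example patterns carry a leading "transformer."
# prefix that diffusers' own module names omit.
for name, module in transformer.named_modules():
    if "proj_out" in name:
        print(name, type(module).__name__)
```

Since the parameter name suggests substring matching, broader patterns such as `"single_transformer_blocks"` should also be possible to select whole groups of layers at once.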
This feature is particularly suitable for the following scenarios:
- Fixing underperformance in specific layers of the model
- Running comparative experiments to analyze how individual layers affect the output (see the sketch after this list)
- Prioritizing optimization of critical components when resources are limited
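One way to run such a layer-by-layer comparison is to generate one config per candidate pattern and train each variant. The sketch below is a hypothetical driver script: the base config path, the output naming, and the `config.process[0].network` location within the YAML are assumptions about a typical AI Toolkit config layout, so adjust them to match your own file:

```python
import copy
import subprocess
import yaml

# Load a base AI Toolkit config; the path is an assumption.
with open("config/base_config.yml") as f:
    base = yaml.safe_load(f)

# Candidate layer patterns to compare, one training run each.
candidates = [
    "transformer.single_transformer_blocks.7.proj_out",
    "transformer.single_transformer_blocks.20.proj_out",
]

for pattern in candidates:
    cfg = copy.deepcopy(base)
    # Assumes the network section lives at config.process[0].network,
    # as in typical AI Toolkit configs; adjust to your layout.
    network = cfg["config"]["process"][0]["network"]
    network.setdefault("network_kwargs", {})["only_if_contains"] = [pattern]

    # e.g. "config/ablate_7.yml" for block 7's proj_out
    out_path = f"config/ablate_{pattern.split('.')[-2]}.yml"
    with open(out_path, "w") as f:
        yaml.safe_dump(cfg, f)

    subprocess.run(["python", "run.py", out_path], check=True)
```

Comparing the resulting LoRAs on the same prompts and seeds then shows each layer's individual contribution to the output.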
Note: Overly aggressive layer-specific training may degrade the overall coherence of the model, so it is recommended to monitor results against a validation set.
This answer is based on the article *AI Toolkit by Ostris: Stable Diffusion with FLUX.1 Model Training Toolkit*.