Model generation using WhiteLightning is divided into the following key steps:
- Install Docker: verify that Docker is installed on your system by running `docker --version`.
- Pull the image: run `docker pull ghcr.io/inoxoft/whitelightning:latest` to fetch the latest tool image.
- Configure the API key: set the environment variable `OPEN_ROUTER_API_KEY`, which the tool uses to call large language models for data generation (only needed during the training phase).
- Run the container: start model generation with a Docker command, e.g., to categorize customer reviews:
  `docker run --rm -e OPEN_ROUTER_API_KEY=... -v "$(pwd)":/app/models ghcr.io/inoxoft/whitelightning:latest python -m text_classifier.agent -p "classification task description"`
- Verify the output: after about 10 minutes an ONNX model file is generated; check the logs to confirm the training accuracy (e.g., `Accuracy: 1.0000`).
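The run step above can be sketched as a short shell snippet that assembles the `docker run` command first, so the volume mount and prompt can be reviewed before executing. The image name and module path come from the command shown above; the prompt text is an illustrative placeholder, not a value required by the tool:

```shell
# Image and prompt for the run step. Replace PROMPT with your own task description.
IMAGE="ghcr.io/inoxoft/whitelightning:latest"
PROMPT="Classify customer reviews as positive, neutral, or negative"

# Build the command as a string so it can be inspected before running.
# Passing -e OPEN_ROUTER_API_KEY with no value forwards the variable from the
# host environment, which keeps the key itself out of the command line.
CMD="docker run --rm -e OPEN_ROUTER_API_KEY -v \"$(pwd)\":/app/models $IMAGE python -m text_classifier.agent -p \"$PROMPT\""
echo "$CMD"

# To actually run it (requires Docker and a valid key):
#   export OPEN_ROUTER_API_KEY=...   # your key
#   eval "$CMD"
```

The generated ONNX model lands in the current directory because it is mounted as `/app/models` inside the container.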
The whole process requires no manual data preparation; Windows users should pay attention to path formatting in the `-v` volume mount.
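On Windows, the host side of the `-v` mount is the usual stumbling block. The following variants are common idioms for each shell; they are illustrative conventions, not taken from the article:

```shell
# Unix shells / Git Bash: $(pwd) expands to the current directory.
#   docker run --rm -v "$(pwd)":/app/models ghcr.io/inoxoft/whitelightning:latest ...
#
# PowerShell: use ${PWD} instead.
#   docker run --rm -v "${PWD}:/app/models" ghcr.io/inoxoft/whitelightning:latest ...
#
# cmd.exe: use %cd%.
#   docker run --rm -v "%cd%":/app/models ghcr.io/inoxoft/whitelightning:latest ...
#
# Git Bash may rewrite the container-side path (/app/models) into a Windows
# path; setting MSYS_NO_PATHCONV=1, or writing //app/models, disables that
# automatic path conversion.
```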
This answer is based on the article "WhiteLightning: an open source tool for generating lightweight offline text classification models in one click".