Deployment is divided into four key steps:
- Environment preparation: install Python 3.8+, Git, system dependencies (e.g. poppler-utils on Linux), and the project's Python dependencies (boto3, pandas, selenium, etc.) via pip.
- Model service configuration: deploying open-source models with vLLM is recommended. For example, starting the Qwen3-8B-CK-Pro service requires specifying parameters such as the model path and parallelism; the service listens on port 8080 by default.
- Browser service startup: running the `run_local.sh` script initializes the Playwright browser service, which listens on port 3001 by default.
- Operation: define the input task file in JSON Lines format, invoke the model and browser services through the main program, and the output is saved to the specified file.
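As a small sketch of the last step: a JSON Lines task file holds one JSON object per line. The field names below (`id`, `question`) and the file name `tasks.jsonl` are assumptions for illustration; the actual schema is defined by the project itself.

```python
import json

# Hypothetical task entries -- the exact schema is not specified here,
# so field names like "id" and "question" are illustrative only.
tasks = [
    {"id": "task-1", "question": "What is the latest release of Playwright?"},
    {"id": "task-2", "question": "Summarize the vLLM documentation homepage."},
]

# JSON Lines format: one complete JSON object per line.
with open("tasks.jsonl", "w", encoding="utf-8") as f:
    for task in tasks:
        f.write(json.dumps(task, ensure_ascii=False) + "\n")

# The main program would then read the file back line by line.
with open("tasks.jsonl", encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f]
print(len(loaded))  # prints 2
```

Each line is parsed independently, which is why JSON Lines is convenient for streaming tasks to a service one at a time.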
Note: Windows users must handle additional dependencies, so Linux/macOS systems are recommended; production environments must be configured with sandbox isolation.
This answer is based on the article "Cognitive Kernel-Pro: a framework for building open-source deep research agents".