Deployment is divided into four key steps:
- Environment preparation: Install Python 3.8+ and Git. On Linux/macOS, install system dependencies such as poppler-utils via apt-get; Windows requires additional configuration.
- Model service configuration: Deploy the open-source model with vLLM or TGI, e.g. start the Qwen3-8B-CK-Pro model service with `python -m vllm.entrypoints.openai.api_server`.
- Browser service startup: Run the `run_local.sh` script to start the Playwright browser service (port 3001 by default).
- Main program execution: Supply tasks via a JSON Lines input file and point the main program at the model and browser service API endpoints, e.g. `--updates "{'web_agent': {'model': {'call_target': 'http://localhost:8080/v1/chat/completions'}}}"`.
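The four steps above can be sketched as a shell sequence. This is a minimal sketch, not the project's official script: the vLLM `--model`/`--port` flags and port 8080 are standard vLLM options, but the requirements file, the `main.py` entry point, and the `tasks.jsonl` file name are assumptions; only `run_local.sh`, port 3001, the model name, and the `--updates` string come from the text.

```shell
# 1. Environment preparation (Linux/macOS; Windows needs extra setup)
sudo apt-get install -y poppler-utils
pip install -r requirements.txt   # hypothetical requirements file

# 2. Model service: serve Qwen3-8B-CK-Pro via vLLM's OpenAI-compatible server
python -m vllm.entrypoints.openai.api_server \
    --model Qwen3-8B-CK-Pro \
    --port 8080 &

# 3. Browser service: Playwright-based service, listens on port 3001 by default
bash run_local.sh &

# 4. Main program: tasks from a JSON Lines file, endpoints injected via --updates
#    (main.py and tasks.jsonl are hypothetical names)
python main.py \
    --input tasks.jsonl \
    --updates "{'web_agent': {'model': {'call_target': 'http://localhost:8080/v1/chat/completions'}}}"
```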
The project documentation specifically emphasizes security configuration and suggests disabling privilege escalation via `deluser ${USER} sudo`.
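As a sketch of that hardening step (the `deluser` command is from the docs; the verification line is an added suggestion, and both require running as root during setup):

```shell
# Remove the current user from the sudo group so the agent process
# cannot escalate privileges (run once, as root, during environment setup)
deluser ${USER} sudo

# Verify: "sudo" should no longer appear in the user's group list
id -nG ${USER}
```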
This answer comes from the article "Cognitive Kernel-Pro: a framework for building open-source deep research agents".