
How to deploy Cognitive Kernel-Pro locally?

2025-08-14

Deployment is divided into four key steps:

  1. Environment preparation: Install Python 3.8+ and Git. On Linux/macOS, also install system dependencies such as poppler-utils (via apt-get on Debian/Ubuntu); Windows requires additional configuration.
  2. Model service configuration: Deploy an open-source model with vLLM or TGI; for example, start the Qwen3-8B-CK-Pro model service with the command python -m vllm.entrypoints.openai.api_server.
  3. Browser service startup: Run the run_local.sh script to start the Playwright browser service, which listens on port 3001 by default.
  4. Main program execution: Supply tasks as a JSON Lines input file and point the program at the model and browser service API endpoints via --updates, e.g. --updates "{'web_agent': {'model': {'call_target': 'http://localhost:8080/v1/chat/completions'}}}".
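The four steps above can be sketched as a single launch sequence. This is a minimal sketch under stated assumptions: the model name, port numbers, the main entry point (main.py) and its --input flag are illustrative and may differ from the actual repository.

```shell
# Step 2: serve the model with vLLM's OpenAI-compatible API server.
# Model name and port are assumptions; adjust to your checkpoint.
python -m vllm.entrypoints.openai.api_server \
    --model Qwen3-8B-CK-Pro --port 8080 &

# Step 3: start the Playwright browser service (port 3001 by default).
bash run_local.sh &

# Step 4: run the main program against a JSON Lines task file.
# "main.py" and "--input" are hypothetical names. Note the outer double
# quotes: the whole --updates payload must reach the program as one
# shell argument.
python main.py --input tasks.jsonl \
    --updates "{'web_agent': {'model': {'call_target': 'http://localhost:8080/v1/chat/completions'}}}"
```

Backgrounding the two services with & keeps them running while the main program executes; in a production setup a process supervisor or separate terminals would be more robust.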

The project documentation specifically emphasizes security configuration and suggests disabling privilege escalation with deluser ${USER} sudo.
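As a sketch of that hardening step (run once with root privileges, typically when building the sandbox container; the sudo group name assumes a Debian-style system):

```shell
# Remove the current user from the sudo group so commands executed by
# the agent cannot escalate privileges (Debian/Ubuntu convention).
deluser ${USER} sudo
```

After this, any sudo invocation by the agent's shell tools will fail, which limits the blast radius of a misbehaving automated task.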
