
How to deploy Tabby services locally via Docker?

2025-08-25

Local deployment of Tabby follows a standard process (based on the latest v0.24.0 release):

  1. Environment preparation:
    • Install Docker 20.10 or later
    • For GPU acceleration, install the NVIDIA drivers and CUDA Toolkit (version 11.8 or 12.x recommended)
    • Ensure at least 10 GB of free disk space
  2. Pull the image: run docker pull tabbyml/tabby to fetch the latest image
  3. Start the service: copy the following command and adjust the parameters to match your hardware:
    docker run -it --gpus all -p 8080:8080 -v $HOME/.tabby:/data tabbyml/tabby serve --model TabbyML/StarCoder-1B --device cuda --chat-model Qwen2-1.5B-Instruct
    • Remove --gpus all and change --device cuda to --device cpu for CPU-only operation
    • The first startup takes 5-10 minutes to download the models
  4. Verify the deployment: open http://localhost:8080 in a browser to view the welcome page
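The steps above can also be captured in a Docker Compose file, which is easier to keep under version control. This is a sketch only: the service name is illustrative, the model choices mirror the command above, and the GPU reservation syntax assumes Docker Compose v2 with the NVIDIA container runtime installed:

```yaml
services:
  tabby:
    image: tabbyml/tabby
    command: serve --model TabbyML/StarCoder-1B --device cuda --chat-model Qwen2-1.5B-Instruct
    ports:
      - "8080:8080"          # expose the web UI and API on the host
    volumes:
      - "$HOME/.tabby:/data" # persist models and data across restarts
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia # GPU passthrough; drop this whole section for CPU-only
              count: all
              capabilities: [gpu]
```

Start it with docker compose up -d; for CPU-only hosts, remove the deploy section and change --device cuda to --device cpu in the command line.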

Key notes: data is persisted in the ~/.tabby directory; the --parallelism 4 flag can be added to increase concurrent request handling; deployments on enterprise servers should sit behind a reverse proxy.
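For the reverse-proxy setup mentioned above, a minimal Nginx sketch might look like the following; the domain name and TLS certificate paths are placeholders you must replace with your own:

```nginx
server {
    listen 443 ssl;
    server_name tabby.example.com;                     # placeholder domain

    ssl_certificate     /etc/ssl/certs/tabby.pem;     # placeholder cert paths
    ssl_certificate_key /etc/ssl/private/tabby.key;

    location / {
        # forward all traffic to the Tabby container on the host
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

This terminates TLS at the proxy and forwards plain HTTP to the locally bound Tabby port, so the container itself never needs to be exposed directly.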

