
How do I deploy Sim Studio on my local machine and connect to my local LLM?

2025-08-23

Deployment via Docker is the recommended approach and takes three steps:

  1. Prepare the base environment:
    • Install Docker Desktop and Docker Compose
    • Fetch the latest codebase with git clone
    • Configure the authentication keys and database parameters in the .env file
  2. Integrate Ollama:
    • Modify docker-compose.yml to add an extra_hosts mapping
    • Set OLLAMA_HOST=http://host.docker.internal:11434
    • Pull the desired model with the provided script: ./scripts/ollama_docker.sh pull llama3
  3. Start the services:
    • GPU-accelerated mode: docker compose up --profile local-gpu -d --build
    • CPU-only mode: docker compose up --profile local-cpu -d --build
    • Access the platform at http://localhost:3000
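The steps above can be sketched as a single shell session. This is a minimal sketch, not the project's official install script: the repository URL and the presence of a .env.example template are assumptions, and the host-gateway note applies mainly to Linux hosts.

```shell
# 1. Base environment: clone the code and prepare the .env file
git clone https://github.com/simstudioai/sim.git   # repo URL assumed
cd sim
cp .env.example .env          # assumed template; edit auth keys and DB parameters

# 2. Ollama integration: point the app at the host's Ollama daemon.
#    docker-compose.yml needs an extra_hosts entry such as
#    "host.docker.internal:host-gateway" so the name resolves on Linux.
export OLLAMA_HOST=http://host.docker.internal:11434
./scripts/ollama_docker.sh pull llama3

# 3. Start the services with the profile that matches your hardware
docker compose up --profile local-gpu -d --build     # NVIDIA GPU
# docker compose up --profile local-cpu -d --build   # CPU only
```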

Note: 16 GB or more of RAM is recommended for running a local LLM, and an NVIDIA GPU significantly improves performance. Logs can be viewed live with docker compose logs.
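If the platform starts but model calls fail, it helps to check that Ollama itself is reachable before digging into the application logs. Ollama exposes a REST endpoint listing pulled models; the port below is Ollama's default:

```shell
# Confirm the local Ollama daemon is up and the model was pulled
curl -s http://localhost:11434/api/tags

# Follow the service logs in real time
docker compose logs -f
```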
