Deployment via Docker is the recommended approach and is accomplished in three steps:
- Prepare the base environment:
  - Install Docker Desktop and Docker Compose
  - Get the latest code base via `git clone` (see the sketch below)
  - Configure the authentication key and database parameters in the `.env` file
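  A minimal sketch of this step; the repository URL and the `.env.example` template name are assumptions, so check the project README for the actual values:

  ```sh
  # Get the latest code base (repository URL assumed)
  git clone https://github.com/simstudioai/sim.git
  cd sim

  # Copy the example environment file, then edit it to set the
  # authentication key and database parameters
  cp .env.example .env
  ```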
- Integrate Ollama:
  - Modify docker-compose.yml to add the `extra_hosts` mapping (see the sketch below)
  - Set `OLLAMA_HOST=http://host.docker.internal:11434`
  - Pull the desired model through the script: `./scripts/ollama_docker.sh pull llama3`
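  A sketch of the docker-compose.yml additions; the service name `simstudio` is an assumption and may differ in the actual compose file:

  ```yaml
  services:
    simstudio:
      extra_hosts:
        # Let the container reach services on the Docker host by name
        - "host.docker.internal:host-gateway"
      environment:
        # Point the app at the Ollama server listening on the host
        - OLLAMA_HOST=http://host.docker.internal:11434
  ```

  The `host-gateway` value is Docker's built-in alias for the host machine, which is what makes `host.docker.internal` resolvable from inside a Linux container.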
- Start the services:
  - GPU-accelerated mode: `docker compose up --profile local-gpu -d --build`
  - CPU-only mode: `docker compose up --profile local-cpu -d --build`
  - Access the platform at `http://localhost:3000` (a quick verification sketch follows)
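  Once the containers are up, a quick way to verify the deployment:

  ```sh
  # Confirm the services are running
  docker compose ps

  # The platform should answer on port 3000 once startup completes
  curl -I http://localhost:3000
  ```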
Note: 16 GB or more of RAM is recommended for running local LLMs, and an NVIDIA GPU can significantly improve performance. Logs can be viewed live via `docker compose logs`.
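A couple of common log-viewing variants (the service name is an assumption):

```sh
# Follow all service logs in real time
docker compose logs -f

# Tail only the last 100 lines of one service's output
docker compose logs -f --tail=100 simstudio
```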
This answer comes from the article "Sim Studio: open source workflow builder for AI agents".