The following steps are required to configure a local Ollama model:
- Ensure the Ollama service is running (default port 11434; a quick check is sketched after this list)
- Create the configuration file `~/.crush/config.json` and add the following:

  ```json
  {
    "providers": {
      "ollama": {
        "type": "openai",
        "base_url": "http://127.0.0.1:11434/v1",
        "api_key": "ollama"
      }
    }
  }
  ```

- Run `crush model switch ollama` to switch to the Ollama provider
- Run `crush code` to test the code generation feature
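Before switching Crush over, it helps to confirm the service is actually reachable. Ollama exposes a `/api/tags` endpoint that lists locally pulled models, so a plain `curl` serves as a quick check; the model name below is only an example, pull whichever model you intend to use:

```bash
# Start the Ollama service if it is not already running
# (many installs run it as a background service automatically).
ollama serve &

# Confirm the API answers on the default port and list local models.
curl http://127.0.0.1:11434/api/tags

# Pull a code-oriented model to use with Crush (example name).
ollama pull qwen2.5-coder:7b
```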
This configuration allows Crush to run completely offline, which makes it suitable for privacy-sensitive scenarios.
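The `"type": "openai"` setting works because Ollama serves an OpenAI-compatible API under `/v1`; the `api_key` value is just a placeholder that Ollama ignores. As a sketch of what Crush does under the hood, you can call the same endpoint directly (the model name is an assumption, substitute one you have pulled):

```bash
# Exercise Ollama's OpenAI-compatible chat endpoint directly,
# the same interface Crush uses through the "openai" provider type.
curl http://127.0.0.1:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen2.5-coder:7b",
    "messages": [{"role": "user", "content": "Write hello world in Go."}]
  }'
```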
This answer comes from the article "Crush: a terminal AI programming assistant with integrated LSP and multi-model switching".