Rapid Deployment Guide for Disaster Recovery Environments
Two emergency deployment scenarios are available to meet knowledge-management needs in a crisis:
- Lightweight deployment (30-minute setup):
  - Use a pre-built Docker image:
    docker pull notebookllama/mini
  - Start only the core services:
    - PostgreSQL container (`-core-db` parameter)
    - Text-processing microservices (disable the audio/visualization modules)
  - Configure the minimum API key set (LlamaCloud Basic tier only)
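The lightweight steps above could be expressed as a single Compose file. This is only a sketch: `notebookllama/mini` and the general shape come from the guide, while the service layout, environment variable names, and the Postgres image tag are assumptions.

```yaml
# Hypothetical docker-compose.yml for the 30-minute lightweight deployment.
# Only the "notebookllama/mini" image and the core-service idea come from
# the guide; variable names and the db image are illustrative assumptions.
services:
  app:
    image: notebookllama/mini
    environment:
      - LLAMACLOUD_API_KEY=${LLAMACLOUD_API_KEY}  # Basic-tier key only
      - DISABLE_AUDIO=1            # skip the audio module
      - DISABLE_VISUALIZATION=1    # skip the visualization module
    depends_on:
      - db
  db:
    image: postgres:16             # core database container only
    environment:
      - POSTGRES_DB=notebookllama
```

Keeping audio and visualization disabled avoids pulling their model weights, which is what makes a 30-minute bring-up plausible on a constrained link.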
- Offline deployment:
  - Download a model snapshot ahead of time (about 8.7 GB):
    wget model.mirror/notebookllama.snapshot
  - Use a local LLM instead of cloud APIs:
    - Modify config.py to set the LOCAL_LLM_PATH parameter
    - Recommended to pair with a local inference framework such as llama.cpp
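The config.py change above might look like the following. The `LOCAL_LLM_PATH` parameter name is from the guide; everything else (the fallback path, the backend-selection helper, the environment variable names) is an illustrative assumption, not the project's actual layout.

```python
# Hypothetical excerpt of config.py for offline operation.
# LOCAL_LLM_PATH is named in the guide above; the rest is an assumption.
import os

# Path to the pre-downloaded model snapshot (~8.7 GB)
LOCAL_LLM_PATH = os.environ.get(
    "LOCAL_LLM_PATH", "/opt/models/notebookllama.snapshot"
)

# Prefer local inference when no cloud credentials are present
USE_LOCAL_LLM = "LLAMACLOUD_API_KEY" not in os.environ


def resolve_backend():
    """Return (backend_name, model_path), preferring local inference.

    With llama.cpp as the local framework, the app would load the
    snapshot from LOCAL_LLM_PATH instead of calling a cloud API.
    """
    if USE_LOCAL_LLM:
        return ("llama.cpp", LOCAL_LLM_PATH)
    return ("llamacloud", None)
```

In an air-gapped field deployment the cloud key is simply never set, so the app falls through to the local path without any code changes.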
Practical example: Médecins Sans Frontières used this approach to set up a medical-documentation response system in field hospitals within 72 hours.
This answer comes from the article "NotebookLlama: an open-source document knowledge management and audio generation tool".