Presenton takes two main approaches to protecting user data privacy:
- Local runtime support: by integrating the open source Ollama models, users can run AI generation tasks in a fully offline environment, avoiding uploading sensitive data to the cloud. Setting the startup parameter `LLM=ollama` enables this mode (see the deployment sketch after this list).
- Key protection mechanism: deployments use the `CAN_CHANGE_KEYS=false` environment variable to lock API keys and prevent unauthorized modification. Local storage path mapping (`-v ./app_data:/app_data`) is also supported, so all generated files are saved only to a user-specified local directory.
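As a rough sketch of how these options fit together at deployment time: only `LLM=ollama`, `CAN_CHANGE_KEYS=false`, and the `./app_data:/app_data` volume mapping come from the description above; the container name, image name, and port mapping below are placeholders, so check the project's README for the exact values.

```bash
# Sketch of a privacy-focused deployment.
# Only LLM=ollama, CAN_CHANGE_KEYS=false, and the ./app_data volume come
# from the text above; the container name, image, and ports are assumptions.
docker run -d --name presenton \
  -p 5000:80 \
  -e LLM=ollama \
  -e CAN_CHANGE_KEYS=false \
  -v ./app_data:/app_data \
  ghcr.io/presenton/presenton:latest
```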
This design is particularly well suited to sensitive content such as corporate financial reports and medical records, in contrast to competing products that require cloud-based processing. Users can also use the `docker logs` command to monitor data-processing logs in real time.
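For example, assuming the container was started with the name `presenton` as in the sketch above, the log stream can be followed live:

```bash
# Follow the container's log output in real time (container name assumed)
docker logs -f presenton
```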
This answer comes from the article "Presenton: an open source AI automatic presentation generation tool".