As an open-source project, Vexa supports local deployment and is well suited to enterprises or individual users with technical skills. The detailed deployment steps and hardware requirements are below:
Deployment Process
- Clone the repository:
git clone https://github.com/Vexa-ai/vexa.git
cd vexa
- Initialize the submodules:
make submodules
- Configure environment variables:
make env
Edit the generated .env file to set the required parameters (e.g. ADMIN_API_TOKEN); a sketch of such a file follows this list.
- Download the Whisper model:
make download-model
- Build the Docker image:
docker build -t vexa-bot:latest -f services/vexa-bot/core/Dockerfile ./services/vexa-bot/core
- Start the services:
docker compose build
docker compose up -d
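For orientation, here is a minimal sketch of what the .env file might contain. Only ADMIN_API_TOKEN is named in this guide; the other keys are hypothetical placeholders, so defer to whatever make env actually generates:

# Sketch of a .env file. Only ADMIN_API_TOKEN is confirmed by this guide;
# the remaining keys are hypothetical examples -- keep the names that
# make env generates.
ADMIN_API_TOKEN=replace-with-a-long-random-secret
# WHISPER_MODEL=...       # hypothetical: which Whisper model to serve
# API_GATEWAY_PORT=8056   # hypothetical: matches the default gateway port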
Hardware Requirements
- Recommended configuration: a server with an NVIDIA GPU (a quick GPU check follows this list)
- Minimum requirements: 16 GB RAM and a 4-core CPU
- Storage: reserve sufficient space for the Whisper models (stored by default in the ./hub directory)
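If you intend to use the recommended GPU setup, it is worth confirming that Docker can actually reach the GPU before starting the stack. This check is not part of the original guide and assumes the NVIDIA Container Toolkit is installed; the CUDA image tag is only an example:

# Should print the nvidia-smi table from inside a container; if it fails,
# install or fix the NVIDIA Container Toolkit first.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi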
Once the services are up, the API gateway listens at http://localhost:8056 and the management interface at http://localhost:8057. To pick up the latest features, periodically run git pull followed by docker compose up -d --build.
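As a quick smoke test after startup, you can probe the gateway port. The guide only states that the gateway listens on 8056; the path below is an assumed placeholder, so consult Vexa's API documentation for real endpoints, and note that protected endpoints will expect the ADMIN_API_TOKEN you configured:

# Expect some HTTP response (even 401/404 proves the gateway is up);
# the root path is an assumption, not a documented endpoint.
curl -i http://localhost:8056/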
This answer is based on the article "Vexa: a real-time meeting transcription and intelligent knowledge extraction tool".