
How to deploy Vexa locally and what hardware is required?

2025-08-24

Vexa is an open-source project that supports local deployment, making it suitable for enterprises or individual users with the necessary technical skills. The detailed deployment steps and hardware requirements are below:

Deployment process

  1. Clone the repository:
    git clone https://github.com/Vexa-ai/vexa.git
    cd vexa
  2. Initialize the submodule:
    make submodules
  3. Configure environment variables:
    make env

    Edit the .env file to set the required parameters (e.g. ADMIN_API_TOKEN).

  4. Download the Whisper model:
    make download-model
  5. Build the Docker image:
    docker build -t vexa-bot:latest -f services/vexa-bot/core/Dockerfile ./services/vexa-bot/core
  6. Start the service:
    docker compose build
    docker compose up -d
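For reference, a minimal sketch of the .env edit in step 3. ADMIN_API_TOKEN is the only variable named by the steps above; the value shown is a placeholder, and any other settings should be taken from the template that make env generates rather than invented here:

```shell
# .env -- sketch only; `make env` generates the actual template.
# Replace the placeholder with a long random secret of your own.
ADMIN_API_TOKEN=replace-with-a-long-random-token

# Keep the remaining variables from the generated template as-is
# unless you know you need to change them (ports, model paths, etc.).
```

A quick way to generate a random token is `openssl rand -hex 32`.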

Hardware requirements

  • Recommended configuration: a server with an NVIDIA GPU
  • Minimum requirements: 16 GB RAM and a 4-core CPU
  • Storage: reserve sufficient space for the Whisper models (stored by default in the ./hub directory)
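A quick pre-flight sketch for checking the minimum requirements above on a Linux host (nproc and /proc/meminfo are Linux-specific assumptions; the GPU check simply looks for nvidia-smi):

```shell
#!/bin/sh
# Check the host against the stated minimums: 16 GB RAM + 4-core CPU.
cores=$(nproc)
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
mem_gb=$((mem_kb / 1024 / 1024))

echo "CPU cores: $cores"
echo "RAM: ${mem_gb} GB"

# Allow a little slack: a "16 GB" machine typically reports ~15 GB usable.
if [ "$cores" -ge 4 ] && [ "$mem_gb" -ge 15 ]; then
  echo "meets minimum requirements"
else
  echo "below minimum requirements"
fi

# Recommended configuration: an NVIDIA GPU, detected via nvidia-smi if present.
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi -L
else
  echo "no NVIDIA GPU detected (CPU-only transcription will be slower)"
fi
```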

Once the services are up, the API gateway listens at http://localhost:8056 and the management interface at http://localhost:8057. Run git pull and docker compose up -d --build periodically to pick up the latest features.
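To confirm the gateway on port 8056 is actually reachable, a one-line liveness probe can help. The port comes from the text above, but the root path and its response code are assumptions; consult the Vexa API documentation for a proper health endpoint:

```shell
#!/bin/sh
# Probe the API gateway and report the HTTP status code.
# curl prints 000 if the connection fails (service not yet up).
code=$(curl -s -o /dev/null -w "%{http_code}" --max-time 5 http://localhost:8056/)
echo "gateway HTTP status: $code"
```

A status of 000 means the containers are still starting or the port mapping is wrong; any 2xx/3xx/4xx code means the gateway process is at least answering.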
