Add dual GPU support with web UI selector
Features:
- Built a custom ROCm container for the AMD RX 6800 GPU
- Added a GPU selection toggle in the web UI (NVIDIA/AMD)
- Unified model names across both GPUs for seamless switching
- Vision model always uses the NVIDIA GPU (optimal performance)
- Text models (llama3.1, darkidol) can use either GPU
- Added /gpu-status and /gpu-select API endpoints
- Implemented GPU state persistence in memory/gpu_state.json

Technical details:
- Multi-stage Dockerfile.llamaswap-rocm with ROCm 6.2.4
- llama.cpp compiled with GGML_HIP=ON for gfx1030 (RX 6800)
- Proper GPU permissions without root (groups 985/989)
- AMD container on port 8091, NVIDIA on port 8090
- Updated bot/utils/llm.py with get_current_gpu_url() and get_vision_gpu_url()
- Modified bot/utils/image_handling.py to always use NVIDIA for vision
- Enhanced the web UI with a GPU selector button (blue = NVIDIA, red = AMD)

Files modified:
- docker-compose.yml (added llama-swap-amd service)
- bot/globals.py (added LLAMA_AMD_URL)
- bot/api.py (added GPU selection endpoints and a helper function)
- bot/utils/llm.py (GPU routing for text models)
- bot/utils/image_handling.py (GPU routing for vision models)
- bot/static/index.html (GPU selector UI)
- llama-swap-rocm-config.yaml (unified model names)

New files:
- Dockerfile.llamaswap-rocm
- bot/memory/gpu_state.json
- bot/utils/gpu_router.py (load balancing utility)
- setup-dual-gpu.sh (setup verification script)
- DUAL_GPU_*.md (documentation files)
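For reference, a minimal sketch of how the routing helpers in bot/utils/llm.py could look. Only the function names, the LLAMA_URL/LLAMA_AMD_URL environment variables, and the memory/gpu_state.json path come from this commit; the JSON schema, the defaults, and the helper names load_selected_gpu/save_selected_gpu are assumptions for illustration:

    # Hypothetical sketch of the bot/utils/llm.py routing helpers.
    # The gpu_state.json schema ({"selected_gpu": "nvidia" | "amd"}) is assumed.
    import json
    import os

    LLAMA_URL = os.environ.get("LLAMA_URL", "http://llama-swap:8080")              # NVIDIA backend
    LLAMA_AMD_URL = os.environ.get("LLAMA_AMD_URL", "http://llama-swap-amd:8080")  # AMD backend
    STATE_FILE = "memory/gpu_state.json"

    def load_selected_gpu() -> str:
        """Read the persisted selection; fall back to NVIDIA if missing or corrupt."""
        try:
            with open(STATE_FILE) as f:
                return json.load(f).get("selected_gpu", "nvidia")
        except (OSError, ValueError, AttributeError):
            return "nvidia"

    def save_selected_gpu(gpu: str) -> None:
        """Persist the web UI selection so it survives bot restarts."""
        with open(STATE_FILE, "w") as f:
            json.dump({"selected_gpu": gpu}, f)

    def get_current_gpu_url() -> str:
        """Text models (llama3.1, darkidol) follow the web UI selection."""
        return LLAMA_AMD_URL if load_selected_gpu() == "amd" else LLAMA_URL

    def get_vision_gpu_url() -> str:
        """The vision model is pinned to the NVIDIA backend."""
        return LLAMA_URL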
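Likewise, a hedged sketch of the /gpu-status and /gpu-select endpoints in bot/api.py. The two paths come from the commit; FastAPI, the query parameter, and the response shapes are assumptions, since the commit does not name the bot's web framework:

    # Hypothetical sketch of bot/api.py's GPU endpoints; only the paths
    # /gpu-status and /gpu-select are confirmed by the commit message.
    from fastapi import FastAPI, HTTPException

    # Hypothetical helpers from the llm.py sketch above, not confirmed repo code.
    from bot.utils.llm import load_selected_gpu, save_selected_gpu

    app = FastAPI()

    @app.get("/gpu-status")
    def gpu_status():
        """Report current routing: text models follow the toggle, vision stays on NVIDIA."""
        return {"selected_gpu": load_selected_gpu(), "vision_gpu": "nvidia"}

    @app.post("/gpu-select")
    def gpu_select(gpu: str):
        """Switch text-model routing; the web UI toggle (blue/red button) would call this."""
        if gpu not in ("nvidia", "amd"):
            raise HTTPException(status_code=400, detail="gpu must be 'nvidia' or 'amd'")
        save_selected_gpu(gpu)  # persist to memory/gpu_state.json across restarts
        return {"selected_gpu": gpu}

Under this shape, the web UI would switch backends with something like POST /gpu-select?gpu=amd and read the current state back from GET /gpu-status.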
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -20,6 +20,35 @@ services:
     environment:
       - NVIDIA_VISIBLE_DEVICES=all
 
+  llama-swap-amd:
+    build:
+      context: .
+      dockerfile: Dockerfile.llamaswap-rocm
+    container_name: llama-swap-amd
+    ports:
+      - "8091:8080" # Map host port 8091 to container port 8080
+    volumes:
+      - ./models:/models # GGUF model files
+      - ./llama-swap-rocm-config.yaml:/app/config.yaml # llama-swap configuration for AMD
+    devices:
+      - /dev/kfd:/dev/kfd
+      - /dev/dri:/dev/dri
+    group_add:
+      - "985" # video group
+      - "989" # render group
+    restart: unless-stopped
+    healthcheck:
+      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
+      interval: 10s
+      timeout: 5s
+      retries: 10
+      start_period: 30s # Give more time for initial model loading
+    environment:
+      - HSA_OVERRIDE_GFX_VERSION=10.3.0 # RX 6800 compatibility
+      - ROCM_PATH=/opt/rocm
+      - HIP_VISIBLE_DEVICES=0 # Use first AMD GPU
+      - GPU_DEVICE_ORDINAL=0
+
   miku-bot:
     build: ./bot
     container_name: miku-bot
@@ -30,9 +59,12 @@ services:
     depends_on:
       llama-swap:
         condition: service_healthy
+      llama-swap-amd:
+        condition: service_healthy
     environment:
       - DISCORD_BOT_TOKEN=MTM0ODAyMjY0Njc3NTc0NjY1MQ.GXsxML.nNCDOplmgNxKgqdgpAomFM2PViX10GjxyuV8uw
       - LLAMA_URL=http://llama-swap:8080
+      - LLAMA_AMD_URL=http://llama-swap-amd:8080 # Secondary AMD GPU endpoint
       - TEXT_MODEL=llama3.1
       - VISION_MODEL=vision
       - OWNER_USER_ID=209381657369772032 # Your Discord user ID for DM analysis reports
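The new bot/utils/gpu_router.py is described only as a "load balancing utility". One plausible shape for it probes each container's /health route (the same route the compose healthcheck above hits) and fails over when the preferred backend is down; the strategy and every name here are guesses, not the repo's actual code:

    # Hypothetical load-balancing helper in the spirit of bot/utils/gpu_router.py.
    import urllib.request

    ENDPOINTS = {
        "nvidia": "http://llama-swap:8080",    # host port 8090
        "amd": "http://llama-swap-amd:8080",   # host port 8091
    }

    def _healthy(base_url: str, timeout: float = 2.0) -> bool:
        """Probe the llama-swap /health route used by the compose healthcheck."""
        try:
            with urllib.request.urlopen(f"{base_url}/health", timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    def route(preferred: str = "nvidia") -> str:
        """Return the preferred backend if healthy, else fail over to the other."""
        other = "amd" if preferred == "nvidia" else "nvidia"
        for name in (preferred, other):
            if _healthy(ENDPOINTS[name]):
                return ENDPOINTS[name]
        raise RuntimeError("no healthy GPU backend available")

Health-based failover keeps the router's notion of availability aligned with the compose file's own healthchecks.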