Add dual GPU support with web UI selector

Features:
- Built custom ROCm container for AMD RX 6800 GPU
- Added GPU selection toggle in web UI (NVIDIA/AMD)
- Unified model names across both GPUs for seamless switching
- Vision model always uses the NVIDIA GPU (keeps the vision model loaded; avoids unload/reload when switching)
- Text models (llama3.1, darkidol) can use either GPU
- Added /gpu-status and /gpu-select API endpoints
- Implemented GPU state persistence in memory/gpu_state.json
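
For reference, the assumed shape of memory/gpu_state.json, inferred from the reader in bot/utils/llm.py (the real file may carry additional fields):

{"current_gpu": "nvidia"}

The value flips to "amd" when the AMD GPU is selected in the web UI.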

Technical details:
- Multi-stage Dockerfile.llamaswap-rocm with ROCm 6.2.4
- llama.cpp compiled with GGML_HIP=ON for gfx1030 (RX 6800)
- Proper GPU permissions without root (groups 187/989)
- AMD container on port 8091, NVIDIA on port 8090 (smoke-test sketch after this list)
- Updated bot/utils/llm.py with get_current_gpu_url() and get_vision_gpu_url()
- Modified bot/utils/image_handling.py to always use NVIDIA for vision
- Enhanced web UI with GPU selector button (blue=NVIDIA, red=AMD)
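
A quick way to verify both containers are up, given the port mappings above. This is a sketch only and not part of the commit; it assumes both llama-swap containers expose an OpenAI-compatible /v1/models route on localhost:8090 and localhost:8091:

import asyncio
import aiohttp

ENDPOINTS = {
    "nvidia": "http://localhost:8090",  # assumed host mapping for the NVIDIA container
    "amd": "http://localhost:8091",     # assumed host mapping for the AMD/ROCm container
}

async def probe(name, base_url):
    # Hit the (assumed) OpenAI-compatible model listing on each backend.
    timeout = aiohttp.ClientTimeout(total=10)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        try:
            async with session.get(f"{base_url}/v1/models") as resp:
                print(f"{name}: HTTP {resp.status}")
        except aiohttp.ClientError as err:
            print(f"{name}: unreachable ({err})")

async def main():
    await asyncio.gather(*(probe(name, url) for name, url in ENDPOINTS.items()))

if __name__ == "__main__":
    asyncio.run(main())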

Files modified:
- docker-compose.yml (added llama-swap-amd service)
- bot/globals.py (added LLAMA_AMD_URL)
- bot/api.py (added GPU selection endpoints and helper function; sketched after this list)
- bot/utils/llm.py (GPU routing for text models)
- bot/utils/image_handling.py (GPU routing for vision models)
- bot/static/index.html (GPU selector UI)
- llama-swap-rocm-config.yaml (unified model names)
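
bot/api.py itself is not shown in this excerpt; below is a minimal sketch of what the two endpoints could look like, assuming a FastAPI-style app and a {"gpu": "..."} request body. The framework, payload shape, and validation in the real file may differ:

import json
import os
from fastapi import FastAPI, HTTPException

app = FastAPI()
GPU_STATE_FILE = os.path.join("memory", "gpu_state.json")  # assumed path

@app.get("/gpu-status")
def gpu_status():
    # Report the currently selected GPU; the default mirrors the text-model routing.
    try:
        with open(GPU_STATE_FILE, "r") as f:
            return json.load(f)
    except FileNotFoundError:
        return {"current_gpu": "nvidia"}

@app.post("/gpu-select")
def gpu_select(body: dict):
    # Persist the selection so bot/utils/llm.py picks it up on the next request.
    gpu = body.get("gpu")
    if gpu not in ("nvidia", "amd"):
        raise HTTPException(status_code=400, detail="gpu must be 'nvidia' or 'amd'")
    with open(GPU_STATE_FILE, "w") as f:
        json.dump({"current_gpu": gpu}, f)
    return {"current_gpu": gpu}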

New files:
- Dockerfile.llamaswap-rocm
- bot/memory/gpu_state.json
- bot/utils/gpu_router.py (load balancing utility; sketched after this list)
- setup-dual-gpu.sh (setup verification script)
- DUAL_GPU_*.md (documentation files)
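
bot/utils/gpu_router.py is also not shown here; a rough sketch of a load-balancing helper over the two endpoints, assuming a simple round-robin policy (class name and behavior are illustrative, not the actual implementation):

import itertools

class GPURouter:
    """Hypothetical round-robin router over the NVIDIA and AMD llama-swap endpoints."""

    def __init__(self, nvidia_url, amd_url):
        self.nvidia_url = nvidia_url
        self._cycle = itertools.cycle([nvidia_url, amd_url])

    def next_text_url(self):
        # Alternate text-model requests between the two backends.
        return next(self._cycle)

    def vision_url(self):
        # Vision stays pinned to the NVIDIA backend, mirroring get_vision_gpu_url().
        return self.nvidia_url

Usage would be something like GPURouter(globals.LLAMA_URL, globals.LLAMA_AMD_URL), with vision requests still routed to the NVIDIA endpoint.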
Commit: 1fc3d74a5b
Parent: ed5994ec78
Date: 2026-01-09 00:03:59 +02:00
21 changed files with 2836 additions and 13 deletions

bot/utils/llm.py

@@ -4,11 +4,38 @@ import aiohttp
 import datetime
 import globals
 import asyncio
+import json
+import os
 from utils.context_manager import get_context_for_response_type, get_complete_context
 from utils.moods import load_mood_description
 from utils.conversation_history import conversation_history
+
+def get_current_gpu_url():
+    """Get the URL for the currently selected GPU for text models"""
+    gpu_state_file = os.path.join(os.path.dirname(__file__), "..", "memory", "gpu_state.json")
+    try:
+        with open(gpu_state_file, "r") as f:
+            state = json.load(f)
+        current_gpu = state.get("current_gpu", "nvidia")
+        if current_gpu == "amd":
+            return globals.LLAMA_AMD_URL
+        else:
+            return globals.LLAMA_URL
+    except Exception as e:
+        print(f"⚠️ GPU state read error: {e}, defaulting to NVIDIA")
+        # Default to NVIDIA if state file doesn't exist
+        return globals.LLAMA_URL
+
+def get_vision_gpu_url():
+    """
+    Get the URL for vision model inference.
+    Strategy: Always use NVIDIA GPU for vision to avoid unloading/reloading.
+    - When NVIDIA is primary: Use NVIDIA for both text and vision
+    - When AMD is primary: Use AMD for text, NVIDIA for vision (keeps vision loaded)
+    """
+    return globals.LLAMA_URL  # Always use NVIDIA for vision
 
 
 def _strip_surrounding_quotes(text):
     """
     Remove surrounding quotes from text if present.
@@ -233,9 +260,13 @@ Please respond in a way that reflects this emotional tone.{pfp_context}"""
     async with aiohttp.ClientSession() as session:
         try:
+            # Get current GPU URL based on user selection
+            llama_url = get_current_gpu_url()
+            print(f"🎮 Using GPU endpoint: {llama_url}")
+
             # Add timeout to prevent hanging indefinitely
             timeout = aiohttp.ClientTimeout(total=300)  # 300 second timeout
-            async with session.post(f"{globals.LLAMA_URL}/v1/chat/completions", json=payload, headers=headers, timeout=timeout) as response:
+            async with session.post(f"{llama_url}/v1/chat/completions", json=payload, headers=headers, timeout=timeout) as response:
                 if response.status == 200:
                     data = await response.json()
                     reply = data.get("choices", [{}])[0].get("message", {}).get("content", "No response.")