The old config.yaml nested cheshire_cat and face_detector under the 'services'
key, and the llama URLs under 'services.llama'. But AppConfig expects:
- services -> {url, amd_url} (llama endpoints directly)
- cheshire_cat -> top-level key
- face_detector -> top-level key
Because Pydantic ignores extra fields by default, ServicesConfig received
{llama: {...}, cheshire_cat: {...}, face_detector: {...}}, none of which
matched its 'url'/'amd_url' fields; every service setting in the YAML was
silently dropped and the Pydantic defaults were used instead.
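The failure mode is easy to reproduce with a minimal sketch. The model names match the ones above; the field defaults and the nested dict are illustrative stand-ins for the real defaults and the old config.yaml shape:

```python
from pydantic import BaseModel

class ServicesConfig(BaseModel):
    # Defaults shown here are placeholders for the real ones.
    url: str = "http://llama-swap:8080"
    amd_url: str = "http://llama-swap-amd:8080"

class AppConfig(BaseModel):
    services: ServicesConfig = ServicesConfig()

# Old config.yaml shape: llama endpoints nested one level too deep.
old_yaml_shape = {
    "services": {
        "llama": {"url": "http://some-override:9999"},
        "cheshire_cat": {"url": "http://cheshire-cat:80"},
        "face_detector": {"startup_timeout_seconds": 60},
    }
}

cfg = AppConfig(**old_yaml_shape)
# Pydantic drops the unknown 'llama'/'cheshire_cat'/'face_detector'
# keys without error, so the override never lands and the default wins.
print(cfg.services.url)  # http://llama-swap:8080
```

The override URL never reaches `cfg.services.url`, which is exactly the silent fallback described above.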
Flattened services to contain url/amd_url directly, and moved cheshire_cat
and face_detector to top-level keys matching the AppConfig model. Verified
both AppConfig(**yaml_data) and config_manager dot-path traversal work.
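The dot-path check can be sketched as follows, assuming PyYAML; `get_path` is a hypothetical stand-in for the real config_manager traversal, and the snippet uses a trimmed copy of the fixed config:

```python
import yaml

# Trimmed excerpt of the fixed, flattened config.yaml.
FIXED_YAML = """
services:
  url: http://llama-swap:8080
  amd_url: http://llama-swap-amd:8080
cheshire_cat:
  url: http://cheshire-cat:80
  timeout_seconds: 120
"""

data = yaml.safe_load(FIXED_YAML)

def get_path(d, dotted):
    """Walk a nested dict by a dot-separated path, e.g. 'services.url'."""
    for key in dotted.split("."):
        d = d[key]
    return d

print(get_path(data, "services.url"))               # http://llama-swap:8080
print(get_path(data, "cheshire_cat.timeout_seconds"))  # 120
```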
56 lines
1.4 KiB
YAML
# ============================================
# Miku Discord Bot - Configuration
# ============================================
# This file contains all non-secret configuration
# Secrets (API keys, tokens) go in .env

# Service Endpoints
services:
  url: http://llama-swap:8080
  amd_url: http://llama-swap-amd:8080

cheshire_cat:
  url: http://cheshire-cat:80
  timeout_seconds: 120
  enabled: true  # Set to false to disable Cheshire Cat integration

face_detector:
  startup_timeout_seconds: 60

# AI Models
models:
  text: llama3.1
  vision: vision
  evil: darkidol  # Uncensored model for evil mode
  japanese: swallow  # Llama 3.1 Swallow model for Japanese

# Discord Bot Settings
discord:
  language_mode: english  # Options: english, japanese
  api_port: 3939  # FastAPI server port

# Autonomous System
autonomous:
  debug_mode: false  # Enable detailed decision logging
  # Mood settings can be configured per-server via API

# Voice Chat
voice:
  debug_mode: false  # Enable manual commands and notifications
  # When false (production), voice operates silently

# Memory & Logging
memory:
  log_dir: /app/memory/logs
  conversation_history_length: 5  # Messages to keep per user

# Server Settings
server:
  host: 0.0.0.0
  log_level: critical  # For uvicorn (access logs handled separately)

# GPU Configuration
gpu:
  prefer_amd: false  # Prefer AMD GPU over NVIDIA
  amd_models_enabled: true