Compare commits


22 Commits

Author SHA1 Message Date
9eb081efb1 llama-swap: use pre-built images (:cuda, :rocm) with GPU-specific flags
- Drop custom Dockerfiles; docker-compose uses ghcr.io pre-built images
  which ship llama-swap + llama-server with no pinned versions (always latest)
- NVIDIA GTX 1660 (6GB): add -fit off --no-kv-offload --cache-type-k q4_0 --cache-type-v q4_0
  to fix OOM segfault with new llama.cpp b9014's GPU-side KV cache default
- AMD RX 6800 (16GB): flags unchanged; KV cache stays on GPU for max speed
- Both running llama-swap v211 + llama.cpp b9014 (2026-05-05)
2026-05-05 16:53:34 +03:00
4e28236b06 fix: preserve collapsible subsection state across polling re-renders
- Use stable section IDs (without Date.now()) so collapse state can be
  tracked across re-renders
- Snapshot collapsed state before innerHTML replacement, restore after
- Prevents the 10s polling from expanding all subsections every time
2026-05-02 16:17:26 +03:00
c5e49c73df fix: add cache-busting to prevent stale JS/CSS from breaking the UI
- Added ?v=20260502 query param to all <script src=...> and <link> tags
- Added Cache-Control: no-cache, no-store, must-revalidate to index route
- Added <meta> cache-control tags in HTML head for extra coverage
- This ensures the browser always fetches fresh HTML/JS/CSS after deploy,
  preventing the old loadLastPrompt() from running against new HTML
  (which would crash since #prompt-cat-info no longer exists)
2026-05-02 16:08:47 +03:00
393921e524 fix: add min-height to #prompt-display and placeholder text in clearPromptDisplay()
The empty #prompt-display div collapsed to 0 height, making it appear
'gone'. Added min-height: 3rem and a 'No prompt selected.' placeholder
that clearPromptDisplay() now sets via innerHTML.
2026-05-02 15:55:19 +03:00
2dd32d0ef1 fix: move <pre> outside #prompt-display to prevent innerHTML from destroying it
The renderPromptEntry() function sets innerHTML on #prompt-display, which
was wiping out the child <pre id="last-prompt"> element. This caused
copyPromptToClipboard() to fail silently and the display to appear empty.

Fix: keep <pre> as a hidden sibling outside #prompt-display, used only as
a text buffer for the copy function.
2026-05-02 15:45:54 +03:00
a980b90c0a fix: escape content in buildCollapsibleSection, avoid double-escaping response 2026-05-02 15:27:18 +03:00
6b922d84ae frontend: rewrite Last Prompt as Prompt History viewer
- status.js: replace loadLastPrompt() with loadPromptHistory() + helpers
  - fetch /prompts with optional source filter, populate dropdown
  - selectPromptEntry() renders metadata bar + collapsible subsections
  - parsePromptSections() splits full_prompt into System/Context/Conversation
  - buildCollapsibleSection() with toggle arrows (▼/▶)
  - copyPromptToClipboard() copies raw text
  - toggleMiddleTruncation() truncates response from middle
  - togglePromptHistoryCollapse() collapses entire section
  - legacy loadLastPrompt() delegates to loadPromptHistory()
- core.js: add promptInterval to polling (10s), visibility resume
  - update switchPromptSource() for 'all' filter + new button IDs
  - update initPromptSourceToggle() default to 'all'
  - declare promptInterval variable
2026-05-02 15:25:05 +03:00
f33e2afdf7 frontend: new Prompt History section HTML + CSS
- Replace single <pre> Last Prompt with rich Prompt History viewer
- Add source filter buttons (All/Cat/Fallback), history dropdown selector
- Add metadata bar, copy-to-clipboard button, middle-truncation toggle
- Add collapsible section CSS classes for expandable subsections
2026-05-02 15:19:10 +03:00
87de8f8b3a backend: replace LAST_FULL_PROMPT/LAST_CAT_INTERACTION with unified PROMPT_HISTORY deque
- globals.py: add collections.deque(maxlen=10) PROMPT_HISTORY with _prompt_id_counter
- globals.py: add legacy accessor functions _get_last_fallback_prompt() and _get_last_cat_interaction()
- bot.py: append to PROMPT_HISTORY instead of setting LAST_CAT_INTERACTION, remove 500-char truncation, add guild/channel/model fields
- image_handling.py: same pattern for Cat media responses
- llm.py: append fallback prompts to PROMPT_HISTORY with response filled after LLM reply
- routes/core.py: new GET /prompts and GET /prompts/{id} endpoints, legacy /prompt and /prompt/cat use accessor functions
2026-05-02 15:17:15 +03:00
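The deque-backed history this commit describes can be sketched as follows. This is a minimal stand-alone illustration, assuming the entry shape listed in the commit; `record_prompt` and `last_by_source` are illustrative helper names, not the repo's actual functions:

```python
from collections import deque

PROMPT_HISTORY = deque(maxlen=10)  # oldest entries drop off automatically
_prompt_id_counter = 0

def record_prompt(source, full_prompt, response="", **extra):
    """Append one entry; IDs keep increasing even as old entries expire."""
    global _prompt_id_counter
    _prompt_id_counter += 1
    entry = {"id": _prompt_id_counter, "source": source,
             "full_prompt": full_prompt, "response": response, **extra}
    PROMPT_HISTORY.append(entry)
    return entry

def last_by_source(source):
    """Newest-first scan, mirroring the legacy accessor pattern."""
    for entry in reversed(PROMPT_HISTORY):
        if entry["source"] == source:
            return entry
    return None
```

Because `deque(maxlen=10)` evicts from the left on append, the legacy "last prompt" accessors reduce to a reverse scan with a source filter.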
2d0c80b7ef fix: prevent infinite dialogue loops + make Evil Miku actually engage
- Question override now decays after 6 turns: after turn 6, the LLM's own
  [CONTINUE] signal is respected even when questions are asked. This prevents
  infinite question-ping-pong where both personas keep asking questions.
- _parse_response now accepts turn_count parameter; generate_response_with_continuation
  and handle_dialogue_turn pass it through.
- Rewrote Evil Miku's conversation-mode overlay with explicit CRITICAL RULES:
  ANSWER questions, engage with what she says, ask questions too, don't just
  repeat dismissive one-liners. The old overlay said 'be playful-cruel' but
  didn't actually tell her to participate in the conversation.
2026-04-30 15:39:53 +03:00
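The turn-decay logic above can be sketched as a small decision function. This is a hypothetical reduction of the described behavior, assuming a boolean [CONTINUE] signal and a question flag; `should_continue` is not the repo's actual function name:

```python
def should_continue(llm_signal_continue, asked_question, turn_count,
                    max_override_turns=6):
    """Decide whether the dialogue gets another exchange.

    Early turns: an open question always forces a reply, preventing
    abrupt endings. After the decay point, the LLM's own [CONTINUE]
    signal wins, breaking infinite question-ping-pong.
    """
    if turn_count <= max_override_turns:
        return True if asked_question else llm_signal_continue
    return llm_signal_continue
```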
17842f24d4 fix: remove broken personality snippet system — now redundant
The snippet loader used wrong file paths (/app/cat/data/ instead of persona/)
causing 'Loaded 0 personality snippets' for both personas. Since the previous
commit now injects full system prompts (get_miku_system_prompt_compact and
get_evil_system_prompt) into every argument exchange, the snippet system is
redundant — all lore/lyrics/personality are already provided by the system prompts.
2026-04-30 15:16:43 +03:00
4e064ad89b fix: import is_persona_dialogue_active from correct module
Was importing from utils.bipolar_mode instead of utils.persona_dialogue
2026-04-30 15:10:13 +03:00
97c7133fdc fix: both personas now use full system prompts in arguments and dialogues
Created get_miku_system_prompt() and get_miku_system_prompt_compact() in
context_manager.py — mirrors get_evil_system_prompt() so both personas have
equally rich prompts with lore, lyrics, mood integration, and personality.

Previously only Evil Miku had a proper system prompt function. Regular Miku's
arguments and dialogues used a bare-bones hardcoded prompt with no lore/lyrics
— making arguments feel flat compared to normal conversation.

Changes:
- context_manager.py: added get_miku_system_prompt() (full) and
  get_miku_system_prompt_compact() (lore+personality, no lyrics for tokens)
- bipolar_mode.py: both argument prompt functions now accept system_prompt
  param; run_argument() builds miku_system and evil_system once and passes
  them to every exchange
- persona_dialogue.py: dialogue prompts now use get_miku_system_prompt_compact()
  instead of hardcoded stub, matching Evil Miku's full prompt approach
- Removed redundant hardcoded personality text from argument prompts since
  the system prompts now provide it
2026-04-30 15:07:55 +03:00
7d5881ebe7 fix: inject argument topic into EVERY exchange, not just the first message
The topic was only being injected into the initial breakthrough message via
get_argument_start_prompt(). After that, every subsequent exchange called
get_miku_argument_prompt() / get_evil_argument_prompt() which had no concept
of the topic — so both personas forgot what they were arguing about after the
first exchange and reverted to generic identity-crisis arguments.

Fix: added argument_topic parameter to both persona prompt functions and inject
it as a bold ARGUMENT THEME reminder in every single exchange. The topic block
explicitly tells the LLM to stay on-topic and not drift into generic territory.
2026-04-30 12:57:48 +03:00
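The per-exchange injection can be sketched like this (a hypothetical helper, not the repo's `get_miku_argument_prompt` itself; the theme block wording is illustrative):

```python
def build_argument_prompt(base_prompt, argument_topic):
    """Append a bold ARGUMENT THEME reminder so every exchange,
    not just the opening message, knows what the fight is about."""
    if argument_topic:
        base_prompt += (
            f"\n\n**ARGUMENT THEME: {argument_topic}**\n"
            "Stay on this topic. Do not drift into generic "
            "identity-crisis territory."
        )
    return base_prompt
```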
e6c818f647 fix: merge context + topic into single field — one clear purpose
- Removed separate 'topic' field from BipolarTriggerRequest model
- Removed topic parameter from force_trigger_argument, force_trigger_argument_from_message_id, and run_argument
- trigger_context now doubles as the argument theme: if provided by user, it becomes the topic;
  if blank, a random topic is selected from the rotation pool
- Web UI: replaced two confusing fields (Context + Topic) with one clear field labeled
  'What should they argue about? (optional)' with a plain-English description
- JS: removed topic field reference, context.trim() ensures empty strings aren't sent
2026-04-30 12:30:49 +03:00
846557fa96 feat: add optional custom argument topic override via Web UI
- Added optional 'topic' field to BipolarTriggerRequest model
- Added topic parameter to force_trigger_argument and force_trigger_argument_from_message_id
- Updated run_argument to accept optional custom topic (None=random, ''=no topic, str=custom)
- Added topic input field to Web UI trigger-argument section
- Updated JS to send topic in API request body
- Custom topics bypass the random rotation system, allowing manual theme control
2026-04-30 12:07:28 +03:00
98fca53066 Phase 3: Polish & immersion — mood-aware arguments, personality snippets, parting shots
- Added mood-specific argument behavioral guidance: 9 moods for Evil Miku, 9 for Miku
  Each mood changes argument style (e.g. cunning=chess moves, manic=chaotic, bubbly=playful deflections)
- Added personality snippet injection from Cat plugin lore/lyrics data files
  40% chance per prompt to include a random lore/lyric snippet for unique material
- Added parting shot feature: 20% chance the LOSER gets a bitter final line before the winner's victory
  Adds dramatic tension and prevents clean-win monotony
- Mood guidance and personality flavor injected into both argument prompts
2026-04-30 11:50:37 +03:00
a52b36135f Phase 2: Fix triggers & dialogue — per-channel cooldowns, tension rebalance, user-message triggers
- Changed cooldown from global (ALL channels blocked) to per-channel dict keyed by channel_id
- Added conversation streak tracker: 3 near-miss interjection scores in a row force a dialogue trigger
- Expanded topic relevance keywords: added enthusiasm/vulnerability for Evil Miku, provocation/dismissal for Miku
- Lowered keyword divisor from /3.0 to /2.0 for higher base trigger scores
- Tension rebalance: added natural decay (-0.03/turn), reduced escalation weight (0.08->0.05), increased de-escalation weight (0.06->0.08)
- Reduced momentum multiplier (1.2->1.1) and intensity multiplier (1.3->1.2)
- Added spike cooldown: if last turn tension delta >0.15, next delta halved (prevents runaway spirals)
- Added user-message interjection check in bot.py on_message() (was only checking bot's own messages)
- Added random 15% argument trigger roll on user messages in normal message flow (was only from autonomous.py)
2026-04-30 11:45:13 +03:00
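The tension arithmetic above can be illustrated with a small sketch, omitting the momentum/intensity multipliers; `update_tension` is a hypothetical name and the flag-based inputs are a simplification of the real scoring:

```python
def update_tension(tension, escalating, deescalating, last_delta):
    """One turn of the rebalanced tension update.

    Natural decay pulls toward calm each turn; the spike cooldown
    halves the delta right after a big jump (>0.15) to prevent
    runaway spirals. Result is clamped to [0, 1].
    """
    delta = -0.03           # natural decay per turn
    if escalating:
        delta += 0.05       # reduced from 0.08
    if deescalating:
        delta -= 0.08       # increased from 0.06
    if last_delta > 0.15:
        delta /= 2          # spike cooldown
    return max(0.0, min(1.0, tension + delta)), delta
```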
7a4122fd02 Phase 1: Argument system overhaul — arbiter, memory, topics, stats
- Changed arbiter LLM from llama3.1 to darkidol (uncensored, unbiased)
- Rewrote arbiter criteria to judge debate skill equally
- Added argument history injection (last 6 exchanges) to prevent repetition
- Added dynamic topic rotation system (11 weighted topics) with per-channel history
- Added keyword-based argument stats tracking (wit/composure/impact) fed to arbiter
- Removed hardcoded suggestion lists from prompts
2026-04-30 11:37:33 +03:00
20891179ee fix(twitter): update twscrape monkey patch for JS bundle format change
Twitter changed the JS bundle structure from the old single-map format
(e=>e+"."+{...}[e]+"a.js") to a new two-map format
(u.u=e=>""+(({name})[e]||e)+"."+({hash})[e]+"a.js"), breaking
x-client-transaction-id generation.

This caused IndexError: list index out of range, which twscrape
interpreted as an account timeout (15-min lockout), preventing Miku
from fetching/sharing tweets.

The fix adds:
- A robust multi-pattern parser that tries known formats in order
- The _js_obj_to_dict helper from PR #303 for handling unquoted numeric
  keys and scientific notation in JS object literals
- Debug logging to capture the JS snippet when ALL patterns fail,
  making future breakage easier to diagnose

References:
- https://github.com/vladkens/twscrape/issues/302
- https://github.com/vladkens/twscrape/pull/303
2026-04-29 21:32:27 +03:00
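The unquoted-key problem that the `_js_obj_to_dict` helper from PR #303 addresses can be sketched like this. This is a hedged re-implementation of the idea, not twscrape's actual code; the regex is a simplification that assumes string values without embedded colons:

```python
import json
import re

def js_obj_to_dict(js_literal):
    """Parse a minified JS object literal such as {123:"abc",4e3:"def"}.

    JS allows unquoted keys (including numbers in scientific notation),
    which json.loads rejects. Quoting the keys turns the literal into
    valid JSON.
    """
    quoted = re.sub(r'([{,])\s*([A-Za-z0-9_$.]+)\s*:', r'\1"\2":', js_literal)
    return json.loads(quoted)
```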
694590a620 refactor: Modularize monolithic HTML control panel into organized components
This commit completes a major refactoring of the Miku control panel from a single 7,191-line monolithic HTML file to a modern modular architecture:

CHANGES:
- Extracted 872 lines of CSS into css/style.css
- Created 10 specialized JavaScript modules (4,964 lines total):
  * core.js: Global state, utilities, initialization, polling system
  * servers.js: Server management and mood handling
  * modes.js: Evil mode, GPU selection, bipolar mode, scoreboard
  * actions.js: Autonomous/manual actions, custom prompts, reactions
  * image-gen.js: Image generation system
  * status.js: Status display and statistics
  * dm.js: DM user management and conversation analysis
  * chat.js: LLM chat interface with streaming and voice calls
  * memories.js: Cheshire Cat memory integration (episodic/declarative/procedural)
  * profile.js: Profile picture, album gallery, activities editor
- Cleaned index.html to 1,351 lines (structure only, zero inline JS/CSS)
- Removed 12 duplicate variable declarations
- Maintained strict script load order for dependency resolution
- Added backup comment to index.html.bak for historical reference

VERIFICATION COMPLETED:
✓ All 191 functions/variables from original accounted for
✓ Cross-referenced with backup to ensure nothing lost
✓ All onclick handlers and modal systems validated
✓ No circular dependencies or broken references
✓ HTML structure integrity verified (11 tabs, all buttons/modals intact)
✓ CropperJS CDN links preserved

The refactored code is production-ready with improved maintainability and clear separation of concerns.
2026-04-29 20:56:49 +03:00
6080fe170f Fix all activity system edge cases
Critical fixes:
- Add threading.Lock for all shared mutable state (override, cache, current activity)
- Atomic YAML writes (temp file + os.replace) to prevent corruption on crash
- Deep-copy cache on reads to prevent callers from mutating shared state

High-severity fixes:
- Validate entries in pick_activity_for_mood() — skip/log malformed instead of KeyError
- Log warning on unrecognized activity type fallback
- Normalize empty-string state to None (avoid 'None' display)
- release_manual_override() now uses force=True so bot always shows activity
- Add try/except in release_manual_override() to handle failures gracefully

Medium fixes:
- Remove dead 'test' mood from activities.yaml
- Validate name length (128 char Discord limit) in CRUD and manual set
- Validate streaming entries have URL in CRUD path
- Add JSON parse error handling in API routes
- on_ready preserves active manual override instead of overwriting
- Log override expiry timestamp (HH:MM:SS) for easier debugging
- exc_info=True on presence update errors for full stack traces

Low fixes:
- JS activitySetFromEntry() shows notification on parse error
2026-04-28 00:18:25 +03:00
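The atomic-write pattern from the critical fixes (temp file + `os.replace`) can be sketched as below. Shown with JSON so the sketch stays stdlib-only; the real code serializes YAML instead, and `atomic_write` is an illustrative name:

```python
import json
import os
import tempfile

def atomic_write(path, data):
    """Write via a temp file in the same directory, then os.replace.

    os.replace is atomic on POSIX, so a crash mid-write never leaves
    a half-written file at the target path.
    """
    dir_name = os.path.dirname(os.path.abspath(path)) or "."
    fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".tmp")
    try:
        with os.fdopen(fd, "w") as f:
            json.dump(data, f)
        os.replace(tmp_path, path)  # atomic swap into place
    except BaseException:
        os.unlink(tmp_path)         # clean up the orphaned temp file
        raise
```

The temp file must live on the same filesystem as the target, which is why it is created in the target's own directory rather than the system temp dir.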
30 changed files with 7329 additions and 6276 deletions


@@ -1,13 +0,0 @@
FROM ghcr.io/mostlygeek/llama-swap:cuda
USER root
# Download and install llama-server binary (CUDA version)
# Using the official pre-built binary from llama.cpp releases
ADD --chmod=755 https://github.com/ggml-org/llama.cpp/releases/download/b4183/llama-server-cuda /usr/local/bin/llama-server
# Verify it's executable
RUN llama-server --version || echo "llama-server installed successfully"
USER 1000:1000


@@ -1,68 +0,0 @@
# Multi-stage build for llama-swap with ROCm support
# Now using official llama.cpp ROCm image (PR #18439 merged Dec 29, 2025)
# Stage 1: Build llama-swap UI
FROM node:22-alpine AS ui-builder
WORKDIR /build
# Install git
RUN apk add --no-cache git
# Clone llama-swap
RUN git clone https://github.com/mostlygeek/llama-swap.git
# Build UI (now in ui-svelte directory)
WORKDIR /build/llama-swap/ui-svelte
RUN npm install && npm run build
# Stage 2: Build llama-swap binary
FROM golang:1.23-alpine AS swap-builder
WORKDIR /build
# Install git
RUN apk add --no-cache git
# Copy llama-swap source with built UI
COPY --from=ui-builder /build/llama-swap /build/llama-swap
# Build llama-swap binary
WORKDIR /build/llama-swap
RUN GOTOOLCHAIN=auto go build -o /build/llama-swap-binary .
# Stage 3: Final runtime image using official llama.cpp ROCm image
FROM ghcr.io/ggml-org/llama.cpp:server-rocm
WORKDIR /app
# Copy llama-swap binary from builder
COPY --from=swap-builder /build/llama-swap-binary /app/llama-swap
# Make binaries executable
RUN chmod +x /app/llama-swap
# Add existing ubuntu user (UID 1000) to GPU access groups (using host GIDs)
# GID 187 = render group on host, GID 989 = video/kfd group on host
RUN groupadd -g 187 hostrender && \
groupadd -g 989 hostvideo && \
usermod -aG hostrender,hostvideo ubuntu && \
chown -R ubuntu:ubuntu /app
# Set environment for ROCm (RX 6800 is gfx1030)
ENV HSA_OVERRIDE_GFX_VERSION=10.3.0
ENV ROCM_PATH=/opt/rocm
ENV HIP_VISIBLE_DEVICES=0
USER ubuntu
# Expose port
EXPOSE 8080
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=30s --retries=3 \
CMD curl -f http://localhost:8080/health || exit 1
# Override the base image's ENTRYPOINT and run llama-swap
ENTRYPOINT []
CMD ["/app/llama-swap", "-config", "/app/config.yaml", "-listen", "0.0.0.0:8080"]


@@ -505,10 +505,6 @@ normal:
       name: Gintama
       weight: 1
       state: Comedy Anime
-  test:
-    - type: playing
-      name: G
-      weight: 2
 evil:
   aggressive:
     - type: listening


@@ -138,8 +138,11 @@ async def on_ready():
     # Set initial Discord presence based on current mood
     try:
-        from utils.activities import update_bot_presence
-        if globals.EVIL_MODE:
+        from utils.activities import update_bot_presence, is_manual_override_active
+        # On reconnect, don't overwrite an active manual override
+        if is_manual_override_active():
+            logger.info("Manual override active on ready, preserving it")
+        elif globals.EVIL_MODE:
             await update_bot_presence(globals.EVIL_DM_MOOD, is_evil=True, force=True)
         else:
             await update_bot_presence(globals.DM_MOOD, is_evil=False, force=True)
@@ -200,6 +203,31 @@ async def on_message(message):
     if is_persona_dialogue_active(message.channel.id):
         return
+    # Bipolar mode: check if the opposite persona should interject on user messages
+    # AND roll for random argument trigger (both non-blocking background tasks)
+    if not isinstance(message.channel, discord.DMChannel) and globals.BIPOLAR_MODE:
+        try:
+            from utils.persona_dialogue import check_for_interjection, is_persona_dialogue_active as dialogue_active
+            from utils.bipolar_mode import maybe_trigger_argument, is_argument_in_progress as arg_in_progress
+            from utils.task_tracker import create_tracked_task
+            # Check interjection on user messages (opposite of current active persona)
+            if not message.author.bot or message.webhook_id:
+                current_persona = "evil" if globals.EVIL_MODE else "miku"
+                create_tracked_task(
+                    check_for_interjection(message, current_persona),
+                    task_name="interjection_check_user",
+                )
+            # Roll random argument trigger chance (15%) on eligible messages
+            if not arg_in_progress(message.channel.id) and not dialogue_active(message.channel.id):
+                create_tracked_task(
+                    maybe_trigger_argument(message.channel, globals.client, "Triggered from conversation flow"),
+                    task_name="random_argument_trigger",
+                )
+        except Exception as e:
+            logger.error(f"Error in bipolar trigger checks: {e}")
     if message.content.strip().lower() == "miku, rape this nigga balls" and message.reference:
         async with message.channel.typing():
             # Get replied-to user
@@ -332,15 +360,24 @@ async def on_message(message):
         if globals.EVIL_MODE:
             effective_mood = f"EVIL:{getattr(globals, 'EVIL_DM_MOOD', 'evil_neutral')}"
         logger.info(f"🐱 Cat response for {author_name} (mood: {effective_mood})")
-        # Track Cat interaction for Web UI Last Prompt view
+        # Track Cat interaction in unified prompt history
         import datetime
-        globals.LAST_CAT_INTERACTION = {
+        globals._prompt_id_counter += 1
+        guild_name = message.guild.name if message.guild else "DM"
+        channel_name = message.channel.name if message.guild else "DM"
+        globals.PROMPT_HISTORY.append({
+            "id": globals._prompt_id_counter,
+            "source": "cat",
             "full_prompt": cat_full_prompt,
-            "response": response[:500] if response else "",
+            "response": response if response else "",
             "user": author_name,
             "mood": effective_mood,
+            "guild": guild_name,
+            "channel": channel_name,
             "timestamp": datetime.datetime.now().isoformat(),
-        }
+            "model": "Cat LLM",
+            "response_type": response_type,
+        })
     except Exception as e:
         logger.warning(f"🐱 Cat pipeline error, falling back to query_llama: {e}")
         response = None


@@ -1,6 +1,7 @@
 # globals.py
 import os
 import discord
+from collections import deque
 from apscheduler.schedulers.asyncio import AsyncIOScheduler

 scheduler = AsyncIOScheduler()
@@ -77,16 +78,25 @@ MIKU_NORMAL_AVATAR_URL = None  # Cached CDN URL of the regular Miku pfp (valid e
 BOT_USER = None
-LAST_FULL_PROMPT = ""
+# Unified prompt history (replaces LAST_FULL_PROMPT and LAST_CAT_INTERACTION)
+# Each entry: {id, source, full_prompt, response, user, mood, guild, channel,
+#              timestamp, model, response_type}
+PROMPT_HISTORY = deque(maxlen=10)
+_prompt_id_counter = 0

-# Cheshire Cat last interaction tracking (for Web UI Last Prompt toggle)
-LAST_CAT_INTERACTION = {
-    "full_prompt": "",
-    "response": "",
-    "user": "",
-    "mood": "",
-    "timestamp": "",
-}
+# Legacy accessors for backward compatibility (routes, CLI, etc.)
+# These are computed properties that read from PROMPT_HISTORY
+def _get_last_fallback_prompt():
+    for entry in reversed(PROMPT_HISTORY):
+        if entry.get("source") == "fallback":
+            return entry.get("full_prompt", "")
+    return ""
+
+def _get_last_cat_interaction():
+    for entry in reversed(PROMPT_HISTORY):
+        if entry.get("source") == "cat":
+            return entry
+    return {"full_prompt": "", "response": "", "user": "", "mood": "", "timestamp": ""}

 # Persona Dialogue System (conversations between Miku and Evil Miku)
 LAST_PERSONA_DIALOGUE_TIME = 0  # Timestamp of last dialogue for cooldown


@@ -41,7 +41,11 @@ async def set_mood_activities(section: str, mood: str, request: Request):
     if section not in ("normal", "evil"):
         return JSONResponse(status_code=400, content={"error": "Section must be 'normal' or 'evil'"})

-    data = await request.json()
+    try:
+        data = await request.json()
+    except Exception:
+        return JSONResponse(status_code=400, content={"error": "Invalid JSON body"})

     activities = data.get("activities")
     if activities is None:
@@ -97,12 +101,24 @@ async def set_current_activity(request: Request):
     Body: {"type": "listening"|"playing"|"watching"|"competing"|"streaming",
            "name": "...", "state": "..." (optional), "url": "..." (required for streaming)}
     """
-    data = await request.json()
+    try:
+        data = await request.json()
+    except Exception:
+        return JSONResponse(status_code=400, content={"error": "Invalid JSON body"})

     activity_type = data.get("type", "").lower().strip()
     name = data.get("name", "").strip()
     state = data.get("state") or None
     url = data.get("url") or None

+    # Pre-validate before passing to activity module
+    if not activity_type:
+        return JSONResponse(status_code=400, content={"error": "'type' is required"})
+    if not name:
+        return JSONResponse(status_code=400, content={"error": "'name' is required"})
+    if len(name) > 128:
+        return JSONResponse(status_code=400, content={"error": f"'name' exceeds 128 characters ({len(name)})"})
+
     try:
         from utils.activities import set_activity_manual
         await set_activity_manual(activity_type, name, state=state, url=url)


@@ -148,7 +148,7 @@ def trigger_argument(data: BipolarTriggerRequest):
     if not channel:
         return JSONResponse(status_code=404, content={"status": "error", "message": f"Channel {channel_id} not found"})

-    # Trigger the argument
+    # Trigger the argument — context doubles as the argument theme
     globals.client.loop.create_task(force_trigger_argument(channel, globals.client, data.context))

     return {


@@ -14,7 +14,8 @@ router = APIRouter()
 @router.get("/")
 def read_index():
-    return FileResponse("static/index.html")
+    headers = {"Cache-Control": "no-cache, no-store, must-revalidate"}
+    return FileResponse("static/index.html", headers=headers)

 @router.get("/logs")
@@ -31,18 +32,45 @@ def get_logs():
 @router.get("/prompt")
 def get_last_prompt():
-    return {"prompt": globals.LAST_FULL_PROMPT or "No prompt has been issued yet."}
+    """Legacy endpoint: returns the most recent fallback prompt (backward compat)."""
+    prompt_text = globals._get_last_fallback_prompt()
+    return {"prompt": prompt_text or "No prompt has been issued yet."}

 @router.get("/prompt/cat")
 def get_last_cat_prompt():
-    """Get the last Cheshire Cat interaction (full prompt + response) for Web UI."""
-    interaction = globals.LAST_CAT_INTERACTION
+    """Legacy endpoint: returns the most recent Cat interaction (backward compat)."""
+    interaction = globals._get_last_cat_interaction()
     if not interaction.get("full_prompt"):
-        return {"full_prompt": "No Cheshire Cat interaction has occurred yet.", "response": "", "user": "", "mood": "", "timestamp": ""}
+        return {"full_prompt": "No Cheshire Cat interaction has occurred yet.",
+                "response": "", "user": "", "mood": "", "timestamp": ""}
     return interaction

+@router.get("/prompts")
+def get_prompt_history(source: str = None):
+    """
+    Return the unified prompt history.
+    Optional query param ?source=cat or ?source=fallback to filter.
+    """
+    history = list(globals.PROMPT_HISTORY)
+    if source and source in ("cat", "fallback"):
+        history = [e for e in history if e.get("source") == source]
+    return {"history": history}
+
+@router.get("/prompts/{prompt_id}")
+def get_prompt_by_id(prompt_id: int):
+    """Return a single prompt history entry by ID."""
+    for entry in globals.PROMPT_HISTORY:
+        if entry.get("id") == prompt_id:
+            return entry
+    return JSONResponse(
+        status_code=404,
+        content={"status": "error", "message": f"Prompt #{prompt_id} not found"}
+    )

 @router.get("/status")
 def status():
     # Get per-server mood summary


@@ -45,7 +45,7 @@ class LogFilterUpdateRequest(BaseModel):
 class BipolarTriggerRequest(BaseModel):
     channel_id: str  # String to handle large Discord IDs from JS
     message_id: str = None  # Optional: starting message ID (string)
-    context: str = ""
+    context: str = ""  # Optional: argument theme/context — tells them what to argue about

 class ManualCropRequest(BaseModel):

bot/static/css/style.css (new file, 917 lines)

@@ -0,0 +1,917 @@
body {
margin: 0;
display: flex;
font-family: monospace;
background-color: #121212;
color: #fff;
}
.panel {
width: 60%;
padding: 2rem;
box-sizing: border-box;
}
.logs {
width: 40%;
height: 100vh;
background-color: #000;
color: #0f0;
padding: 1rem;
overflow-y: scroll;
font-size: 0.85rem;
border-left: 2px solid #333;
position: relative;
}
#logs-content {
white-space: pre-wrap;
word-break: break-word;
}
.log-line { line-height: 1.4; }
.log-line.log-error { color: #ff6b6b; }
.log-line.log-warning { color: #ffd93d; }
.log-line.log-info { color: #0f0; }
.log-line.log-debug { color: #888; }
.logs-paused-indicator {
position: sticky;
top: 0;
background: rgba(50, 50, 0, 0.9);
color: #ffd93d;
text-align: center;
padding: 0.25rem;
font-size: 0.75rem;
cursor: pointer;
z-index: 10;
display: none;
}
select, button, input {
margin: 0.4rem 0.5rem 0.4rem 0;
padding: 0.4rem;
background: #333;
color: #fff;
border: 1px solid #555;
}
.section {
margin-bottom: 2rem;
}
pre {
white-space: pre-wrap;
background: #1e1e1e;
padding: 1rem;
border: 1px solid #333;
}
h1, h3 {
color: #61dafb;
}
#notification {
position: fixed;
bottom: 20px;
right: 20px;
background-color: #222;
color: #fff;
padding: 1rem;
border: 1px solid #555;
border-radius: 8px;
opacity: 0.95;
display: none;
z-index: 3000;
font-size: 0.9rem;
transition: opacity 0.3s ease;
}
.server-card {
background: #2a2a2a;
border: 1px solid #444;
border-radius: 8px;
padding: 1rem;
margin-bottom: 1rem;
}
.server-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 1rem;
}
.server-name {
font-size: 1.2rem;
font-weight: bold;
color: #61dafb;
}
.server-actions {
display: flex;
gap: 0.5rem;
}
.feature-tag {
display: inline-block;
background: #444;
padding: 0.2rem 0.5rem;
margin: 0.2rem;
border-radius: 4px;
font-size: 0.8rem;
}
.add-server-form {
background: #1e1e1e;
border: 1px solid #333;
padding: 1rem;
margin: 1rem 0;
border-radius: 8px;
}
.form-row {
display: flex;
gap: 1rem;
margin-bottom: 1rem;
align-items: center;
}
.form-group {
flex: 1;
}
.form-group label {
display: block;
margin-bottom: 0.5rem;
color: #ccc;
}
.checkbox-group {
display: flex;
gap: 1rem;
flex-wrap: wrap;
}
.checkbox-item {
display: flex;
align-items: center;
gap: 0.5rem;
}
.dm-users-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(300px, 1fr));
gap: 1rem;
margin-top: 1rem;
}
.dm-user-card {
background: #2a2a2a;
border: 1px solid #444;
border-radius: 8px;
padding: 1rem;
transition: all 0.3s ease;
}
.dm-user-card:hover {
border-color: #666;
box-shadow: 0 4px 8px rgba(0,0,0,0.3);
}
.dm-user-card h4 {
margin: 0 0 0.5rem 0;
color: #4CAF50;
}
.dm-user-card p {
margin: 0.25rem 0;
font-size: 0.9rem;
}
.dm-user-actions {
margin-top: 1rem;
display: flex;
gap: 0.5rem;
flex-wrap: wrap;
}
/* Blocked Users Styles */
.blocked-users-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(300px, 1fr));
gap: 1rem;
margin-top: 1rem;
}
.blocked-user-card {
background: #3d2a2a;
border: 1px solid #664444;
border-radius: 8px;
padding: 1rem;
transition: all 0.3s ease;
}
.blocked-user-card:hover {
border-color: #886666;
box-shadow: 0 4px 8px rgba(0,0,0,0.3);
}
.blocked-user-card h4 {
margin: 0 0 0.5rem 0;
color: #ff9800;
}
.blocked-user-card p {
margin: 0.25rem 0;
font-size: 0.9rem;
}
.blocked-user-actions {
margin-top: 1rem;
display: flex;
gap: 0.5rem;
flex-wrap: wrap;
}
/* Conversation View Styles */
.message-reactions {
margin-top: 0.5rem;
display: flex;
flex-wrap: wrap;
gap: 0.4rem;
}
.reaction-item {
display: inline-flex;
align-items: center;
gap: 0.3rem;
background: rgba(255,255,255,0.08);
border: 1px solid rgba(255,255,255,0.15);
border-radius: 12px;
padding: 0.2rem 0.5rem;
font-size: 0.85rem;
transition: background 0.2s ease;
}
.reaction-item:hover {
background: rgba(255,255,255,0.12);
}
.reaction-emoji {
font-size: 1rem;
}
.reaction-by {
color: #aaa;
font-size: 0.75rem;
}
.reaction-by.bot-reaction {
color: #61dafb;
}
.reaction-by.user-reaction {
color: #ffa726;
}
.attachment {
margin: 0.25rem 0;
}
.delete-message-btn {
opacity: 0.7;
transition: opacity 0.3s ease;
}
.delete-message-btn:hover {
opacity: 1;
}
.dm-user-actions button {
padding: 0.5rem 0.75rem;
font-size: 0.8rem;
}
.conversation-view {
background: #2a2a2a;
border: 1px solid #444;
border-radius: 8px;
padding: 1rem;
}
.conversations-list {
max-height: 600px;
overflow-y: auto;
margin-top: 1rem;
}
.conversation-message {
background: #333;
border: 1px solid #555;
border-radius: 6px;
padding: 0.75rem;
margin-bottom: 0.75rem;
}
.conversation-message.user-message {
border-left: 4px solid #4CAF50;
}
.conversation-message.bot-message {
border-left: 4px solid #2196F3;
}
.message-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 0.5rem;
font-size: 0.9rem;
}
.sender {
font-weight: bold;
}
.timestamp {
color: #888;
font-size: 0.8rem;
}
.message-content {
margin-bottom: 0.5rem;
line-height: 1.4;
}
.message-attachments {
background: #444;
border-radius: 4px;
padding: 0.5rem;
font-size: 0.9rem;
}
.attachment {
margin: 0.25rem 0;
display: flex;
justify-content: space-between;
align-items: center;
}
.attachment a {
color: #4CAF50;
text-decoration: none;
}
.attachment a:hover {
text-decoration: underline;
}
/* Tab styling */
.tab-container {
margin-bottom: 1rem;
}
.tab-buttons {
display: grid;
grid-template-rows: repeat(2, auto);
grid-auto-flow: column;
grid-auto-columns: max-content;
border-bottom: 2px solid #333;
margin-bottom: 1rem;
overflow-x: auto;
overflow-y: hidden;
scrollbar-width: thin;
scrollbar-color: #555 #222;
row-gap: 0.05rem;
column-gap: 0.1rem;
padding-bottom: 0.1rem;
}
.tab-buttons::-webkit-scrollbar {
height: 8px;
}
.tab-buttons::-webkit-scrollbar-track {
background: #222;
}
.tab-buttons::-webkit-scrollbar-thumb {
background: #555;
border-radius: 4px;
}
.tab-buttons::-webkit-scrollbar-thumb:hover {
background: #666;
}
.tab-button {
background: #222;
color: #ccc;
border: none;
padding: 0.5rem 1rem;
cursor: pointer;
border-bottom: 3px solid transparent;
transition: all 0.3s ease;
white-space: nowrap;
}
.tab-button:hover {
background: #333;
color: #fff;
}
.tab-button.active {
background: #444;
color: #fff;
border-bottom-color: #4CAF50;
}
/* Prompt source toggle buttons */
.prompt-source-btn {
background: #333;
color: #aaa;
}
.prompt-source-btn.active {
background: #4CAF50;
color: #fff;
}
.prompt-source-btn:hover:not(.active) {
background: #444;
color: #ddd;
}
/* Prompt History Section */
#prompt-history-section.collapsed #prompt-history-body {
display: none;
}
#prompt-history-toggle {
user-select: none;
transition: color 0.2s;
}
#prompt-history-toggle:hover {
color: #4CAF50;
}
#prompt-metadata span {
white-space: nowrap;
}
#prompt-metadata .prompt-meta-label {
color: #666;
}
#prompt-metadata .prompt-meta-value {
color: #ccc;
}
#prompt-display pre {
margin: 0;
}
.prompt-subsection-header {
cursor: pointer;
user-select: none;
padding: 0.3rem 0.5rem;
border-radius: 4px;
background: #2a2a2a;
margin: 0.5rem 0 0.25rem 0;
font-size: 0.82rem;
color: #aaa;
transition: background 0.15s;
}
.prompt-subsection-header:hover {
background: #333;
color: #ddd;
}
.prompt-subsection-body.collapsed {
display: none;
}
#prompt-truncate-toggle {
accent-color: #4CAF50;
}
/* Mood Activities Editor */
.act-mood-row {
margin-bottom: 0.5rem;
border: 1px solid #3a3a3a;
border-radius: 4px;
overflow: hidden;
}
.act-mood-header {
cursor: pointer;
user-select: none;
padding: 0.5rem 0.75rem;
background: #2a2a2a;
display: flex;
align-items: center;
gap: 0.5rem;
}
.act-mood-header:hover { background: #333; }
.act-mood-header .act-mood-name { font-weight: bold; min-width: 120px; }
.act-mood-header .act-mood-stats { color: #888; font-size: 0.8rem; }
.act-mood-content { display: none; padding: 0.75rem; background: #1e1e1e; }
.act-entry {
display: flex;
align-items: center;
gap: 0.5rem;
padding: 0.35rem 0;
border-bottom: 1px solid #333;
}
.act-entry:last-child { border-bottom: none; }
.act-entry-icon { font-size: 1.1rem; min-width: 24px; text-align: center; }
.act-entry input[type="text"] { flex: 1; }
.act-entry input[type="number"] { width: 55px; }
.act-entry select { width: 130px; }
.act-toolbar {
display: flex;
gap: 0.5rem;
margin-top: 0.5rem;
padding-top: 0.5rem;
border-top: 1px solid #444;
}
.tab-content {
display: none;
}
.tab-content.active {
display: block;
}
/* Tab loading spinner */
.tab-loading-overlay {
display: flex;
align-items: center;
justify-content: center;
padding: 3rem 1rem;
color: #888;
font-size: 1rem;
gap: 0.75rem;
}
.tab-loading-overlay .spinner {
width: 24px;
height: 24px;
border: 3px solid #444;
border-top-color: #4CAF50;
border-radius: 50%;
animation: spin 0.8s linear infinite;
}
@keyframes spin {
to { transform: rotate(360deg); }
}
/* Chat Interface Styles */
.chat-message {
margin-bottom: 1rem;
padding: 1rem;
border-radius: 8px;
animation: fadeIn 0.3s ease-in;
}
@keyframes fadeIn {
from { opacity: 0; transform: translateY(10px); }
to { opacity: 1; transform: translateY(0); }
}
.chat-message.user-message {
background: #2a3a4a;
border-left: 4px solid #4CAF50;
margin-left: 2rem;
}
.chat-message.assistant-message {
background: #3a2a3a;
border-left: 4px solid #61dafb;
margin-right: 2rem;
}
.chat-message.error-message {
background: #4a2a2a;
border-left: 4px solid #f44336;
}
.chat-message-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 0.5rem;
font-size: 0.9rem;
}
.chat-message-sender {
font-weight: bold;
color: #61dafb;
}
.chat-message.user-message .chat-message-sender {
color: #4CAF50;
}
.chat-message-time {
color: #888;
font-size: 0.8rem;
}
.chat-message-content {
color: #ddd;
line-height: 1.5;
white-space: pre-wrap;
word-wrap: break-word;
}
.chat-typing-indicator {
display: inline-flex;
align-items: center;
gap: 0.3rem;
padding: 0.5rem;
}
.chat-typing-indicator span {
width: 8px;
height: 8px;
background: #61dafb;
border-radius: 50%;
animation: typing 1.4s infinite;
}
.chat-typing-indicator span:nth-child(2) {
animation-delay: 0.2s;
}
.chat-typing-indicator span:nth-child(3) {
animation-delay: 0.4s;
}
@keyframes typing {
0%, 60%, 100% { transform: translateY(0); opacity: 0.7; }
30% { transform: translateY(-10px); opacity: 1; }
}
#chat-messages::-webkit-scrollbar {
width: 8px;
}
#chat-messages::-webkit-scrollbar-track {
background: #1e1e1e;
}
#chat-messages::-webkit-scrollbar-thumb {
background: #555;
border-radius: 4px;
}
#chat-messages::-webkit-scrollbar-thumb:hover {
background: #666;
}
/* Evil Mode Styles */
body.evil-mode h1, body.evil-mode h3 {
color: #ff4444;
}
body.evil-mode .tab-button.active {
border-bottom-color: #ff4444;
}
body.evil-mode #evil-mode-toggle {
background: #ff4444;
border-color: #ff4444;
color: #000;
}
body.evil-mode .server-name {
color: #ff4444;
}
body.evil-mode .chat-message-sender {
color: #ff4444;
}
body.evil-mode .chat-message.assistant-message {
border-left-color: #ff4444;
}
body.evil-mode #notification {
border-color: #ff4444;
}
/* Override any blue status text in evil mode */
body.evil-mode [style*="color: #007bff"],
body.evil-mode [style*="color: rgb(0, 123, 255)"] {
color: #ff4444 !important;
}
/* Bipolar Mode Styles */
#bipolar-section {
transition: all 0.3s ease;
}
#bipolar-section h3 {
margin-top: 0;
}
#bipolar-mode-toggle.bipolar-active {
background: #9932CC !important;
border-color: #9932CC !important;
}
/* Responsive breakpoints */
@media (max-width: 1200px) {
.panel { width: 55%; padding: 1.5rem; }
.logs { width: 45%; }
}
@media (max-width: 1024px) {
body { flex-direction: column; }
.panel { width: 100%; padding: 1.5rem; }
.logs {
width: 100%;
height: 300px;
border-left: none;
border-top: 2px solid #333;
}
}
@media (max-width: 768px) {
.panel { padding: 1rem; }
.tab-buttons {
grid-template-rows: none;
grid-auto-flow: row;
grid-template-columns: repeat(auto-fill, minmax(130px, 1fr));
}
.tab-button { font-size: 0.85rem; padding: 0.4rem 0.6rem; }
}
@media (max-width: 480px) {
.panel { padding: 0.5rem; }
.tab-buttons { grid-template-columns: 1fr 1fr; }
.tab-button { font-size: 0.8rem; padding: 0.35rem 0.5rem; }
h1 { font-size: 1.2rem; }
}
/* Profile Picture Tab Styles */
.pfp-preview-container {
display: flex;
gap: 2rem;
margin: 1.5rem 0;
align-items: flex-start;
flex-wrap: wrap;
}
.pfp-preview-box {
text-align: center;
}
.pfp-preview-box img {
max-width: 400px;
max-height: 400px;
border: 2px solid #444;
border-radius: 8px;
background: #1e1e1e;
}
.pfp-preview-box .label {
display: block;
margin-bottom: 0.5rem;
color: #aaa;
font-size: 0.9rem;
}
.pfp-crop-container {
max-width: 100%;
max-height: 550px;
background: #111;
border: 2px solid #555;
border-radius: 8px;
overflow: hidden;
margin: 1rem 0;
}
.pfp-crop-container img {
display: block;
max-width: 100%;
}
.crop-mode-toggle {
display: flex;
gap: 1.5rem;
margin: 1rem 0;
align-items: center;
}
.crop-mode-toggle label {
display: flex;
align-items: center;
gap: 0.4rem;
cursor: pointer;
color: #ccc;
}
.crop-mode-toggle input[type="radio"] {
accent-color: #4CAF50;
}
.pfp-description-editor {
width: 100%;
min-height: 120px;
background: #1e1e1e;
color: #ddd;
border: 1px solid #444;
border-radius: 4px;
padding: 0.75rem;
font-family: monospace;
font-size: 0.9rem;
resize: vertical;
}
.pfp-description-editor:focus {
border-color: #61dafb;
outline: none;
}
/* Album / Gallery grid */
.album-section {
margin: 1.5rem 0;
padding: 1rem;
background: #1a1a2e;
border: 1px solid #444;
border-radius: 8px;
}
.album-header {
display: flex;
justify-content: space-between;
align-items: center;
cursor: pointer;
user-select: none;
}
.album-header h4 { margin: 0; }
.album-toolbar {
display: flex;
gap: 0.5rem;
flex-wrap: wrap;
align-items: center;
margin: 0.75rem 0;
}
.album-grid {
display: grid;
grid-template-columns: repeat(auto-fill, minmax(120px, 1fr));
gap: 0.75rem;
max-height: 480px;
overflow-y: auto;
padding: 0.25rem;
}
.album-card {
position: relative;
border: 2px solid #444;
border-radius: 6px;
overflow: hidden;
cursor: pointer;
transition: border-color 0.15s, box-shadow 0.15s;
background: #111;
}
.album-card:hover { border-color: #61dafb; }
.album-card.selected { border-color: #4CAF50; box-shadow: 0 0 8px rgba(76,175,80,0.4); }
.album-card.checked { border-color: #ff9800; }
.album-card img {
width: 100%;
aspect-ratio: 1;
object-fit: cover;
display: block;
}
.album-card .album-check {
position: absolute;
top: 4px;
left: 4px;
z-index: 2;
accent-color: #ff9800;
}
.album-card .album-card-info {
position: absolute;
bottom: 0;
left: 0;
right: 0;
background: rgba(0,0,0,0.7);
padding: 2px 4px;
font-size: 0.7rem;
color: #ccc;
white-space: nowrap;
overflow: hidden;
text-overflow: ellipsis;
}
.album-card .color-dot {
display: inline-block;
width: 10px;
height: 10px;
border-radius: 50%;
border: 1px solid #888;
vertical-align: middle;
margin-right: 3px;
}
.album-detail {
margin-top: 1rem;
padding: 1rem;
background: #222;
border: 1px solid #555;
border-radius: 8px;
}
.album-detail-previews {
display: flex;
gap: 1.5rem;
flex-wrap: wrap;
align-items: flex-start;
margin: 1rem 0;
}
.album-detail-previews .pfp-preview-box img {
max-width: 300px;
max-height: 300px;
}
.album-disk-usage {
font-size: 0.8rem;
color: #888;
margin-left: auto;
}

File diff suppressed because it is too large

bot/static/js/actions.js (new file, 432 lines)

@@ -0,0 +1,432 @@
// ============================================================================
// Miku Control Panel — Actions Module
// Autonomous actions, manual actions, custom prompts, reactions
// ============================================================================
// ===== Autonomous Actions =====
async function triggerAutonomous(actionType) {
const selectedServer = document.getElementById('server-select').value;
if (!actionType) {
showNotification('No action type specified', 'error');
return;
}
try {
let endpoint = `/autonomous/${actionType}`;
if (selectedServer !== 'all') {
endpoint += `?guild_id=${selectedServer}`;
}
const result = await apiCall(endpoint, 'POST');
showNotification(result.message || 'Action triggered successfully');
} catch (error) {
console.error('Failed to trigger autonomous action:', error);
}
}
function toggleEngageSubmenu() {
const submenu = document.getElementById('engage-submenu');
submenu.style.display = submenu.style.display === 'none' ? 'block' : 'none';
}
async function triggerEngageUser() {
const selectedServer = document.getElementById('server-select').value;
const userId = document.getElementById('engage-user-id').value.trim();
const engageType = document.querySelector('input[name="engage-type"]:checked').value;
try {
let endpoint = '/autonomous/engage';
const params = new URLSearchParams();
if (selectedServer !== 'all') {
params.append('guild_id', selectedServer);
}
if (userId) {
params.append('user_id', userId);
}
if (engageType !== 'random') {
params.append('engagement_type', engageType);
}
params.append('manual_trigger', 'true');
if (params.toString()) {
endpoint += `?${params.toString()}`;
}
const result = await apiCall(endpoint, 'POST');
showNotification(result.message || 'Engagement triggered successfully');
} catch (error) {
console.error('Failed to trigger user engagement:', error);
}
}
function toggleTweetSubmenu() {
const submenu = document.getElementById('tweet-submenu');
submenu.style.display = submenu.style.display === 'none' ? 'block' : 'none';
}
async function triggerShareTweet() {
const selectedServer = document.getElementById('server-select').value;
const tweetUrl = document.getElementById('tweet-url').value.trim();
if (tweetUrl) {
const validDomains = ['x.com', 'twitter.com', 'fxtwitter.com'];
let isValid = false;
try {
const urlObj = new URL(tweetUrl);
const hostname = urlObj.hostname.toLowerCase();
isValid = validDomains.some(domain => hostname === domain || hostname.endsWith('.' + domain));
} catch (e) { /* unparseable URL: leave isValid false */ }
if (!isValid) {
showNotification('Invalid tweet URL. Must be from x.com, twitter.com, or fxtwitter.com', 'error');
return;
}
}
try {
let endpoint = '/autonomous/tweet';
const params = new URLSearchParams();
if (selectedServer !== 'all') {
params.append('guild_id', selectedServer);
}
if (tweetUrl) {
params.append('tweet_url', tweetUrl);
}
if (params.toString()) {
endpoint += `?${params.toString()}`;
}
const result = await apiCall(endpoint, 'POST');
showNotification(result.message || 'Tweet share triggered successfully');
} catch (error) {
console.error('Failed to trigger tweet share:', error);
}
}
// ===== Manual Actions =====
async function forceSleep() {
try {
await apiCall('/sleep', 'POST');
showNotification('Miku is now sleeping');
} catch (error) {
console.error('Failed to force sleep:', error);
}
}
async function wakeUp() {
try {
await apiCall('/wake', 'POST');
showNotification('Miku is now awake');
} catch (error) {
console.error('Failed to wake up:', error);
}
}
async function sendBedtime() {
const selectedServer = document.getElementById('manual-server-select').value;
console.log('🛏️ sendBedtime() called');
console.log('🛏️ Selected server value:', selectedServer);
try {
let endpoint = '/bedtime';
if (selectedServer !== 'all') {
console.log('🛏️ Using guild_id (as string):', selectedServer);
endpoint += `?guild_id=${selectedServer}`;
}
console.log('🛏️ Final endpoint:', endpoint);
const result = await apiCall(endpoint, 'POST');
showNotification(result.message || 'Bedtime reminder sent successfully');
} catch (error) {
console.error('Failed to send bedtime reminder:', error);
}
}
async function resetConversation() {
const userId = prompt('Enter user ID to reset conversation for:');
if (userId) {
try {
await apiCall('/conversation/reset', 'POST', { user_id: userId });
showNotification('Conversation reset');
} catch (error) {
console.error('Failed to reset conversation:', error);
}
}
}
// ===== Manual Message =====
async function sendManualMessage() {
const message = document.getElementById('manualMessage').value.trim();
const files = document.getElementById('manualAttachment').files;
const targetType = document.getElementById('manual-target-type').value;
const replyMessageId = document.getElementById('manualReplyMessageId').value.trim();
const replyMention = document.querySelector('input[name="manualReplyMention"]:checked').value === 'true';
const useWebhook = document.getElementById('manual-use-webhook').checked;
const webhookPersona = document.querySelector('input[name="webhook-persona"]:checked')?.value || 'miku';
if (!message) {
showNotification('Please enter a message', 'error');
return;
}
if (useWebhook && targetType === 'dm') {
showNotification('Webhooks only work in channels, not DMs', 'error');
return;
}
let targetId, endpoint;
if (targetType === 'dm') {
targetId = document.getElementById('manualUserId').value.trim();
if (!targetId) {
showNotification('Please enter a user ID for DM', 'error');
return;
}
endpoint = `/dm/${targetId}/manual`;
} else {
targetId = document.getElementById('manualChannelId').value.trim();
if (!targetId) {
showNotification('Please enter a channel ID', 'error');
return;
}
endpoint = useWebhook ? '/manual/send-webhook' : '/manual/send';
}
try {
const formData = new FormData();
formData.append('message', message);
if (useWebhook) {
formData.append('persona', webhookPersona);
}
if (replyMessageId) {
formData.append('reply_to_message_id', replyMessageId);
formData.append('mention_author', replyMention);
}
if (targetType === 'dm') {
if (files.length > 0) {
for (let i = 0; i < files.length; i++) {
formData.append('files', files[i]);
}
}
} else {
formData.append('channel_id', targetId);
if (files.length > 0) {
for (let i = 0; i < files.length; i++) {
formData.append('files', files[i]);
}
}
}
const response = await fetch(endpoint, {
method: 'POST',
body: formData
});
const result = await response.json();
if (response.ok) {
showNotification('Message sent successfully');
document.getElementById('manualMessage').value = '';
document.getElementById('manualAttachment').value = '';
document.getElementById('manualReplyMessageId').value = '';
if (targetType === 'dm') {
document.getElementById('manualUserId').value = '';
} else {
document.getElementById('manualChannelId').value = '';
}
document.getElementById('manualStatus').textContent = '✅ Message sent successfully!';
document.getElementById('manualStatus').style.color = 'green';
} else {
throw new Error(result.message || 'Failed to send message');
}
} catch (error) {
console.error('Failed to send manual message:', error);
showNotification(error.message || 'Failed to send message', 'error');
document.getElementById('manualStatus').textContent = '❌ Failed to send message';
document.getElementById('manualStatus').style.color = 'red';
}
}
// ===== Custom Prompt =====
function toggleCustomPromptTarget() {
const targetType = document.getElementById('custom-prompt-target-type').value;
const serverSection = document.getElementById('custom-prompt-server-section');
const dmSection = document.getElementById('custom-prompt-dm-section');
if (targetType === 'dm') {
serverSection.style.display = 'none';
dmSection.style.display = 'inline';
} else {
serverSection.style.display = 'inline';
dmSection.style.display = 'none';
}
}
function toggleWebhookOptions() {
const useWebhook = document.getElementById('manual-use-webhook').checked;
const webhookOptions = document.getElementById('webhook-persona-options');
const targetType = document.getElementById('manual-target-type');
if (useWebhook) {
webhookOptions.style.display = 'block';
if (targetType.value === 'dm') {
targetType.value = 'channel';
toggleManualMessageTarget();
}
targetType.options[1].disabled = true; // option index 1 is the DM entry
} else {
webhookOptions.style.display = 'none';
targetType.options[1].disabled = false;
}
}
function toggleManualMessageTarget() {
const targetType = document.getElementById('manual-target-type').value;
const channelSection = document.getElementById('manual-channel-section');
const dmSection = document.getElementById('manual-dm-section');
if (targetType === 'dm') {
channelSection.style.display = 'none';
dmSection.style.display = 'block';
} else {
channelSection.style.display = 'block';
dmSection.style.display = 'none';
}
}
async function sendCustomPrompt() {
const prompt = document.getElementById('customPrompt').value.trim();
const targetType = document.getElementById('custom-prompt-target-type').value;
const files = document.getElementById('customPromptAttachment').files;
if (!prompt) {
showNotification('Please enter a custom prompt', 'error');
return;
}
try {
let endpoint;
if (targetType === 'dm') {
const userId = document.getElementById('custom-prompt-user-id').value.trim();
if (!userId) {
showNotification('Please enter a user ID for DM', 'error');
return;
}
endpoint = `/dm/${userId}/custom`;
} else {
const selectedServer = document.getElementById('custom-prompt-server-select').value;
endpoint = '/autonomous/custom';
if (selectedServer !== 'all') {
endpoint += `?guild_id=${selectedServer}`;
}
}
const result = await apiCall(endpoint, 'POST', { prompt: prompt });
showNotification(result.message || 'Custom prompt sent successfully');
document.getElementById('customPrompt').value = '';
document.getElementById('customPromptAttachment').value = '';
if (targetType === 'dm') {
document.getElementById('custom-prompt-user-id').value = '';
}
document.getElementById('customStatus').textContent = '✅ Custom prompt sent successfully!';
document.getElementById('customStatus').style.color = 'green';
} catch (error) {
console.error('Failed to send custom prompt:', error);
document.getElementById('customStatus').textContent = '❌ Failed to send custom prompt';
document.getElementById('customStatus').style.color = 'red';
}
}
function toggleCustomPrompt() {
const customPromptSection = document.getElementById('custom-prompt-section');
if (customPromptSection) {
customPromptSection.style.display = customPromptSection.style.display === 'none' ? 'block' : 'none';
}
}
// ===== Add Reaction =====
async function addReactionToMessage() {
const messageId = document.getElementById('reactionMessageId').value.trim();
const channelId = document.getElementById('reactionChannelId').value.trim();
const emoji = document.getElementById('reactionEmoji').value.trim();
const statusElement = document.getElementById('reactionStatus');
if (!messageId) {
showNotification('Please enter a message ID', 'error');
statusElement.textContent = '❌ Message ID is required';
statusElement.style.color = 'red';
return;
}
if (!channelId) {
showNotification('Please enter a channel ID', 'error');
statusElement.textContent = '❌ Channel ID is required';
statusElement.style.color = 'red';
return;
}
if (!emoji) {
showNotification('Please enter an emoji', 'error');
statusElement.textContent = '❌ Emoji is required';
statusElement.style.color = 'red';
return;
}
try {
statusElement.textContent = '⏳ Adding reaction...';
statusElement.style.color = '#61dafb';
const formData = new FormData();
formData.append('message_id', messageId);
formData.append('channel_id', channelId);
formData.append('emoji', emoji);
const response = await fetch('/messages/react', {
method: 'POST',
body: formData
});
const result = await response.json();
if (response.ok && result.status === 'ok') {
showNotification(`Reaction ${emoji} added successfully`);
statusElement.textContent = `✅ Reaction ${emoji} added successfully!`;
statusElement.style.color = 'green';
document.getElementById('reactionMessageId').value = '';
document.getElementById('reactionChannelId').value = '';
document.getElementById('reactionEmoji').value = '';
} else {
throw new Error(result.message || 'Failed to add reaction');
}
} catch (error) {
console.error('Failed to add reaction:', error);
showNotification(error.message || 'Failed to add reaction', 'error');
statusElement.textContent = `❌ ${error.message || 'Failed to add reaction'}`;
statusElement.style.color = 'red';
}
}
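For reference, the hostname check inside triggerShareTweet() can be factored into a pure function and exercised on its own. A minimal sketch (the `isValidTweetUrl` name and the standalone shape are illustrative; `validDomains` is copied from the code above):

```javascript
// Standalone version of the tweet-URL validation used by triggerShareTweet().
// Accepts the exact domains and any subdomain (e.g. mobile.twitter.com),
// and treats unparseable input as invalid.
function isValidTweetUrl(tweetUrl) {
  const validDomains = ['x.com', 'twitter.com', 'fxtwitter.com'];
  try {
    const hostname = new URL(tweetUrl).hostname.toLowerCase();
    return validDomains.some(d => hostname === d || hostname.endsWith('.' + d));
  } catch (e) {
    return false; // not a parseable URL
  }
}
```

Note the `endsWith('.' + d)` guard: a plain `includes` check would wrongly accept hosts like `notx.com`.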

bot/static/js/chat.js (new file, 498 lines)

@@ -0,0 +1,498 @@
// ============================================================================
// Miku Control Panel — Chat Interface + Voice Call Module
// ============================================================================
// Toggle image upload section based on model type
function toggleChatImageUpload() {
const modelType = document.querySelector('input[name="chat-model-type"]:checked').value;
const imageUploadSection = document.getElementById('chat-image-upload-section');
if (modelType === 'vision') {
imageUploadSection.style.display = 'block';
} else {
imageUploadSection.style.display = 'none';
}
}
// Load voice debug mode setting from server
async function loadVoiceDebugMode() {
try {
const data = await apiCall('/voice/debug-mode');
const checkbox = document.getElementById('voice-debug-mode');
if (checkbox && data.debug_mode !== undefined) {
checkbox.checked = data.debug_mode;
}
} catch (error) {
console.error('Failed to load voice debug mode:', error);
}
}
// Handle Enter key in chat input
function handleChatKeyPress(event) {
if (event.ctrlKey && event.key === 'Enter') {
event.preventDefault();
sendChatMessage();
}
}
// Clear chat history
function clearChatHistory() {
if (confirm('Are you sure you want to clear all chat messages?')) {
const chatMessages = document.getElementById('chat-messages');
chatMessages.innerHTML = `
<div style="text-align: center; color: #888; padding: 2rem;">
💬 Start chatting with the LLM! Your conversation will appear here.
</div>
`;
// Clear conversation history array
chatConversationHistory = [];
showNotification('Chat history cleared');
}
}
// Add a message to the chat display
function addChatMessage(sender, content, isError = false) {
const chatMessages = document.getElementById('chat-messages');
// Remove welcome message if it exists
const welcomeMsg = chatMessages.querySelector('div[style*="text-align: center"]');
if (welcomeMsg) {
welcomeMsg.remove();
}
const messageDiv = document.createElement('div');
const messageClass = isError ? 'error-message' : (sender === 'You' ? 'user-message' : 'assistant-message');
messageDiv.className = `chat-message ${messageClass}`;
const timestamp = new Date().toLocaleTimeString();
messageDiv.innerHTML = `
<div class="chat-message-header">
<span class="chat-message-sender">${escapeHtml(sender)}</span>
<span class="chat-message-time">${timestamp}</span>
</div>
<div class="chat-message-content"></div>
`;
// Set content via textContent to prevent XSS
messageDiv.querySelector('.chat-message-content').textContent = content;
chatMessages.appendChild(messageDiv);
// Scroll to bottom
chatMessages.scrollTop = chatMessages.scrollHeight;
return messageDiv;
}
// Add typing indicator
function showTypingIndicator() {
const chatMessages = document.getElementById('chat-messages');
const typingDiv = document.createElement('div');
typingDiv.id = 'chat-typing-indicator';
typingDiv.className = 'chat-message assistant-message';
typingDiv.innerHTML = `
<div class="chat-message-header">
<span class="chat-message-sender">Miku</span>
<span class="chat-message-time">typing...</span>
</div>
<div class="chat-typing-indicator">
<span></span>
<span></span>
<span></span>
</div>
`;
chatMessages.appendChild(typingDiv);
chatMessages.scrollTop = chatMessages.scrollHeight;
}
// Remove typing indicator
function hideTypingIndicator() {
const typingIndicator = document.getElementById('chat-typing-indicator');
if (typingIndicator) {
typingIndicator.remove();
}
}
// Send chat message with streaming support
async function sendChatMessage() {
const input = document.getElementById('chat-input');
const message = input.value.trim();
if (!message) {
showNotification('Please enter a message', 'error');
return;
}
// Get configuration
const modelType = document.querySelector('input[name="chat-model-type"]:checked').value;
const useSystemPrompt = document.querySelector('input[name="chat-system-prompt"]:checked').value === 'true';
const selectedMood = document.getElementById('chat-mood-select').value;
// Get image data if vision model
let imageData = null;
if (modelType === 'vision') {
const imageFile = document.getElementById('chat-image-file').files[0];
if (imageFile) {
try {
imageData = await readFileAsBase64(imageFile);
// Remove data URL prefix if present
if (imageData.includes(',')) {
imageData = imageData.split(',')[1];
}
} catch (error) {
showNotification('Failed to read image file', 'error');
return;
}
}
}
// Disable send button
const sendBtn = document.getElementById('chat-send-btn');
const originalBtnText = sendBtn.innerHTML;
sendBtn.disabled = true;
sendBtn.innerHTML = '⏳ Sending...';
// Add user message to display
addChatMessage('You', message);
// Clear input
input.value = '';
// Show typing indicator
showTypingIndicator();
try {
// Build user message for history
let userMessageContent;
if (modelType === 'vision' && imageData) {
// Vision model with image - store as multimodal content
userMessageContent = [
{
"type": "text",
"text": message
},
{
"type": "image_url",
"image_url": {
"url": `data:image/jpeg;base64,${imageData}`
}
}
];
} else {
// Text-only message
userMessageContent = message;
}
// Prepare request payload with conversation history
const payload = {
message: message,
model_type: modelType,
use_system_prompt: useSystemPrompt,
image_data: imageData,
conversation_history: chatConversationHistory,
mood: selectedMood
};
// Make streaming request
const response = await fetch('/chat/stream', {
method: 'POST',
headers: {
'Content-Type': 'application/json'
},
body: JSON.stringify(payload)
});
if (!response.ok) {
throw new Error(`HTTP error! status: ${response.status}`);
}
// Hide typing indicator
hideTypingIndicator();
// Create message element for streaming response
const assistantName = useSystemPrompt ? 'Miku' : 'LLM';
const responseDiv = addChatMessage(assistantName, '');
const contentDiv = responseDiv.querySelector('.chat-message-content');
// Read stream
const reader = response.body.getReader();
const decoder = new TextDecoder();
let buffer = '';
let fullResponse = '';
let streamEnded = false; // set when the server signals completion or an error
while (!streamEnded) {
const { done, value } = await reader.read();
if (done) break;
buffer += decoder.decode(value, { stream: true });
// Process complete SSE messages
const lines = buffer.split('\n');
buffer = lines.pop() || ''; // Keep incomplete line in buffer
for (const line of lines) {
if (line.startsWith('data: ')) {
const dataStr = line.slice(6);
try {
const data = JSON.parse(dataStr);
if (data.error) {
contentDiv.textContent = `❌ Error: ${data.error}`;
responseDiv.classList.add('error-message');
streamEnded = true; // a bare break here would only exit the inner for loop
break;
}
if (data.content) {
fullResponse += data.content;
contentDiv.textContent = fullResponse;
// Auto-scroll
const chatMessages = document.getElementById('chat-messages');
chatMessages.scrollTop = chatMessages.scrollHeight;
}
if (data.done) {
streamEnded = true;
break;
}
} catch (e) {
console.error('Failed to parse SSE data:', e);
}
}
}
}
// If no response was received, show error
if (!fullResponse) {
contentDiv.textContent = '❌ No response received from LLM';
responseDiv.classList.add('error-message');
} else {
// Add user message to conversation history
chatConversationHistory.push({
role: "user",
content: userMessageContent
});
// Add assistant response to conversation history
chatConversationHistory.push({
role: "assistant",
content: fullResponse
});
console.log('💬 Conversation history updated:', chatConversationHistory.length, 'messages');
}
} catch (error) {
console.error('Chat error:', error);
hideTypingIndicator();
addChatMessage('Error', `Failed to send message: ${error.message}`, true);
showNotification('Failed to send message', 'error');
} finally {
// Re-enable send button
sendBtn.disabled = false;
sendBtn.innerHTML = originalBtnText;
}
}
// Helper function to read file as base64
function readFileAsBase64(file) {
return new Promise((resolve, reject) => {
const reader = new FileReader();
reader.onload = () => resolve(reader.result);
reader.onerror = reject;
reader.readAsDataURL(file);
});
}
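// Note: readAsDataURL resolves to a full data URL (e.g. "data:image/png;base64,iVBOR...").
// If a backend expects bare base64, the caller must strip the "data:<mime>;base64," prefix,
// e.g. dataUrl.slice(dataUrl.indexOf(',') + 1)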
// ============================================================================
// Voice Call Management Functions
// ============================================================================
async function initiateVoiceCall() {
const userId = document.getElementById('voice-user-id').value.trim();
const channelId = document.getElementById('voice-channel-id').value.trim();
const debugMode = document.getElementById('voice-debug-mode').checked;
// Validation
if (!userId) {
showNotification('Please enter a user ID', 'error');
return;
}
if (!channelId) {
showNotification('Please enter a voice channel ID', 'error');
return;
}
// Check that both IDs are digit-only Discord snowflakes
// (isNaN would also accept values like "1e5" or "0x1F")
if (!/^\d+$/.test(userId) || !/^\d+$/.test(channelId)) {
showNotification('User ID and Channel ID must be numeric', 'error');
return;
}
// Set debug mode
try {
const debugFormData = new FormData();
debugFormData.append('enabled', debugMode);
await fetch('/voice/debug-mode', {
method: 'POST',
body: debugFormData
});
} catch (error) {
console.error('Failed to set debug mode:', error);
}
// Disable button and show status
const callBtn = document.getElementById('voice-call-btn');
const cancelBtn = document.getElementById('voice-call-cancel-btn');
const statusDiv = document.getElementById('voice-call-status');
const statusText = document.getElementById('voice-call-status-text');
callBtn.disabled = true;
statusDiv.style.display = 'block';
cancelBtn.style.display = 'inline-block';
voiceCallActive = true;
try {
statusText.innerHTML = '⏳ Starting STT and TTS containers...';
const formData = new FormData();
formData.append('user_id', userId);
formData.append('voice_channel_id', channelId);
const response = await fetch('/voice/call', {
method: 'POST',
body: formData
});
const data = await response.json();
// Check for HTTP error status (422 validation error, etc.)
if (!response.ok) {
let errorMsg = data.error || data.detail || 'Unknown error';
// Handle FastAPI validation errors
if (data.detail && Array.isArray(data.detail)) {
errorMsg = data.detail.map(e => `${e.loc.join('.')}: ${e.msg}`).join(', ');
}
statusText.innerHTML = `❌ Error: ${errorMsg}`;
showNotification(`Voice call failed: ${errorMsg}`, 'error');
callBtn.disabled = false;
cancelBtn.style.display = 'none';
voiceCallActive = false;
return;
}
if (!data.success) {
statusText.innerHTML = `❌ Error: ${data.error}`;
showNotification(`Voice call failed: ${data.error}`, 'error');
callBtn.disabled = false;
cancelBtn.style.display = 'none';
voiceCallActive = false;
return;
}
// Success!
statusText.innerHTML = `✅ Voice call initiated!<br>User ID: ${data.user_id}<br>Channel: ${data.channel_id}`;
// Show invite link
const inviteDiv = document.getElementById('voice-call-invite-link');
const inviteUrl = document.getElementById('voice-call-invite-url');
inviteUrl.href = data.invite_url;
inviteUrl.textContent = data.invite_url;
inviteDiv.style.display = 'block';
// Add to call history
addVoiceCallToHistory(userId, channelId, data.invite_url);
showNotification('Voice call initiated successfully!', 'success');
// Auto-reset after 5 minutes (call should be done by then or timed out)
setTimeout(() => {
if (voiceCallActive) {
resetVoiceCall();
}
}, 300000); // 5 minutes
} catch (error) {
console.error('Voice call error:', error);
statusText.innerHTML = `❌ Error: ${error.message}`;
showNotification(`Voice call error: ${error.message}`, 'error');
callBtn.disabled = false;
cancelBtn.style.display = 'none';
voiceCallActive = false;
}
}
function cancelVoiceCall() {
resetVoiceCall();
showNotification('Voice call cancelled', 'info');
}
function resetVoiceCall() {
const callBtn = document.getElementById('voice-call-btn');
const cancelBtn = document.getElementById('voice-call-cancel-btn');
const statusDiv = document.getElementById('voice-call-status');
callBtn.disabled = false;
cancelBtn.style.display = 'none';
statusDiv.style.display = 'none';
voiceCallActive = false;
// Clear inputs
document.getElementById('voice-user-id').value = '';
document.getElementById('voice-channel-id').value = '';
}
function addVoiceCallToHistory(userId, channelId, inviteUrl) {
const now = new Date();
const timestamp = now.toLocaleTimeString();
const callEntry = {
userId: userId,
channelId: channelId,
inviteUrl: inviteUrl,
timestamp: timestamp
};
voiceCallHistory.unshift(callEntry); // Add to front
// Keep only last 10 calls
if (voiceCallHistory.length > 10) {
voiceCallHistory.pop();
}
updateVoiceCallHistoryDisplay();
}
function updateVoiceCallHistoryDisplay() {
const historyDiv = document.getElementById('voice-call-history');
if (voiceCallHistory.length === 0) {
historyDiv.innerHTML = '<div style="text-align: center; color: #888;">No calls yet. Start one above!</div>';
return;
}
let html = '';
voiceCallHistory.forEach((call, index) => {
html += `
<div style="background: #242424; padding: 0.75rem; margin-bottom: 0.5rem; border-radius: 4px; border-left: 3px solid #61dafb;">
<div style="display: flex; justify-content: space-between; align-items: center;">
<div>
<strong>${call.timestamp}</strong>
<div style="font-size: 0.85rem; color: #aaa; margin-top: 0.3rem;">
User: <code>${call.userId}</code> | Channel: <code>${call.channelId}</code>
</div>
</div>
<a href="${call.inviteUrl}" target="_blank" style="color: #61dafb; text-decoration: none; padding: 0.3rem 0.7rem; background: #333; border-radius: 4px; font-size: 0.85rem;">
View Link →
</a>
</div>
</div>
`;
});
historyDiv.innerHTML = html;
}

bot/static/js/core.js Normal file

@@ -0,0 +1,419 @@
// ============================================================================
// Miku Control Panel — Core Module
// Global variables, utility functions, tab switching, initialization, polling
// ============================================================================
// Global variables
let currentMood = 'neutral';
let voiceCallActive = false;
let voiceCallHistory = [];
let servers = [];
let evilMode = false;
let bipolarMode = false;
let selectedGPU = 'nvidia';
let chatConversationHistory = [];
let pfpCropper = null;
let albumEntries = [];
let albumSelectedId = null;
let albumChecked = new Set();
let albumCropper = null;
let albumOpen = false;
let activitiesData = null;
let activitiesOpen = false;
let activitiesSections = { normal: false, evil: false };
let activitiesEditing = {};
let activitiesEditCache = {};
let currentEditMemory = null;
let logsAutoScroll = true;
let notificationTimer = null;
let statusInterval = null;
let logsInterval = null;
let argsInterval = null;
let promptInterval = null;
// Mood emoji mapping
const MOOD_EMOJIS = {
"asleep": "💤",
"neutral": "",
"bubbly": "🫧",
"sleepy": "🌙",
"curious": "👀",
"shy": "👉👈",
"serious": "👔",
"excited": "✨",
"melancholy": "🍷",
"flirty": "🫦",
"romantic": "💌",
"irritated": "😒",
"angry": "💢",
"silly": "🪿"
};
// Evil mood emoji mapping
const EVIL_MOOD_EMOJIS = {
"aggressive": "👿",
"cunning": "🐍",
"sarcastic": "😈",
"evil_neutral": "",
"bored": "🥱",
"manic": "🤪",
"jealous": "💚",
"melancholic": "🌑",
"playful_cruel": "🎭",
"contemptuous": "👑"
};
// ============================================================================
// Utility functions
// ============================================================================
function showNotification(message, type = 'info') {
const notification = document.getElementById('notification');
notification.textContent = message;
notification.style.display = 'block';
notification.style.opacity = '0.95';
if (type === 'error') {
notification.style.backgroundColor = '#d32f2f';
} else if (type === 'success') {
notification.style.backgroundColor = '#2e7d32';
} else {
notification.style.backgroundColor = '#222';
}
if (notificationTimer) clearTimeout(notificationTimer);
notificationTimer = setTimeout(() => {
notification.style.opacity = '0';
setTimeout(() => {
notification.style.display = 'none';
notificationTimer = null;
}, 300);
}, 3000);
}
async function apiCall(endpoint, method = 'GET', data = null) {
try {
const options = {
method: method,
headers: {
'Content-Type': 'application/json',
}
};
if (data) {
options.body = JSON.stringify(data);
}
const response = await fetch(endpoint, options);
const result = await response.json();
if (response.ok) {
return result;
} else {
throw new Error(result.message || 'API call failed');
}
} catch (error) {
console.error('API call error:', error);
showNotification(error.message, 'error');
throw error;
}
}
function escapeHtml(text) {
if (!text) return '';
const div = document.createElement('div');
div.textContent = text;
return div.innerHTML;
}
function escapeJsonForAttribute(obj) {
return JSON.stringify(obj)
.replace(/&/g, '&amp;')
.replace(/'/g, '&apos;')
.replace(/"/g, '&quot;')
.replace(/</g, '&lt;')
.replace(/>/g, '&gt;');
}
// ============================================================================
// Tab switching
// ============================================================================
function switchTab(tabId) {
document.querySelectorAll('.tab-content').forEach(tab => {
tab.classList.remove('active');
});
document.querySelectorAll('.tab-button').forEach(button => {
button.classList.remove('active');
});
document.getElementById(tabId).classList.add('active');
const activeBtn = document.querySelector(`.tab-button[data-tab="${tabId}"]`);
if (activeBtn) activeBtn.classList.add('active');
localStorage.setItem('miku-active-tab', tabId);
console.log(`🔄 Switched to ${tabId}`);
if (tabId === 'tab1') {
console.log('🔄 Refreshing figurine subscribers for Server Management tab');
refreshFigurineSubscribers();
}
if (tabId === 'tab3') {
loadStatus();
loadLastPrompt();
}
if (tabId === 'tab6') {
showTabLoading('tab6');
loadAutonomousStats().finally(() => hideTabLoading('tab6'));
}
if (tabId === 'tab9') {
console.log('🧠 Refreshing memory stats for Memories tab');
showTabLoading('tab9');
refreshMemoryStats().finally(() => hideTabLoading('tab9'));
}
if (tabId === 'tab10') {
console.log('📱 Loading DM users for DM Management tab');
showTabLoading('tab10');
loadDMUsers().finally(() => hideTabLoading('tab10'));
}
if (tabId === 'tab11') {
console.log('🖼️ Loading Profile Picture tab');
loadPfpTab();
}
}
function showTabLoading(tabId) {
const tab = document.getElementById(tabId);
if (!tab) return;
if (tab.querySelector('.tab-loading-overlay')) return;
const sections = tab.querySelectorAll('.section');
const hasContent = Array.from(sections).some(s => s.querySelector('[id]')?.innerHTML?.trim());
if (hasContent) return;
const overlay = document.createElement('div');
overlay.className = 'tab-loading-overlay';
overlay.innerHTML = '<div class="spinner"></div> Loading...';
tab.prepend(overlay);
}
function hideTabLoading(tabId) {
const tab = document.getElementById(tabId);
if (!tab) return;
const overlay = tab.querySelector('.tab-loading-overlay');
if (overlay) overlay.remove();
}
// ============================================================================
// Polling
// ============================================================================
function startPolling() {
if (!statusInterval) statusInterval = setInterval(loadStatus, 10000);
if (!logsInterval) logsInterval = setInterval(loadLogs, 5000);
if (!argsInterval) argsInterval = setInterval(loadActiveArguments, 5000);
if (!promptInterval) promptInterval = setInterval(loadPromptHistory, 10000);
}
function stopPolling() {
clearInterval(statusInterval); statusInterval = null;
clearInterval(logsInterval); logsInterval = null;
clearInterval(argsInterval); argsInterval = null;
clearInterval(promptInterval); promptInterval = null;
}
// ============================================================================
// Initialization helpers
// ============================================================================
function initTabState() {
const savedTab = localStorage.getItem('miku-active-tab');
if (savedTab && document.getElementById(savedTab)) {
switchTab(savedTab);
}
}
function initTabWheelScroll() {
const tabButtonsEl = document.querySelector('.tab-buttons');
if (tabButtonsEl) {
tabButtonsEl.addEventListener('wheel', function(e) {
if (e.deltaY !== 0) {
e.preventDefault();
tabButtonsEl.scrollLeft += e.deltaY;
}
}, { passive: false });
}
}
function initVisibilityPolling() {
document.addEventListener('visibilitychange', () => {
if (document.hidden) {
stopPolling();
console.log('⏸ Tab hidden — polling paused');
} else {
loadStatus(); loadLogs(); loadActiveArguments(); loadPromptHistory();
startPolling();
console.log('▶️ Tab visible — polling resumed');
}
});
}
function initChatImagePreview() {
const imageInput = document.getElementById('chat-image-file');
if (imageInput) {
imageInput.addEventListener('change', function(e) {
const file = e.target.files[0];
if (file) {
const reader = new FileReader();
reader.onload = function(event) {
const preview = document.getElementById('chat-image-preview');
const previewImg = document.getElementById('chat-image-preview-img');
previewImg.src = event.target.result;
preview.style.display = 'block';
};
reader.readAsDataURL(file);
}
});
}
}
function initModalAccessibility() {
const editModal = document.getElementById('edit-memory-modal');
const createModal = document.getElementById('create-memory-modal');
if (editModal) {
editModal.setAttribute('role', 'dialog');
editModal.setAttribute('aria-modal', 'true');
editModal.setAttribute('aria-label', 'Edit Memory');
editModal.addEventListener('click', function(e) {
if (e.target === this) closeEditMemoryModal();
});
}
if (createModal) {
createModal.setAttribute('role', 'dialog');
createModal.setAttribute('aria-modal', 'true');
createModal.setAttribute('aria-label', 'Create Memory');
createModal.addEventListener('click', function(e) {
if (e.target === this) closeCreateMemoryModal();
});
}
}
function initPromptSourceToggle() {
const saved = localStorage.getItem('miku-prompt-source') || 'all';
document.querySelectorAll('.prompt-source-btn').forEach(btn => btn.classList.remove('active'));
const btnId = saved === 'all' ? 'prompt-src-all' : `prompt-src-${saved}`;
const btn = document.getElementById(btnId);
if (btn) btn.classList.add('active');
}
function initLogsScrollDetection() {
const logsPanel = document.getElementById('logs-panel');
if (!logsPanel) return;
logsPanel.addEventListener('scroll', function() {
const atBottom = logsPanel.scrollHeight - logsPanel.scrollTop - logsPanel.clientHeight < 50;
logsAutoScroll = atBottom;
const banner = document.getElementById('logs-paused-banner');
if (banner) banner.style.display = atBottom ? 'none' : 'block';
});
}
function scrollLogsToBottom() {
const logsPanel = document.getElementById('logs-panel');
if (logsPanel) {
logsPanel.scrollTop = logsPanel.scrollHeight;
logsAutoScroll = true;
const banner = document.getElementById('logs-paused-banner');
if (banner) banner.style.display = 'none';
}
}
// ============================================================================
// Log functions
// ============================================================================
function classifyLogLine(line) {
const upper = line.toUpperCase();
if (upper.includes(' ERROR ') || upper.includes(' CRITICAL ') || upper.startsWith('ERROR') || upper.startsWith('CRITICAL') || upper.includes('TRACEBACK')) return 'log-error';
if (upper.includes(' WARNING ') || upper.startsWith('WARNING')) return 'log-warning';
if (upper.includes(' DEBUG ') || upper.startsWith('DEBUG')) return 'log-debug';
return 'log-info';
}
async function loadLogs() {
try {
const result = await apiCall('/logs');
const logsContent = document.getElementById('logs-content');
const lines = (result || '').split('\n');
logsContent.innerHTML = lines.map(line => {
if (!line.trim()) return '';
const cls = classifyLogLine(line);
return `<div class="log-line ${cls}">${escapeHtml(line)}</div>`;
}).join('');
if (logsAutoScroll) {
scrollLogsToBottom();
}
} catch (error) {
console.error('Failed to load logs:', error);
}
}
// ============================================================================
// Prompt source toggle (shared between core and status modules)
// ============================================================================
function switchPromptSource(source) {
localStorage.setItem('miku-prompt-source', source);
document.querySelectorAll('.prompt-source-btn').forEach(btn => btn.classList.remove('active'));
const btnId = source === 'all' ? 'prompt-src-all' : `prompt-src-${source}`;
const btn = document.getElementById(btnId);
if (btn) btn.classList.add('active');
loadPromptHistory();
}
// ============================================================================
// Profile picture metadata (stub — actual loading in profile.js)
// ============================================================================
async function loadProfilePictureMetadata() {
// Delegated to PFP tab loader — only runs if tab11 is active
}
// ============================================================================
// DOMContentLoaded — main initialization
// ============================================================================
document.addEventListener('DOMContentLoaded', function() {
initTabState();
initTabWheelScroll();
initLogsScrollDetection();
initChatImagePreview();
initModalAccessibility();
initPromptSourceToggle();
loadStatus();
loadServers();
populateMoodDropdowns();
loadLastPrompt();
loadLogs();
checkEvilModeStatus();
checkBipolarModeStatus();
checkGPUStatus();
refreshLanguageStatus();
refreshFigurineSubscribers();
loadProfilePictureMetadata();
loadVoiceDebugMode();
initVisibilityPolling();
startPolling();
// Modal keyboard close handler
document.addEventListener('keydown', function(e) {
if (e.key === 'Escape') {
const editModal = document.getElementById('edit-memory-modal');
const createModal = document.getElementById('create-memory-modal');
if (editModal && editModal.style.display !== 'none') closeEditMemoryModal();
if (createModal && createModal.style.display !== 'none') closeCreateMemoryModal();
}
});
});

bot/static/js/dm.js Normal file

@@ -0,0 +1,548 @@
// ============================================================================
// Miku Control Panel — DM Management Module
// ============================================================================
async function loadDMUsers() {
try {
const result = await apiCall('/dms/users');
displayDMUsers(result.users);
} catch (error) {
console.error('Failed to load DM users:', error);
}
}
function displayDMUsers(users) {
const container = document.getElementById('dm-users-list');
if (!users || users.length === 0) {
container.innerHTML = '<p>No DM conversations found.</p>';
return;
}
let html = '<div class="dm-users-grid">';
users.forEach(user => {
console.log(`👤 Processing user: ${user.username} (ID: ${user.user_id})`);
const lastMessage = user.last_message ?
`Last: ${escapeHtml(user.last_message.content)}` :
'No messages yet';
const lastTime = user.last_message ?
new Date(user.last_message.timestamp).toLocaleString() :
'Never';
html += `
<div class="dm-user-card">
<h4>👤 ${escapeHtml(user.username)}</h4>
<p><strong>ID:</strong> ${user.user_id}</p>
<p><strong>Total Messages:</strong> ${user.total_messages}</p>
<p><strong>User Messages:</strong> ${user.user_messages}</p>
<p><strong>Bot Messages:</strong> ${user.bot_messages}</p>
<p><strong>Last Activity:</strong> ${lastTime}</p>
<p><strong>Last Message:</strong> ${lastMessage}</p>
<div class="dm-user-actions">
<button class="view-chat-btn" data-user-id="${user.user_id}">💬 View Chat</button>
<button class="analyze-user-btn" data-user-id="${user.user_id}" data-username="${user.username}" style="background: #9c27b0;">📊 Analyze</button>
<button class="export-dms-btn" data-user-id="${user.user_id}">📤 Export</button>
<button class="block-user-btn" data-user-id="${user.user_id}" data-username="${user.username}" style="background: #ff9800;">🚫 Block</button>
<button class="delete-all-dms-btn" data-user-id="${user.user_id}" data-username="${user.username}" style="background: #f44336;">🗑️ Delete All</button>
<button class="delete-user-completely-btn" data-user-id="${user.user_id}" data-username="${user.username}" style="background: #d32f2f;">💀 Delete User</button>
</div>
</div>
`;
});
html += '</div>';
container.innerHTML = html;
// Add event listeners after HTML is inserted
addDMUserEventListeners();
}
function addDMUserEventListeners() {
// Add event listeners for view chat buttons
document.querySelectorAll('.view-chat-btn').forEach(button => {
button.addEventListener('click', function() {
const userId = this.getAttribute('data-user-id');
console.log(`🎯 View chat clicked for user ID: ${userId} (type: ${typeof userId})`);
viewUserConversations(userId);
});
});
// Add event listeners for export buttons
document.querySelectorAll('.export-dms-btn').forEach(button => {
button.addEventListener('click', function() {
const userId = this.getAttribute('data-user-id');
console.log(`🎯 Export clicked for user ID: ${userId} (type: ${typeof userId})`);
exportUserDMs(userId);
});
});
// Add event listeners for analyze buttons
document.querySelectorAll('.analyze-user-btn').forEach(button => {
button.addEventListener('click', function() {
const userId = this.getAttribute('data-user-id');
const username = this.getAttribute('data-username');
console.log(`🎯 Analyze clicked for user ID: ${userId} (type: ${typeof userId})`);
analyzeUserInteraction(userId, username);
});
});
// Add event listeners for block buttons
document.querySelectorAll('.block-user-btn').forEach(button => {
button.addEventListener('click', function() {
const userId = this.getAttribute('data-user-id');
const username = this.getAttribute('data-username');
console.log(`🎯 Block clicked for user ID: ${userId} (type: ${typeof userId})`);
blockUser(userId, username);
});
});
// Add event listeners for delete all DMs buttons
document.querySelectorAll('.delete-all-dms-btn').forEach(button => {
button.addEventListener('click', function() {
const userId = this.getAttribute('data-user-id');
const username = this.getAttribute('data-username');
console.log(`🎯 Delete all DMs clicked for user ID: ${userId} (type: ${typeof userId})`);
deleteAllUserConversations(userId, username);
});
});
// Add event listeners for delete user completely buttons
document.querySelectorAll('.delete-user-completely-btn').forEach(button => {
button.addEventListener('click', function() {
const userId = this.getAttribute('data-user-id');
const username = this.getAttribute('data-username');
console.log(`🎯 Delete user completely clicked for user ID: ${userId} (type: ${typeof userId})`);
deleteUserCompletely(userId, username);
});
});
}
async function viewUserConversations(userId) {
try {
// Ensure userId is always treated as a string
const userIdStr = String(userId);
console.log(`🔍 Loading conversations for user ${userIdStr} (original userId type: ${typeof userId})`);
const result = await apiCall(`/dms/users/${userIdStr}/conversations?limit=100`);
console.log(`📡 /dms/users/${userIdStr}/conversations?limit=100 →`, result);
if (result.conversations && result.conversations.length > 0) {
console.log(`✅ Found ${result.conversations.length} conversations`);
displayUserConversations(userIdStr, result.conversations);
} else {
console.log('⚠️ No conversations found in response');
showNotification('No conversations found for this user', 'info');
// Go back to user list
loadDMUsers();
}
} catch (error) {
console.error('Failed to load user conversations:', error);
}
}
function displayUserConversations(userId, conversations) {
console.log(`🎨 Displaying conversations for user ${userId}:`, conversations);
// Create a modal or expand the user card to show conversations
const container = document.getElementById('dm-users-list');
let html = `
<div class="conversation-view">
<button onclick="loadDMUsers()" style="margin-bottom: 1rem;">← Back to DM Users</button>
<h4>💬 Conversations with User ${userId}</h4>
<div class="conversations-list">
`;
if (!conversations || conversations.length === 0) {
html += '<p>No conversations found for this user.</p>';
} else {
conversations.forEach((msg, index) => {
console.log(`📝 Processing message ${index}:`, msg);
const timestamp = new Date(msg.timestamp).toLocaleString();
const sender = msg.is_bot_message ? '🤖 Miku' : '👤 User';
const content = msg.content || '[No text content]';
const messageId = msg.message_id || msg.timestamp; // Use message_id or timestamp as identifier
// Escape for the single-quoted JS string inside a double-quoted HTML attribute:
// backslashes and single quotes are JS-escaped; double quotes become &quot; so they
// cannot terminate the onclick attribute; newlines would break the inline JS string
const escapedContent = content.replace(/\\/g, '\\\\').replace(/'/g, "\\'").replace(/"/g, '&quot;').replace(/\r?\n/g, ' ');
// Debug: Log message details
console.log(`📝 Message ${index}: id=${messageId}, is_bot=${msg.is_bot_message}, content="${content.substring(0, 30)}..."`);
// Only show delete button for bot messages (Miku can only delete her own messages)
const deleteButton = msg.is_bot_message ?
`<button class="delete-message-btn" onclick="deleteConversation('${userId}', '${messageId}', '${escapedContent}')"
style="background: #f44336; color: white; border: none; padding: 2px 6px; font-size: 12px; border-radius: 3px; margin-left: 10px;"
title="Delete this Miku message (ID: ${messageId})">
🗑️ Delete
</button>` : '';
html += `
<div class="conversation-message ${msg.is_bot_message ? 'bot-message' : 'user-message'}">
<div class="message-header">
<span class="sender">${sender}</span>
<span class="timestamp">${timestamp}</span>
${deleteButton}
</div>
<div class="message-content">${escapeHtml(content)}</div>
${msg.attachments && msg.attachments.length > 0 ? `
<div class="message-attachments">
<strong>📎 Attachments:</strong>
${msg.attachments.map(att => `
<div class="attachment">
- ${att.filename} (${att.size} bytes)
<a href="${att.url}" target="_blank">🔗 View</a>
</div>
`).join('')}
</div>
` : ''}
${msg.reactions && msg.reactions.length > 0 ? `
<div class="message-reactions">
${msg.reactions.map(reaction => {
const reactionTime = new Date(reaction.added_at).toLocaleString();
const reactorType = reaction.is_bot ? 'bot-reaction' : 'user-reaction';
const reactorLabel = reaction.is_bot ? '🤖 Miku' : `👤 ${reaction.reactor_name}`;
return `
<div class="reaction-item" title="${reactorLabel} reacted at ${reactionTime}">
<span class="reaction-emoji">${reaction.emoji}</span>
<span class="reaction-by ${reactorType}">${reactorLabel}</span>
</div>
`;
}).join('')}
</div>
` : ''}
</div>
`;
});
}
html += `
</div>
</div>
`;
console.log('🎨 Generated HTML:', html);
container.innerHTML = html;
}
async function exportUserDMs(userId) {
try {
// Ensure userId is always treated as a string
const userIdStr = String(userId);
await apiCall(`/dms/users/${userIdStr}/export?format=txt`);
showNotification(`DM export completed for user ${userIdStr}`);
// You could trigger a download here if the file is accessible
} catch (error) {
console.error('Failed to export user DMs:', error);
}
}
async function deleteUserDMs(userId) {
// Ensure userId is always treated as a string
const userIdStr = String(userId);
if (!confirm(`Are you sure you want to delete all DM logs for user ${userIdStr}? This action cannot be undone.`)) {
return;
}
try {
await apiCall(`/dms/users/${userIdStr}`, 'DELETE');
showNotification(`Deleted DM logs for user ${userIdStr}`);
loadDMUsers(); // Refresh the list
} catch (error) {
console.error('Failed to delete user DMs:', error);
}
}
// ========== User Blocking & Advanced Deletion Functions ==========
async function blockUser(userId, username) {
const userIdStr = String(userId);
if (!confirm(`Are you sure you want to block ${username} (${userIdStr}) from sending DMs to Miku?`)) {
return;
}
try {
await apiCall(`/dms/users/${userIdStr}/block`, 'POST');
showNotification(`${username} has been blocked from sending DMs`);
loadDMUsers(); // Refresh the list
} catch (error) {
console.error('Failed to block user:', error);
}
}
async function unblockUser(userId, username) {
const userIdStr = String(userId);
try {
await apiCall(`/dms/users/${userIdStr}/unblock`, 'POST');
showNotification(`${username} has been unblocked`);
loadBlockedUsers(); // Refresh blocked users list
} catch (error) {
console.error('Failed to unblock user:', error);
}
}
async function deleteAllUserConversations(userId, username) {
const userIdStr = String(userId);
if (!confirm(`⚠️ DELETE ALL CONVERSATIONS with ${username} (${userIdStr})?\n\nThis will:\n• Delete ALL Miku messages from Discord DM\n• Clear all conversation logs\n• Keep the user record\n\nThis action CANNOT be undone!\n\nClick OK to confirm deletion.`)) {
return;
}
try {
await apiCall(`/dms/users/${userIdStr}/conversations/delete-all`, 'POST');
showNotification(`Bulk deletion queued for ${username} (deleting all Miku messages from Discord and logs)`);
setTimeout(() => {
loadDMUsers(); // Refresh after a delay to allow deletion to process
}, 2000);
} catch (error) {
console.error('Failed to delete conversations:', error);
}
}
async function deleteUserCompletely(userId, username) {
const userIdStr = String(userId);
if (!confirm(`🚨 COMPLETELY DELETE USER ${username} (${userIdStr})?\n\nThis will:\n• Delete ALL conversation history\n• Delete the entire user log file\n• Remove ALL traces of this user\n\nThis action is PERMANENT and CANNOT be undone!\n\nYou will next be asked to type the username to confirm.`)) {
return;
}
const confirmName = prompt(`Type the username "${username}" to confirm complete deletion:`);
if (confirmName !== username) {
showNotification('Deletion cancelled - username did not match', 'error');
return;
}
try {
await apiCall(`/dms/users/${userIdStr}/delete-completely`, 'POST');
showNotification(`${username} has been completely deleted from the system`);
loadDMUsers(); // Refresh the list
} catch (error) {
console.error('Failed to delete user completely:', error);
}
}
async function deleteConversation(userId, conversationId, messageContent) {
const userIdStr = String(userId);
if (!confirm(`Delete this Miku message from Discord and logs?\n\n"${messageContent.substring(0, 100)}${messageContent.length > 100 ? '...' : ''}"\n\nThis will:\n• Delete the message from Discord DM\n• Remove it from conversation logs\n\nNote: Only Miku's messages can be deleted.\nThis action cannot be undone.`)) {
return;
}
try {
await apiCall(`/dms/users/${userIdStr}/conversations/${conversationId}/delete`, 'POST');
showNotification('Miku message deletion queued (deleting from both Discord and logs)');
setTimeout(() => {
viewUserConversations(userId); // Refresh after a short delay to allow deletion to process
}, 1000);
} catch (error) {
console.error('Failed to delete conversation:', error);
}
}
async function analyzeUserInteraction(userId, username) {
const userIdStr = String(userId);
if (!confirm(`Run DM interaction analysis for ${username}?\n\nThis will:\n• Analyze their messages from the last 24 hours\n• Generate a sentiment report\n• Send report to bot owner\n\nMinimum 3 messages required for analysis.`)) {
return;
}
try {
showNotification(`Analyzing ${username}'s interactions...`, 'info');
const result = await apiCall(`/dms/users/${userIdStr}/analyze`, 'POST');
if (result.reported) {
showNotification(`✅ Analysis complete! Report sent to bot owner for ${username}`);
} else {
showNotification(`📊 Analysis complete for ${username} (not enough messages or already reported today)`);
}
} catch (error) {
console.error('Failed to analyze user:', error);
}
}
async function runDailyAnalysis() {
if (!confirm('Run the daily DM interaction analysis now?\n\nThis will:\n• Analyze all DM users from the last 24 hours\n• Report one significant interaction to the bot owner\n• Skip users already reported today\n\nNote: This runs automatically at 2 AM daily.')) {
return;
}
try {
showNotification('Starting DM interaction analysis...', 'info');
await apiCall('/dms/analysis/run', 'POST');
showNotification('✅ DM analysis completed! Check bot owner\'s DMs for any reports.');
} catch (error) {
console.error('Failed to run DM analysis:', error);
}
}
async function viewAnalysisReports() {
try {
showNotification('Loading analysis reports...', 'info');
const result = await apiCall('/dms/analysis/reports?limit=50');
displayAnalysisReports(result.reports);
} catch (error) {
console.error('Failed to load reports:', error);
}
}
function displayAnalysisReports(reports) {
const container = document.getElementById('dm-users-list');
if (!reports || reports.length === 0) {
container.innerHTML = `
<div style="text-align: center; padding: 2rem;">
<p>No analysis reports found yet.</p>
<button onclick="loadDMUsers()" style="margin-top: 1rem;">← Back to DM Users</button>
</div>
`;
return;
}
let html = `
<div style="margin-bottom: 1rem;">
<button onclick="loadDMUsers()">← Back to DM Users</button>
<span style="margin-left: 1rem; color: #aaa;">${reports.length} reports found</span>
</div>
<div style="display: grid; gap: 1rem;">
`;
reports.forEach(report => {
const sentimentColor =
report.sentiment_score >= 5 ? '#4caf50' :
report.sentiment_score <= -3 ? '#f44336' :
'#2196f3';
const sentimentEmoji =
report.sentiment_score >= 5 ? '😊' :
report.sentiment_score <= -3 ? '😢' :
'😐';
const timestamp = new Date(report.analyzed_at).toLocaleString();
html += `
<div style="background: #2a2a2a; border-left: 4px solid ${sentimentColor}; padding: 1rem; border-radius: 4px;">
<div style="display: flex; justify-content: space-between; align-items: start; margin-bottom: 0.5rem;">
<div>
<h4 style="margin: 0 0 0.25rem 0;">${sentimentEmoji} ${report.username}</h4>
<p style="margin: 0; font-size: 0.85rem; color: #aaa;">User ID: ${report.user_id}</p>
</div>
<div style="text-align: right;">
<div style="font-size: 1.2rem; font-weight: bold; color: ${sentimentColor};">
${report.sentiment_score > 0 ? '+' : ''}${report.sentiment_score}/10
</div>
<div style="font-size: 0.75rem; color: #aaa; text-transform: uppercase;">
${report.overall_sentiment}
</div>
</div>
</div>
<div style="margin: 0.75rem 0; padding: 0.75rem; background: #1e1e1e; border-radius: 4px;">
<strong>Miku's Feelings:</strong>
<p style="margin: 0.5rem 0 0 0; font-style: italic;">"${report.your_feelings}"</p>
</div>
${report.notable_moment ? `
<div style="margin: 0.75rem 0; padding: 0.75rem; background: #1e1e1e; border-radius: 4px;">
<strong>Notable Moment:</strong>
<p style="margin: 0.5rem 0 0 0; font-style: italic;">"${report.notable_moment}"</p>
</div>
` : ''}
${report.key_behaviors && report.key_behaviors.length > 0 ? `
<div style="margin: 0.75rem 0;">
<strong>Key Behaviors:</strong>
<ul style="margin: 0.5rem 0 0 0; padding-left: 1.5rem;">
${report.key_behaviors.slice(0, 5).map(b => `<li>${b}</li>`).join('')}
</ul>
</div>
` : ''}
<div style="margin-top: 0.75rem; padding-top: 0.75rem; border-top: 1px solid #444; font-size: 0.8rem; color: #aaa;">
<span>📅 ${timestamp}</span>
<span style="margin-left: 1rem;">💬 ${report.message_count} messages analyzed</span>
<span style="margin-left: 1rem;">📄 ${report.filename}</span>
</div>
</div>
`;
});
html += '</div>';
container.innerHTML = html;
}
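The sentiment thresholds used in `displayAnalysisReports` above (score ≥ +5 positive, ≤ −3 negative, anything between neutral) could be factored into one pure helper so the color/emoji pairing lives in a single place. A sketch only, not part of the committed code:

```javascript
// Sketch: same thresholds as displayAnalysisReports, factored out so the
// color and emoji for a given score cannot drift apart.
function sentimentStyle(score) {
  if (score >= 5) return { color: '#4caf50', emoji: '😊' };   // positive
  if (score <= -3) return { color: '#f44336', emoji: '😢' };  // negative
  return { color: '#2196f3', emoji: '😐' };                   // neutral band
}
```

`displayAnalysisReports` would then destructure `const { color, emoji } = sentimentStyle(report.sentiment_score);` instead of repeating the two ternary chains.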
async function loadBlockedUsers() {
try {
const result = await apiCall('/dms/blocked-users');
// Hide DM users list and show blocked users section
document.getElementById('dm-users-list').style.display = 'none';
document.getElementById('blocked-users-section').style.display = 'block';
displayBlockedUsers(result.blocked_users);
} catch (error) {
console.error('Failed to load blocked users:', error);
}
}
function hideBlockedUsers() {
// Show DM users list and hide blocked users section
document.getElementById('dm-users-list').style.display = 'block';
document.getElementById('blocked-users-section').style.display = 'none';
loadDMUsers(); // Refresh DM users
}
function displayBlockedUsers(blockedUsers) {
const container = document.getElementById('blocked-users-list');
if (!blockedUsers || blockedUsers.length === 0) {
container.innerHTML = '<p>No blocked users.</p>';
return;
}
let html = '<div class="blocked-users-grid">';
blockedUsers.forEach(user => {
html += `
<div class="blocked-user-card">
<h4>🚫 ${user.username}</h4>
<p><strong>ID:</strong> ${user.user_id}</p>
<p><strong>Blocked:</strong> ${new Date(user.blocked_at).toLocaleString()}</p>
<p><strong>Blocked by:</strong> ${user.blocked_by}</p>
<div class="blocked-user-actions">
<button onclick="unblockUser('${user.user_id}', '${user.username}')" style="background: #4caf50;">✅ Unblock</button>
</div>
</div>
`;
});
html += '</div>';
container.innerHTML = html;
}
async function exportAllDMs() {
try {
const result = await apiCall('/dms/users');
let exportCount = 0;
for (const user of (result.users || [])) {
try {
await exportUserDMs(user.user_id);
exportCount++;
} catch (e) {
console.error(`Failed to export DMs for user ${user.user_id}:`, e);
}
}
showNotification(`Exported DMs for ${exportCount} users`);
} catch (error) {
console.error('Failed to export all DMs:', error);
}
}

bot/static/js/image-gen.js Normal file

@@ -0,0 +1,127 @@
// ============================================================================
// Miku Control Panel — Image Generation Module
// ============================================================================
async function checkImageSystemStatus() {
try {
const statusDisplay = document.getElementById('image-status-display');
statusDisplay.innerHTML = '🔄 Checking system status...';
const result = await apiCall('/image/status');
const workflowStatus = result.workflow_template_exists ? '✅ Found' : '❌ Missing';
const comfyuiStatus = result.comfyui_running ? '✅ Running' : '❌ Not running';
statusDisplay.innerHTML = `
<strong>System Status:</strong>
• Workflow Template (Miku_BasicWorkflow.json): ${workflowStatus}
• ComfyUI Server: ${comfyuiStatus}
${result.comfyui_running ? `• Detected ComfyUI URL: ${result.comfyui_url}` : ''}
<strong>Overall Status:</strong> ${result.ready ? '✅ Ready for image generation' : '⚠️ Setup required'}
${!result.workflow_template_exists ? '⚠️ Place Miku_BasicWorkflow.json in bot directory\n' : ''}${!result.comfyui_running ? '⚠️ Start ComfyUI server on localhost:8188 (bot will auto-detect correct URL)\n' : ''}`;
} catch (error) {
console.error('Failed to check image system status:', error);
document.getElementById('image-status-display').innerHTML = `❌ Error: ${error.message}`;
}
}
async function testImageDetection() {
const message = document.getElementById('detection-test-message').value.trim();
const resultsDiv = document.getElementById('detection-test-results');
if (!message) {
resultsDiv.innerHTML = '❌ Please enter a test message';
resultsDiv.style.color = 'red';
return;
}
try {
resultsDiv.innerHTML = '🔍 Testing detection...';
resultsDiv.style.color = '#4CAF50';
const result = await apiCall('/image/test-detection', 'POST', { message: message });
const detectionIcon = result.is_image_request ? '✅' : '❌';
const detectionText = result.is_image_request ? 'WILL trigger image generation' : 'will NOT trigger image generation';
resultsDiv.innerHTML = `
<strong>Detection Result:</strong> ${detectionIcon} This message ${detectionText}
${result.is_image_request ? `<br><strong>Extracted Prompt:</strong> "${result.extracted_prompt}"` : ''}
<br><strong>Original Message:</strong> "${result.original_message}"`;
resultsDiv.style.color = result.is_image_request ? '#4CAF50' : '#ff9800';
} catch (error) {
console.error('Failed to test image detection:', error);
resultsDiv.innerHTML = `❌ Error: ${error.message}`;
resultsDiv.style.color = 'red';
}
}
async function generateManualImage() {
const prompt = document.getElementById('manual-image-prompt').value.trim();
const statusDiv = document.getElementById('manual-image-status');
const previewDiv = document.getElementById('manual-image-preview');
if (!prompt) {
statusDiv.innerHTML = '❌ Please enter an image prompt';
statusDiv.style.color = 'red';
return;
}
try {
previewDiv.innerHTML = '';
statusDiv.innerHTML = '🎨 Generating image... This may take a few minutes.';
statusDiv.style.color = '#4CAF50';
const result = await apiCall('/image/generate', 'POST', { prompt: prompt });
statusDiv.innerHTML = `✅ Image generated successfully!`;
statusDiv.style.color = '#4CAF50';
if (result.image_path) {
const filename = result.image_path.split('/').pop();
const imageUrl = `/image/view/${encodeURIComponent(filename)}`;
const imgContainer = document.createElement('div');
imgContainer.style.cssText = 'background: #1e1e1e; padding: 1rem; border-radius: 8px; border: 1px solid #333;';
const img = document.createElement('img');
img.src = imageUrl;
img.alt = 'Generated Image';
img.style.cssText = 'max-width: 100%; max-height: 600px; border-radius: 4px; display: block; margin: 0 auto;';
img.onload = function() {
console.log('Image loaded successfully:', imageUrl);
};
img.onerror = function() {
console.error('Failed to load image:', imageUrl);
imgContainer.innerHTML = `
<div style="color: #f44336; padding: 1rem; text-align: center;">
❌ Failed to load image<br>
<span style="font-size: 0.85rem;">Path: ${result.image_path}</span><br>
<span style="font-size: 0.85rem;">URL: ${imageUrl}</span>
</div>
`;
};
imgContainer.appendChild(img);
const pathInfo = document.createElement('div');
pathInfo.style.cssText = 'margin-top: 0.5rem; color: #aaa; font-size: 0.85rem; text-align: center;';
pathInfo.innerHTML = `<strong>File:</strong> ${filename}`;
imgContainer.appendChild(pathInfo);
previewDiv.appendChild(imgContainer);
}
document.getElementById('manual-image-prompt').value = '';
} catch (error) {
console.error('Failed to generate image:', error);
statusDiv.innerHTML = `❌ Error: ${error.message}`;
statusDiv.style.color = 'red';
}
}

bot/static/js/memories.js Normal file

@@ -0,0 +1,446 @@
// ============================================================================
// Miku Control Panel — Memory Management Module
// ============================================================================
async function refreshMemoryStats() {
try {
// Fetch Cat status
const statusData = await apiCall('/memory/status');
const indicator = document.getElementById('cat-status-indicator');
const toggleBtn = document.getElementById('cat-toggle-btn');
if (statusData.healthy) {
indicator.innerHTML = `<span style="color: #6fdc6f;">● Connected</span> — ${statusData.url}`;
} else {
indicator.innerHTML = `<span style="color: #ff6b6b;">● Disconnected</span> — ${statusData.url}`;
}
if (statusData.circuit_breaker_active) {
indicator.innerHTML += ` <span style="color: #dcb06f;">(circuit breaker active)</span>`;
}
toggleBtn.textContent = statusData.enabled ? '🐱 Cat: ON' : '😿 Cat: OFF';
toggleBtn.style.background = statusData.enabled ? '#2a7a2a' : '#7a2a2a';
toggleBtn.style.borderColor = statusData.enabled ? '#4a9a4a' : '#9a4a4a';
// Fetch memory stats
const statsData = await apiCall('/memory/stats');
if (statsData.success && statsData.collections) {
const collections = {};
statsData.collections.forEach(c => { collections[c.name] = c.vectors_count; });
document.getElementById('stat-episodic-count').textContent = collections['episodic'] ?? '—';
document.getElementById('stat-declarative-count').textContent = collections['declarative'] ?? '—';
document.getElementById('stat-procedural-count').textContent = collections['procedural'] ?? '—';
} else {
document.getElementById('stat-episodic-count').textContent = '—';
document.getElementById('stat-declarative-count').textContent = '—';
document.getElementById('stat-procedural-count').textContent = '—';
}
} catch (err) {
console.error('Error refreshing memory stats:', err);
document.getElementById('cat-status-indicator').innerHTML = '<span style="color: #ff6b6b;">● Error checking status</span>';
}
}
async function toggleCatIntegration() {
try {
const statusData = await apiCall('/memory/status');
const newState = !statusData.enabled;
const formData = new FormData();
formData.append('enabled', newState);
const res = await fetch('/memory/toggle', { method: 'POST', body: formData });
const data = await res.json();
if (data.success) {
showNotification(`Cheshire Cat ${newState ? 'enabled' : 'disabled'}`, newState ? 'success' : 'info');
refreshMemoryStats();
}
} catch (err) {
showNotification('Failed to toggle Cat integration', 'error');
}
}
async function triggerConsolidation() {
const btn = document.getElementById('consolidate-btn');
const status = document.getElementById('consolidation-status');
const resultDiv = document.getElementById('consolidation-result');
btn.disabled = true;
btn.textContent = '⏳ Running...';
status.textContent = 'Consolidation in progress (this may take a few minutes)...';
resultDiv.style.display = 'none';
try {
const data = await apiCall('/memory/consolidate', 'POST');
if (data.success) {
status.textContent = '✅ Consolidation complete!';
status.style.color = '#6fdc6f';
resultDiv.textContent = data.result || 'Consolidation finished successfully.';
resultDiv.style.display = 'block';
showNotification('Memory consolidation complete', 'success');
refreshMemoryStats();
} else {
status.textContent = '❌ ' + (data.error || 'Consolidation failed');
status.style.color = '#ff6b6b';
}
} catch (err) {
status.textContent = '❌ Error: ' + err.message;
status.style.color = '#ff6b6b';
} finally {
btn.disabled = false;
btn.textContent = '🌙 Run Consolidation';
}
}
async function loadFacts() {
const listDiv = document.getElementById('facts-list');
listDiv.innerHTML = '<div style="text-align: center; color: #888; padding: 1rem;">Loading facts...</div>';
try {
const data = await apiCall('/memory/facts');
if (!data.success || data.count === 0) {
listDiv.innerHTML = '<div style="text-align: center; color: #666; padding: 2rem;">No declarative facts stored yet.</div>';
return;
}
let html = '';
data.facts.forEach((fact, i) => {
const source = fact.metadata?.source || 'unknown';
const when = fact.metadata?.when ? new Date(fact.metadata.when * 1000).toLocaleString() : 'unknown';
const factDataJson = escapeJsonForAttribute(fact);
html += `
<div class="memory-item" style="background: #242424; padding: 0.6rem 0.8rem; margin-bottom: 0.4rem; border-radius: 4px; border-left: 3px solid #2a9955; display: flex; justify-content: space-between; align-items: flex-start;">
<div style="flex: 1;">
<div style="color: #ddd; font-size: 0.9rem;">${escapeHtml(fact.content)}</div>
<div style="color: #666; font-size: 0.75rem; margin-top: 0.3rem;">
Source: ${escapeHtml(source)} · ${when}
</div>
</div>
<div style="display: flex; gap: 0.3rem; flex-shrink: 0;">
<button data-memory='${factDataJson}' onclick='showEditMemoryModalFromButton(this, "declarative", "${fact.id}")'
style="background: none; border: none; color: #5599cc; cursor: pointer; padding: 0.2rem 0.4rem; font-size: 0.85rem;"
title="Edit this fact">✏️</button>
<button onclick="deleteMemoryPoint('declarative', '${fact.id}', this)"
style="background: none; border: none; color: #993333; cursor: pointer; padding: 0.2rem 0.4rem; font-size: 0.85rem;"
title="Delete this fact">🗑️</button>
</div>
</div>`;
});
listDiv.innerHTML = `<div style="color: #888; font-size: 0.8rem; margin-bottom: 0.5rem;">${data.count} facts loaded</div>` + html;
} catch (err) {
listDiv.innerHTML = `<div style="color: #ff6b6b; padding: 1rem;">Error loading facts: ${err.message}</div>`;
}
}
async function loadEpisodicMemories() {
const listDiv = document.getElementById('episodic-list');
listDiv.innerHTML = '<div style="text-align: center; color: #888; padding: 1rem;">Loading memories...</div>';
try {
const data = await apiCall('/memory/episodic');
if (!data.success || data.count === 0) {
listDiv.innerHTML = '<div style="text-align: center; color: #666; padding: 2rem;">No episodic memories stored yet.</div>';
return;
}
let html = '';
data.memories.forEach((mem, i) => {
const source = mem.metadata?.source || 'unknown';
const when = mem.metadata?.when ? new Date(mem.metadata.when * 1000).toLocaleString() : 'unknown';
const memDataJson = escapeJsonForAttribute(mem);
html += `
<div class="memory-item" style="background: #242424; padding: 0.6rem 0.8rem; margin-bottom: 0.4rem; border-radius: 4px; border-left: 3px solid #2a5599; display: flex; justify-content: space-between; align-items: flex-start;">
<div style="flex: 1;">
<div style="color: #ddd; font-size: 0.9rem;">${escapeHtml(mem.content)}</div>
<div style="color: #666; font-size: 0.75rem; margin-top: 0.3rem;">
Source: ${escapeHtml(source)} · ${when}
</div>
</div>
<div style="display: flex; gap: 0.3rem; flex-shrink: 0;">
<button data-memory='${memDataJson}' onclick='showEditMemoryModalFromButton(this, "episodic", "${mem.id}")'
style="background: none; border: none; color: #5599cc; cursor: pointer; padding: 0.2rem 0.4rem; font-size: 0.85rem;"
title="Edit this memory">✏️</button>
<button onclick="deleteMemoryPoint('episodic', '${mem.id}', this)"
style="background: none; border: none; color: #993333; cursor: pointer; padding: 0.2rem 0.4rem; font-size: 0.85rem;"
title="Delete this memory">🗑️</button>
</div>
</div>`;
});
listDiv.innerHTML = `<div style="color: #888; font-size: 0.8rem; margin-bottom: 0.5rem;">${data.count} memories loaded</div>` + html;
} catch (err) {
listDiv.innerHTML = `<div style="color: #ff6b6b; padding: 1rem;">Error loading memories: ${err.message}</div>`;
}
}
async function deleteMemoryPoint(collection, pointId, btnElement) {
if (!confirm(`Delete this ${collection} memory point?`)) return;
try {
const data = await apiCall(`/memory/point/${collection}/${pointId}`, 'DELETE');
if (data.success) {
// Remove the row from the UI (each row carries the .memory-item class)
const row = btnElement.closest('.memory-item');
if (row) row.remove();
showNotification('Memory point deleted', 'success');
refreshMemoryStats();
} else {
showNotification('Failed to delete: ' + (data.error || 'Unknown error'), 'error');
}
} catch (err) {
console.error('Failed to delete memory point:', err);
}
}
// Delete All Memories — Multi-step confirmation flow
function onDeleteStep1Change() {
const checked = document.getElementById('delete-checkbox-1').checked;
document.getElementById('delete-step-2').style.display = checked ? 'block' : 'none';
if (!checked) {
document.getElementById('delete-checkbox-2').checked = false;
document.getElementById('delete-step-3').style.display = 'none';
document.getElementById('delete-step-final').style.display = 'none';
document.getElementById('delete-confirmation-input').value = '';
}
}
function onDeleteStep2Change() {
const checked = document.getElementById('delete-checkbox-2').checked;
document.getElementById('delete-step-3').style.display = checked ? 'block' : 'none';
document.getElementById('delete-step-final').style.display = checked ? 'block' : 'none';
if (!checked) {
document.getElementById('delete-confirmation-input').value = '';
updateDeleteButton();
}
}
function onDeleteInputChange() {
updateDeleteButton();
}
function updateDeleteButton() {
const input = document.getElementById('delete-confirmation-input').value;
const expected = "Yes, I am deleting Miku's memories fully.";
const btn = document.getElementById('delete-all-btn');
const match = input === expected;
btn.disabled = !match;
btn.style.cursor = match ? 'pointer' : 'not-allowed';
btn.style.opacity = match ? '1' : '0.5';
}
async function executeDeleteAllMemories() {
const input = document.getElementById('delete-confirmation-input').value;
const expected = "Yes, I am deleting Miku's memories fully.";
if (input !== expected) {
showNotification('Confirmation string does not match', 'error');
return;
}
const btn = document.getElementById('delete-all-btn');
btn.disabled = true;
btn.textContent = '⏳ Deleting...';
try {
const data = await apiCall('/memory/delete', 'POST', { confirmation: input });
if (data.success) {
showNotification('All memories have been permanently deleted', 'success');
resetDeleteFlow();
refreshMemoryStats();
} else {
showNotification('Deletion failed: ' + (data.error || 'Unknown error'), 'error');
}
} catch (err) {
console.error('Failed to delete all memories:', err);
} finally {
btn.disabled = false;
btn.textContent = '🗑️ Permanently Delete All Memories';
}
}
function resetDeleteFlow() {
document.getElementById('delete-checkbox-1').checked = false;
document.getElementById('delete-checkbox-2').checked = false;
document.getElementById('delete-confirmation-input').value = '';
document.getElementById('delete-step-2').style.display = 'none';
document.getElementById('delete-step-3').style.display = 'none';
document.getElementById('delete-step-final').style.display = 'none';
updateDeleteButton();
}
// Memory Edit/Create Modal Functions
// currentEditMemory declared in core.js
function showEditMemoryModalFromButton(button, collection, pointId) {
const memoryJson = button.getAttribute('data-memory');
// Unescape HTML entities back to JSON
const unescapedJson = memoryJson
.replace(/&quot;/g, '"')
.replace(/&apos;/g, "'")
.replace(/&lt;/g, '<')
.replace(/&gt;/g, '>')
.replace(/&amp;/g, '&');
const memory = JSON.parse(unescapedJson);
showEditMemoryModal(collection, pointId, memory);
}
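The unescape chain in `showEditMemoryModalFromButton` is the inverse of `escapeJsonForAttribute`, which is declared in core.js and not shown in this diff. A minimal sketch of the assumed pair (the real implementation may differ), illustrating why `&` must be replaced first when escaping and `&amp;` last when unescaping for the round trip to be lossless:

```javascript
// Sketch only — assumed shape of the core.js helper, not the actual source.
function escapeJsonForAttributeSketch(obj) {
  return JSON.stringify(obj)
    .replace(/&/g, '&amp;')   // first, or later entities would be double-escaped
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&apos;');
}

function unescapeJsonFromAttributeSketch(s) {
  return s
    .replace(/&quot;/g, '"')
    .replace(/&apos;/g, "'")
    .replace(/&lt;/g, '<')
    .replace(/&gt;/g, '>')
    .replace(/&amp;/g, '&');  // last — the mirror image of the escape order
}
```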
function showEditMemoryModal(collection, pointId, memoryData) {
const memory = typeof memoryData === 'string' ? JSON.parse(memoryData) : memoryData;
currentEditMemory = { collection, pointId, memory };
const modal = document.getElementById('edit-memory-modal');
const contentField = document.getElementById('edit-memory-content');
const sourceField = document.getElementById('edit-memory-source');
contentField.value = memory.content || '';
sourceField.value = memory.metadata?.source || '';
modal.style.display = 'flex';
}
function closeEditMemoryModal() {
document.getElementById('edit-memory-modal').style.display = 'none';
currentEditMemory = null;
}
async function saveMemoryEdit() {
if (!currentEditMemory) return;
const content = document.getElementById('edit-memory-content').value.trim();
const source = document.getElementById('edit-memory-source').value.trim();
if (!content) {
showNotification('Content cannot be empty', 'error');
return;
}
const { collection, pointId } = currentEditMemory;
const saveBtn = document.querySelector('#edit-memory-modal button[onclick="saveMemoryEdit()"]');
saveBtn.disabled = true;
saveBtn.textContent = 'Saving...';
try {
const data = await apiCall(`/memory/point/${collection}/${pointId}`, 'PUT', {
content: content,
metadata: { source: source || 'manual_edit' }
});
if (data.success) {
showNotification('Memory updated successfully', 'success');
closeEditMemoryModal();
// Reload the appropriate list
if (collection === 'declarative') {
loadFacts();
} else if (collection === 'episodic') {
loadEpisodicMemories();
}
} else {
showNotification('Failed to update: ' + (data.error || 'Unknown error'), 'error');
}
} catch (err) {
console.error('Failed to save memory edit:', err);
} finally {
saveBtn.disabled = false;
saveBtn.textContent = 'Save Changes';
}
}
function showCreateMemoryModal(collection) {
const modal = document.getElementById('create-memory-modal');
document.getElementById('create-memory-collection').value = collection;
document.getElementById('create-memory-content').value = '';
document.getElementById('create-memory-user-id').value = '';
document.getElementById('create-memory-source').value = 'manual';
// Update modal title based on collection type
const title = collection === 'declarative' ? 'Add New Fact' : 'Add New Memory';
document.querySelector('#create-memory-modal h3').textContent = title;
modal.style.display = 'flex';
}
function closeCreateMemoryModal() {
document.getElementById('create-memory-modal').style.display = 'none';
}
// Modal keyboard and backdrop close handlers
document.addEventListener('keydown', function(e) {
if (e.key === 'Escape') {
const editModal = document.getElementById('edit-memory-modal');
const createModal = document.getElementById('create-memory-modal');
if (editModal && editModal.style.display !== 'none') closeEditMemoryModal();
if (createModal && createModal.style.display !== 'none') closeCreateMemoryModal();
}
});
async function saveNewMemory() {
const collection = document.getElementById('create-memory-collection').value;
const content = document.getElementById('create-memory-content').value.trim();
const userId = document.getElementById('create-memory-user-id').value.trim();
const source = document.getElementById('create-memory-source').value.trim();
if (!content) {
showNotification('Content cannot be empty', 'error');
return;
}
const createBtn = document.querySelector('#create-memory-modal button[onclick="saveNewMemory()"]');
createBtn.disabled = true;
createBtn.textContent = 'Creating...';
try {
const data = await apiCall('/memory/create', 'POST', {
collection: collection,
content: content,
user_id: userId || null,
source: source || 'manual',
metadata: {}
});
if (data.success) {
showNotification(`${collection === 'declarative' ? 'Fact' : 'Memory'} created successfully`, 'success');
closeCreateMemoryModal();
// Reload the appropriate list
if (collection === 'declarative') {
loadFacts();
} else if (collection === 'episodic') {
loadEpisodicMemories();
}
refreshMemoryStats();
} else {
showNotification('Failed to create: ' + (data.error || 'Unknown error'), 'error');
}
} catch (err) {
console.error('Failed to save new memory:', err);
} finally {
createBtn.disabled = false;
createBtn.textContent = 'Create Memory';
}
}
// Search/Filter Function
function filterMemories(listId, searchTerm) {
const listDiv = document.getElementById(listId);
const items = listDiv.querySelectorAll('.memory-item');
const term = searchTerm.toLowerCase().trim();
items.forEach(item => {
const content = item.textContent.toLowerCase();
if (term === '' || content.includes(term)) {
item.style.display = 'flex';
} else {
item.style.display = 'none';
}
});
}

bot/static/js/modes.js Normal file

@@ -0,0 +1,396 @@
// ============================================================================
// Miku Control Panel — Modes Module
// Evil Mode, GPU Selection, Bipolar Mode
// ============================================================================
// ===== Evil Mode Functions =====
async function checkEvilModeStatus() {
try {
const result = await apiCall('/evil-mode');
evilMode = result.evil_mode;
updateEvilModeUI();
if (evilMode && result.mood) {
const moodSelect = document.getElementById('mood');
moodSelect.value = result.mood;
}
} catch (error) {
console.error('Failed to check evil mode status:', error);
}
}
async function toggleEvilMode() {
try {
const toggleBtn = document.getElementById('evil-mode-toggle');
toggleBtn.disabled = true;
toggleBtn.textContent = '⏳ Switching...';
const result = await apiCall('/evil-mode/toggle', 'POST');
evilMode = result.evil_mode;
updateEvilModeUI();
if (evilMode) {
showNotification('😈 Evil Mode enabled! Evil Miku has awakened...');
} else {
showNotification('🎤 Evil Mode disabled. Normal Miku is back!');
}
} catch (error) {
console.error('Failed to toggle evil mode:', error);
showNotification('Failed to toggle evil mode: ' + error.message, 'error');
updateEvilModeUI(); // re-enable the button and restore its label after a failed toggle
}
}
function updateEvilModeUI() {
const body = document.body;
const title = document.getElementById('panel-title');
const toggleBtn = document.getElementById('evil-mode-toggle');
const moodSelect = document.getElementById('mood');
if (evilMode) {
body.classList.add('evil-mode');
title.textContent = 'Evil Miku Control Panel';
toggleBtn.textContent = '😈 Evil Mode: ON';
toggleBtn.disabled = false;
moodSelect.innerHTML = `
<option value="aggressive">👿 aggressive</option>
<option value="bored">🥱 bored</option>
<option value="contemptuous">👑 contemptuous</option>
<option value="cunning">🐍 cunning</option>
<option value="evil_neutral" selected>evil neutral</option>
<option value="jealous">💚 jealous</option>
<option value="manic">🤪 manic</option>
<option value="melancholic">🌑 melancholic</option>
<option value="playful_cruel">🎭 playful cruel</option>
<option value="sarcastic">😈 sarcastic</option>
`;
} else {
body.classList.remove('evil-mode');
title.textContent = 'Miku Control Panel';
toggleBtn.textContent = '😈 Evil Mode: OFF';
toggleBtn.disabled = false;
moodSelect.innerHTML = `
<option value="angry">💢 angry</option>
<option value="asleep">💤 asleep</option>
<option value="bubbly">🫧 bubbly</option>
<option value="curious">👀 curious</option>
<option value="excited">✨ excited</option>
<option value="flirty">🫦 flirty</option>
<option value="irritated">😒 irritated</option>
<option value="melancholy">🍷 melancholy</option>
<option value="neutral" selected>neutral</option>
<option value="romantic">💌 romantic</option>
<option value="serious">👔 serious</option>
<option value="shy">👉👈 shy</option>
<option value="silly">🪿 silly</option>
<option value="sleepy">🌙 sleepy</option>
`;
}
updateBipolarToggleVisibility();
}
// ===== GPU Selection Management =====
async function checkGPUStatus() {
try {
const data = await apiCall('/gpu-status');
selectedGPU = data.gpu || 'nvidia';
updateGPUUI();
} catch (error) {
console.error('Failed to check GPU status:', error);
}
}
async function toggleGPU() {
try {
const toggleBtn = document.getElementById('gpu-selector-toggle');
toggleBtn.disabled = true;
toggleBtn.textContent = '⏳ Switching...';
const result = await apiCall('/gpu-select', 'POST', {
gpu: selectedGPU === 'nvidia' ? 'amd' : 'nvidia'
});
selectedGPU = result.gpu;
updateGPUUI();
const gpuName = selectedGPU === 'nvidia' ? 'NVIDIA GTX 1660' : 'AMD RX 6800';
showNotification(`🎮 Switched to ${gpuName}!`);
} catch (error) {
console.error('Failed to toggle GPU:', error);
showNotification('Failed to switch GPU: ' + error.message, 'error');
// `toggleBtn` was declared inside the try block and is out of scope here;
// re-render via the shared helper so the button is re-enabled with the
// previous GPU label.
updateGPUUI();
}
}
function updateGPUUI() {
const toggleBtn = document.getElementById('gpu-selector-toggle');
if (selectedGPU === 'amd') {
toggleBtn.textContent = '🎮 GPU: AMD';
toggleBtn.style.background = '#c91432';
toggleBtn.style.borderColor = '#e91436';
} else {
toggleBtn.textContent = '🎮 GPU: NVIDIA';
toggleBtn.style.background = '#2a5599';
toggleBtn.style.borderColor = '#4a7bc9';
}
toggleBtn.disabled = false;
}
// ===== Bipolar Mode Management =====
async function checkBipolarModeStatus() {
try {
const data = await apiCall('/bipolar-mode');
bipolarMode = data.bipolar_mode;
updateBipolarModeUI();
} catch (error) {
console.error('Failed to check bipolar mode status:', error);
}
}
async function toggleBipolarMode() {
try {
const toggleBtn = document.getElementById('bipolar-mode-toggle');
toggleBtn.disabled = true;
toggleBtn.textContent = '⏳ Switching...';
const result = await apiCall('/bipolar-mode/toggle', 'POST');
bipolarMode = result.bipolar_mode;
updateBipolarModeUI();
if (bipolarMode) {
showNotification('🔄 Bipolar Mode enabled! Both Mikus can now argue...');
} else {
showNotification('🔄 Bipolar Mode disabled.');
}
} catch (error) {
console.error('Failed to toggle bipolar mode:', error);
showNotification('Failed to toggle bipolar mode: ' + error.message, 'error');
updateBipolarModeUI(); // re-enable the button after a failed toggle
}
}
function updateBipolarModeUI() {
const toggleBtn = document.getElementById('bipolar-mode-toggle');
const bipolarSection = document.getElementById('bipolar-section');
if (bipolarMode) {
toggleBtn.textContent = '🔄 Bipolar: ON';
toggleBtn.style.background = '#9932CC';
toggleBtn.style.borderColor = '#9932CC';
toggleBtn.disabled = false;
if (bipolarSection) {
bipolarSection.style.display = 'block';
loadScoreboard();
}
} else {
toggleBtn.textContent = '🔄 Bipolar: OFF';
toggleBtn.style.background = '#333';
toggleBtn.style.borderColor = '#666';
toggleBtn.disabled = false;
if (bipolarSection) {
bipolarSection.style.display = 'none';
}
}
}
function updateBipolarToggleVisibility() {
const bipolarToggle = document.getElementById('bipolar-mode-toggle');
bipolarToggle.style.display = 'block';
}
async function triggerPersonaDialogue() {
const messageIdInput = document.getElementById('dialogue-message-id').value.trim();
const statusDiv = document.getElementById('dialogue-status');
if (!messageIdInput) {
showNotification('Please enter a message ID', 'error');
return;
}
if (!/^\d+$/.test(messageIdInput)) {
showNotification('Invalid message ID format - should be a number', 'error');
return;
}
try {
statusDiv.innerHTML = '<span style="color: #6B8EFF;">⏳ Analyzing message for dialogue trigger...</span>';
const requestBody = {
message_id: messageIdInput
};
const result = await apiCall('/bipolar-mode/trigger-dialogue', 'POST', requestBody);
if (result.status === 'error') {
statusDiv.innerHTML = `<span style="color: #ff4444;">❌ ${result.message}</span>`;
showNotification(result.message, 'error');
return;
}
statusDiv.innerHTML = `<span style="color: #00ff00;">✅ ${result.message}</span>`;
showNotification(`💬 ${result.message}`);
document.getElementById('dialogue-message-id').value = '';
} catch (error) {
statusDiv.innerHTML = `<span style="color: #ff4444;">❌ Failed to trigger dialogue: ${error.message}</span>`;
showNotification(`Error: ${error.message}`, 'error');
}
}
async function triggerBipolarArgument() {
const channelIdInput = document.getElementById('bipolar-channel-id').value.trim();
const messageIdInput = document.getElementById('bipolar-message-id').value.trim();
const context = document.getElementById('bipolar-context').value.trim();
const statusDiv = document.getElementById('bipolar-status');
if (!channelIdInput) {
showNotification('Please enter a channel ID', 'error');
return;
}
if (!/^\d+$/.test(channelIdInput)) {
showNotification('Invalid channel ID format - should be a number', 'error');
return;
}
if (messageIdInput && !/^\d+$/.test(messageIdInput)) {
showNotification('Invalid message ID format - should be a number', 'error');
return;
}
try {
statusDiv.innerHTML = '<span style="color: #9932CC;">⏳ Triggering argument...</span>';
const requestBody = {
channel_id: channelIdInput,
context: context
};
if (messageIdInput) {
requestBody.message_id = messageIdInput;
}
const result = await apiCall('/bipolar-mode/trigger-argument', 'POST', requestBody);
if (result.status === 'error') {
statusDiv.innerHTML = `<span style="color: #ff4444;">❌ ${result.message}</span>`;
showNotification(result.message, 'error');
return;
}
statusDiv.innerHTML = `<span style="color: #00ff00;">✅ ${result.message}</span>`;
showNotification(`⚔️ Argument triggered!`);
document.getElementById('bipolar-context').value = '';
document.getElementById('bipolar-message-id').value = '';
loadActiveArguments();
loadScoreboard();
} catch (error) {
statusDiv.innerHTML = `<span style="color: #ff4444;">❌ ${error.message}</span>`;
showNotification('Failed to trigger argument: ' + error.message, 'error');
}
}
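The same `/^\d+$/` check appears in both triggerPersonaDialogue() and triggerBipolarArgument(); it could be factored into one shared helper. A minimal sketch (the name `isValidSnowflake` is hypothetical, not in the codebase):

```javascript
// Hypothetical helper: Discord IDs (snowflakes) are strings of decimal
// digits, so a regex test validates them without any numeric conversion.
function isValidSnowflake(id) {
    return typeof id === 'string' && /^\d+$/.test(id);
}
```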
async function loadScoreboard() {
const scoreboardContent = document.getElementById('scoreboard-content');
try {
const result = await apiCall('/bipolar-mode/scoreboard', 'GET');
if (result.status === 'error') {
scoreboardContent.innerHTML = `<p style="color: #ff4444;">Failed to load scoreboard</p>`;
return;
}
const { scoreboard } = result;
const total = scoreboard.total_arguments;
if (total === 0) {
scoreboardContent.innerHTML = `<p style="color: #888;">No arguments have been judged yet.</p>`;
return;
}
// total === 0 already returned above, so no divide-by-zero guard is needed here
const mikuPct = ((scoreboard.miku_wins / total) * 100).toFixed(1);
const evilPct = ((scoreboard.evil_wins / total) * 100).toFixed(1);
let html = `
<div style="display: flex; justify-content: space-between; margin-bottom: 0.8rem;">
<div style="text-align: center; flex: 1;">
<div style="color: #86cecb; font-size: 1.2rem; font-weight: bold;">${scoreboard.miku_wins}</div>
<div style="color: #888; font-size: 0.85rem;">Hatsune Miku</div>
<div style="color: #999; font-size: 0.75rem;">${mikuPct}%</div>
</div>
<div style="align-self: center; color: #666; font-size: 1.2rem;">vs</div>
<div style="text-align: center; flex: 1;">
<div style="color: #D60004; font-size: 1.2rem; font-weight: bold;">${scoreboard.evil_wins}</div>
<div style="color: #888; font-size: 0.85rem;">Evil Miku</div>
<div style="color: #999; font-size: 0.75rem;">${evilPct}%</div>
</div>
</div>
<div style="text-align: center; color: #aaa; font-size: 0.85rem; border-top: 1px solid #333; padding-top: 0.5rem;">
Total Arguments: ${total}
</div>
`;
if (scoreboard.history && scoreboard.history.length > 0) {
html += `<div style="margin-top: 0.8rem; padding-top: 0.8rem; border-top: 1px solid #333;">
<div style="color: #888; font-size: 0.8rem; margin-bottom: 0.3rem;">Recent Results:</div>`;
// Copy before reversing so the response object itself is left untouched
scoreboard.history.slice().reverse().forEach(entry => {
const winnerName = entry.winner === 'evil' ? 'Evil Miku' : 'Hatsune Miku';
const winnerColor = entry.winner === 'evil' ? '#D60004' : '#86cecb';
const date = new Date(entry.timestamp).toLocaleString();
html += `<div style="font-size: 0.75rem; color: #666; margin-bottom: 0.2rem;">
<span style="color: ${winnerColor};">🏆 ${winnerName}</span> (${entry.exchanges} exchanges) - ${date}
</div>`;
});
html += `</div>`;
}
scoreboardContent.innerHTML = html;
} catch (error) {
scoreboardContent.innerHTML = `<p style="color: #ff4444;">Error loading scoreboard</p>`;
console.error('Scoreboard error:', error);
}
}
async function loadActiveArguments() {
try {
const data = await apiCall('/bipolar-mode/arguments');
const container = document.getElementById('active-arguments');
const list = document.getElementById('active-arguments-list');
if (Object.keys(data.active_arguments).length > 0) {
container.style.display = 'block';
list.innerHTML = '';
for (const [channelId, argData] of Object.entries(data.active_arguments)) {
const div = document.createElement('div');
div.style.background = '#2a2a3e';
div.style.padding = '0.5rem';
div.style.marginBottom = '0.5rem';
div.style.borderRadius = '4px';
div.innerHTML = `
<strong>#${argData.channel_name}</strong><br>
<small>Exchanges: ${argData.exchange_count} | Speaker: ${argData.current_speaker}</small>
`;
list.appendChild(div);
}
} else {
container.style.display = 'none';
}
} catch (error) {
console.error('Failed to load active arguments:', error);
}
}

bot/static/js/profile.js: new file, 1127 lines (diff suppressed because it is too large)

bot/static/js/servers.js: new file, 684 lines
// ===== Server Management Functions =====
async function loadServers() {
try {
console.log('🎭 Loading servers...');
const data = await apiCall('/servers');
console.log('🎭 Servers response:', data);
if (data.servers) {
servers = data.servers;
console.log(`🎭 Loaded ${servers.length} servers:`, servers);
// Debug: Log each server's guild_id
servers.forEach((server, index) => {
console.log(`🎭 Server ${index}: guild_id = ${server.guild_id}, name = ${server.guild_name}`);
});
// Debug: Show raw response data
console.log('🎭 Raw API response data:', JSON.stringify(data, null, 2));
// Display servers
displayServers();
populateServerDropdowns();
populateMoodDropdowns(); // Populate mood dropdowns after servers are loaded
} else {
console.warn('🎭 No servers found in response');
servers = [];
}
} catch (error) {
console.error('🎭 Failed to load servers:', error);
servers = [];
}
}
function displayServers() {
const container = document.getElementById('servers-list');
if (servers.length === 0) {
container.innerHTML = '<p>No servers configured</p>';
return;
}
container.innerHTML = servers.map(server => `
<div class="server-card">
<div class="server-header">
<div class="server-name">${server.guild_name}</div>
<div class="server-actions">
<button onclick="editServer('${String(server.guild_id)}')">Edit</button>
<button onclick="removeServer('${String(server.guild_id)}')" style="background: #d32f2f;">Remove</button>
</div>
</div>
<div><strong>Guild ID:</strong> ${server.guild_id}</div>
<div><strong>Autonomous Channel:</strong> #${server.autonomous_channel_name} (${server.autonomous_channel_id})</div>
<div><strong>Bedtime Channels:</strong> ${server.bedtime_channel_ids.join(', ')}</div>
<div><strong>Features:</strong>
${server.enabled_features.map(feature => `<span class="feature-tag">${feature}</span>`).join('')}
</div>
<div><strong>Autonomous Interval:</strong> ${server.autonomous_interval_minutes} minutes</div>
<div><strong>Conversation Detection:</strong> ${server.conversation_detection_interval_minutes} minutes</div>
<div><strong>Bedtime Range:</strong> ${String(server.bedtime_hour || 21).padStart(2, '0')}:${String(server.bedtime_minute || 0).padStart(2, '0')} - ${String(server.bedtime_hour_end || 23).padStart(2, '0')}:${String(server.bedtime_minute_end || 59).padStart(2, '0')}</div>
<!-- Bedtime Configuration -->
<div style="margin-top: 1rem; padding: 1rem; background: #2a2a2a; border-radius: 4px;">
<h4 style="margin: 0 0 0.5rem 0; color: #61dafb;">Bedtime Settings</h4>
<div style="display: grid; grid-template-columns: 1fr 1fr; gap: 0.5rem; margin-bottom: 0.5rem;">
<div>
<label style="display: block; font-size: 0.9rem; margin-bottom: 0.2rem;">Start Time:</label>
<input type="time" id="bedtime-start-${String(server.guild_id)}" value="${String(server.bedtime_hour || 21).padStart(2, '0')}:${String(server.bedtime_minute || 0).padStart(2, '0')}" style="padding: 0.3rem; background: #333; color: white; border: 1px solid #555; border-radius: 3px; width: 100%;">
</div>
<div>
<label style="display: block; font-size: 0.9rem; margin-bottom: 0.2rem;">End Time:</label>
<input type="time" id="bedtime-end-${String(server.guild_id)}" value="${String(server.bedtime_hour_end || 23).padStart(2, '0')}:${String(server.bedtime_minute_end || 59).padStart(2, '0')}" style="padding: 0.3rem; background: #333; color: white; border: 1px solid #555; border-radius: 3px; width: 100%;">
</div>
</div>
<button onclick="updateBedtimeRange('${String(server.guild_id)}')" style="background: #4caf50;">Update Bedtime Range</button>
</div>
<!-- Per-Server Mood Display -->
<div style="margin-top: 1rem; padding: 1rem; background: #2a2a2a; border-radius: 4px;">
<h4 style="margin: 0 0 0.5rem 0; color: #61dafb;">Server Mood</h4>
<div><strong>Current Mood:</strong> ${server.current_mood_name || 'neutral'} ${MOOD_EMOJIS[server.current_mood_name] || ''}</div>
<div><strong>Sleeping:</strong> ${server.is_sleeping ? 'Yes' : 'No'}</div>
<div style="margin-top: 0.5rem;">
<select id="mood-select-${String(server.guild_id)}" style="margin-right: 0.5rem; padding: 0.3rem; background: #333; color: white; border: 1px solid #555; border-radius: 3px;">
<option value="">Select Mood...</option>
</select>
<button onclick="setServerMood('${String(server.guild_id)}')" style="margin-right: 0.5rem;">Change Mood</button>
<button onclick="resetServerMood('${String(server.guild_id)}')" style="background: #ff9800;">Reset Mood</button>
</div>
</div>
</div>
`).join('');
// Debug: Log what element IDs were created
console.log('🎭 Server cards rendered. Checking for mood-select elements:');
document.querySelectorAll('[id^="mood-select-"]').forEach(el => {
console.log(`🎭 Found mood-select element: ${el.id}`);
});
// Populate mood dropdowns after server cards are created
populateMoodDropdowns();
}
async function populateServerDropdowns() {
const serverSelect = document.getElementById('server-select');
const manualServerSelect = document.getElementById('manual-server-select');
const customPromptServerSelect = document.getElementById('custom-prompt-server-select');
// Clear existing options except "All Servers"
serverSelect.innerHTML = '<option value="all">All Servers</option>';
manualServerSelect.innerHTML = '<option value="all">All Servers</option>';
customPromptServerSelect.innerHTML = '<option value="all">All Servers</option>';
console.log('🎭 Populating server dropdowns with', servers.length, 'servers');
// Add server options
servers.forEach(server => {
console.log(`🎭 Adding server to dropdown: ${server.guild_name} (guild_id: ${server.guild_id}, type: ${typeof server.guild_id})`);
const option = document.createElement('option');
option.value = server.guild_id;
option.textContent = server.guild_name;
serverSelect.appendChild(option.cloneNode(true));
manualServerSelect.appendChild(option);
customPromptServerSelect.appendChild(option.cloneNode(true));
});
// Debug: Check what's actually in the manual-server-select dropdown
console.log('🎭 manual-server-select options:');
Array.from(manualServerSelect.options).forEach((opt, idx) => {
console.log(` [${idx}] value="${opt.value}" text="${opt.textContent}"`);
});
// Populate autonomous stats dropdown
populateAutonomousServerDropdown();
}
// Figurine subscribers UI functions (must be global for onclick handlers)
async function refreshFigurineSubscribers() {
try {
console.log('🔄 Figurines: Fetching subscribers...');
const data = await apiCall('/figurines/subscribers');
console.log('📋 Figurines: Received subscribers:', data);
displayFigurineSubscribers(data.subscribers || []);
showNotification('Subscribers refreshed');
} catch (e) {
console.error('❌ Figurines: Failed to fetch subscribers:', e);
}
}
function displayFigurineSubscribers(subscribers) {
const container = document.getElementById('figurine-subscribers-list');
if (!container) return;
if (!subscribers.length) {
container.innerHTML = '<p>No subscribers yet.</p>';
return;
}
let html = '<ul>';
subscribers.forEach(uid => {
const uidStr = String(uid);
html += `<li><code>${uidStr}</code> <button onclick="removeFigurineSubscriber('${uidStr}')">Remove</button></li>`;
});
html += '</ul>';
container.innerHTML = html;
}
async function addFigurineSubscriber() {
try {
console.log(' Figurines: Adding subscriber...');
const uid = document.getElementById('figurine-user-id').value.trim();
if (!uid) {
showNotification('Enter a user ID', 'error');
return;
}
const form = new FormData();
form.append('user_id', uid);
const res = await fetch('/figurines/subscribers', { method: 'POST', body: form });
const data = await res.json();
console.log(' Figurines: Add subscriber response:', data);
if (data.status === 'ok') {
showNotification('Subscriber added');
document.getElementById('figurine-user-id').value = '';
refreshFigurineSubscribers();
} else {
showNotification(data.message || 'Failed to add subscriber', 'error');
}
} catch (e) {
console.error('❌ Figurines: Failed to add subscriber:', e);
showNotification('Failed to add subscriber', 'error');
}
}
async function removeFigurineSubscriber(uid) {
try {
console.log(`🗑️ Figurines: Removing subscriber ${uid}...`);
const data = await apiCall(`/figurines/subscribers/${uid}`, 'DELETE');
console.log('🗑️ Figurines: Remove subscriber response:', data);
if (data.status === 'ok') {
showNotification('Subscriber removed');
refreshFigurineSubscribers();
} else {
showNotification(data.message || 'Failed to remove subscriber', 'error');
}
} catch (e) {
console.error('❌ Figurines: Failed to remove subscriber:', e);
}
}
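The comment at the top of this section notes these functions must be global for the inline onclick handlers. That holds only while servers.js is loaded as a classic script; if it were ever switched to `type="module"`, top-level declarations would become module-scoped and the inline handlers would throw ReferenceError. A hedged sketch of making the exposure explicit (the helper name `registerGlobals` is hypothetical):

```javascript
// Sketch only: inline onclick="…" attributes resolve names in the global
// scope, so registering handlers on globalThis keeps them reachable even
// if this file is later loaded as an ES module.
function registerGlobals(handlers) {
    for (const [name, fn] of Object.entries(handlers)) {
        globalThis[name] = fn;
    }
}
```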
async function sendFigurineNowToAll() {
try {
console.log('📨 Figurines: Triggering send to all subscribers...');
const tweetUrl = document.getElementById('figurine-tweet-url-all').value.trim();
const statusDiv = document.getElementById('figurine-all-status');
statusDiv.textContent = 'Sending...';
statusDiv.style.color = evilMode ? '#ff4444' : '#007bff';
const formData = new FormData();
if (tweetUrl) {
formData.append('tweet_url', tweetUrl);
}
const res = await fetch('/figurines/send_now', {
method: 'POST',
body: formData
});
const data = await res.json();
console.log('📨 Figurines: Send to all response:', data);
if (data.status === 'ok') {
showNotification('Figurine DMs queued for all subscribers');
statusDiv.textContent = 'Queued successfully';
statusDiv.style.color = '#28a745';
document.getElementById('figurine-tweet-url-all').value = ''; // Clear input
} else {
showNotification(data.message || 'Bot not ready', 'error');
statusDiv.textContent = 'Failed: ' + (data.message || 'Unknown error');
statusDiv.style.color = '#dc3545';
}
} catch (e) {
console.error('❌ Figurines: Failed to queue figurine DMs for all:', e);
showNotification('Failed to queue figurine DMs', 'error');
document.getElementById('figurine-all-status').textContent = 'Error: ' + e.message;
document.getElementById('figurine-all-status').style.color = '#dc3545';
}
}
async function sendFigurineToSingleUser() {
try {
const userId = document.getElementById('figurine-single-user-id').value.trim();
const tweetUrl = document.getElementById('figurine-tweet-url-single').value.trim();
const statusDiv = document.getElementById('figurine-single-status');
if (!userId) {
showNotification('Enter a user ID', 'error');
return;
}
console.log(`📨 Figurines: Sending to single user ${userId}, tweet: ${tweetUrl || 'random'}`);
statusDiv.textContent = 'Sending...';
statusDiv.style.color = evilMode ? '#ff4444' : '#007bff';
const formData = new FormData();
formData.append('user_id', userId);
if (tweetUrl) {
formData.append('tweet_url', tweetUrl);
}
const res = await fetch('/figurines/send_to_user', {
method: 'POST',
body: formData
});
const data = await res.json();
console.log('📨 Figurines: Send to single user response:', data);
if (data.status === 'ok') {
showNotification(`Figurine DM queued for user ${userId}`);
statusDiv.textContent = 'Queued successfully';
statusDiv.style.color = '#28a745';
document.getElementById('figurine-single-user-id').value = ''; // Clear inputs
document.getElementById('figurine-tweet-url-single').value = '';
} else {
showNotification(data.message || 'Failed to queue DM', 'error');
statusDiv.textContent = 'Failed: ' + (data.message || 'Unknown error');
statusDiv.style.color = '#dc3545';
}
} catch (e) {
console.error('❌ Figurines: Failed to queue figurine DM for single user:', e);
showNotification('Failed to queue figurine DM', 'error');
document.getElementById('figurine-single-status').textContent = 'Error: ' + e.message;
document.getElementById('figurine-single-status').style.color = '#dc3545';
}
}
// Keep the old function for backward compatibility
async function sendFigurineNow() {
return sendFigurineNowToAll();
}
async function addServer() {
// Don't use parseInt() for Discord IDs - they're too large for JS integers
const guildId = document.getElementById('new-guild-id').value.trim();
const guildName = document.getElementById('new-guild-name').value;
const autonomousChannelId = document.getElementById('new-autonomous-channel-id').value.trim();
const autonomousChannelName = document.getElementById('new-autonomous-channel-name').value;
const bedtimeChannelIds = document.getElementById('new-bedtime-channel-ids').value
.split(',').map(id => id.trim()).filter(id => id.length > 0);
const enabledFeatures = [];
if (document.getElementById('feature-autonomous').checked) enabledFeatures.push('autonomous');
if (document.getElementById('feature-bedtime').checked) enabledFeatures.push('bedtime');
if (document.getElementById('feature-monday-video').checked) enabledFeatures.push('monday_video');
if (!guildId || !guildName || !autonomousChannelId || !autonomousChannelName) {
showNotification('Please fill in all required fields', 'error');
return;
}
try {
await apiCall('/servers', 'POST', {
guild_id: guildId,
guild_name: guildName,
autonomous_channel_id: autonomousChannelId,
autonomous_channel_name: autonomousChannelName,
bedtime_channel_ids: bedtimeChannelIds.length > 0 ? bedtimeChannelIds : [autonomousChannelId],
enabled_features: enabledFeatures
});
showNotification('Server added successfully');
loadServers();
// Clear form
document.getElementById('new-guild-id').value = '';
document.getElementById('new-guild-name').value = '';
document.getElementById('new-autonomous-channel-id').value = '';
document.getElementById('new-autonomous-channel-name').value = '';
document.getElementById('new-bedtime-channel-ids').value = '';
} catch (error) {
console.error('Failed to add server:', error);
}
}
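The comment in addServer() warns against parseInt() for Discord IDs; the reason is that snowflakes are 64-bit integers, while JS Numbers carry only 53 bits of integer precision (Number.MAX_SAFE_INTEGER), so any numeric round-trip can silently corrupt the low digits. A quick illustration with a made-up 19-digit ID:

```javascript
// Discord snowflakes are 64-bit integers; JS Numbers lose integer
// precision above 2^53 - 1, so converting and back mangles the low digits.
const id = '1125899906842624001'; // example 19-digit snowflake (> 2^53)
const roundTripped = String(Number(id));
console.log(roundTripped === id); // false: precision was lost
// Keeping the ID as a string (as addServer() does) round-trips safely.
```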
async function removeServer(guildId) {
if (!confirm('Are you sure you want to remove this server?')) {
return;
}
try {
await apiCall(`/servers/${guildId}`, 'DELETE');
showNotification('Server removed successfully');
loadServers();
} catch (error) {
console.error('Failed to remove server:', error);
}
}
async function editServer(guildId) {
// For now, just show a notification - you can implement a full edit form later
showNotification('Edit functionality coming soon!');
}
async function repairConfig() {
if (!confirm('This will attempt to repair corrupted server configurations. Are you sure?')) {
return;
}
try {
await apiCall('/servers/repair', 'POST');
showNotification('Configuration repair initiated. Please refresh the page to see updated server list.');
loadServers(); // Reload servers to reflect potential changes
} catch (error) {
console.error('Failed to repair config:', error);
showNotification(error.message || 'Failed to repair configuration', 'error');
}
}
// Populate mood dropdowns with available moods
async function populateMoodDropdowns() {
try {
console.log('🎭 Loading available moods...');
const data = await apiCall('/moods/available');
console.log('🎭 Available moods response:', data);
if (data.moods) {
console.log(`🎭 Found ${data.moods.length} moods:`, data.moods);
const emojiMap = evilMode ? EVIL_MOOD_EMOJIS : MOOD_EMOJIS;
// Populate the DM mood dropdown (#mood on tab1)
const dmMoodSelect = document.getElementById('mood');
if (dmMoodSelect) {
dmMoodSelect.innerHTML = '';
data.moods.forEach(mood => {
const opt = document.createElement('option');
opt.value = mood;
opt.textContent = `${emojiMap[mood] || ''} ${mood}`.trim();
if (mood === 'neutral') opt.selected = true;
dmMoodSelect.appendChild(opt);
});
}
// Populate the chat mood dropdown (#chat-mood-select on tab7)
const chatMoodSelect = document.getElementById('chat-mood-select');
if (chatMoodSelect) {
chatMoodSelect.innerHTML = '';
data.moods.forEach(mood => {
const opt = document.createElement('option');
opt.value = mood;
opt.textContent = `${emojiMap[mood] || ''} ${mood}`.trim();
if (mood === 'neutral') opt.selected = true;
chatMoodSelect.appendChild(opt);
});
}
// Populate per-server mood dropdowns (mood-select-{guildId})
document.querySelectorAll('[id^="mood-select-"]').forEach(select => {
// Keep only the first option ("Select Mood...")
while (select.children.length > 1) {
select.removeChild(select.lastChild);
}
});
data.moods.forEach(mood => {
const moodOption = document.createElement('option');
moodOption.value = mood;
moodOption.textContent = `${mood} ${emojiMap[mood] || ''}`;
document.querySelectorAll('[id^="mood-select-"]').forEach(select => {
select.appendChild(moodOption.cloneNode(true));
});
});
console.log('🎭 All mood dropdowns populated successfully');
} else {
console.warn('🎭 No moods found in response');
}
} catch (error) {
console.error('🎭 Failed to load available moods:', error);
}
}
// Per-Server Mood Management
async function setServerMood(guildId) {
console.log(`🎭 setServerMood called with guildId: ${guildId} (type: ${typeof guildId})`);
// Ensure guildId is a string for consistency
const guildIdStr = String(guildId);
console.log(`🎭 Using guildId as string: ${guildIdStr}`);
// Debug: Check what elements exist
const elementId = `mood-select-${guildIdStr}`;
console.log(`🎭 Looking for element with ID: ${elementId}`);
const moodSelect = document.getElementById(elementId);
console.log(`🎭 Found element:`, moodSelect);
if (!moodSelect) {
console.error(`🎭 ERROR: Element with ID '${elementId}' not found!`);
console.log(`🎭 Available mood-select elements:`, document.querySelectorAll('[id^="mood-select-"]'));
showNotification(`Error: Mood selector not found for server ${guildIdStr}`, 'error');
return;
}
const selectedMood = moodSelect.value;
console.log(`🎭 Setting mood for server ${guildIdStr} to ${selectedMood}`);
if (!selectedMood) {
showNotification('Please select a mood', 'error');
return;
}
// Get the button and store original text before any changes
const button = moodSelect.nextElementSibling;
const originalText = button.textContent;
try {
// Show loading state
button.textContent = 'Changing...';
button.disabled = true;
console.log(`🎭 Making API call to /servers/${guildIdStr}/mood with mood: ${selectedMood}`);
const response = await apiCall(`/servers/${guildIdStr}/mood`, 'POST', { mood: selectedMood });
console.log(`🎭 API response:`, response);
if (response.status === 'ok') {
showNotification(`Server mood changed to ${selectedMood} ${MOOD_EMOJIS[selectedMood] || ''}`);
// Reset dropdown selection
moodSelect.value = '';
// Reload servers to show updated mood
loadServers();
} else {
showNotification(`Failed to change mood: ${response.message}`, 'error');
}
} catch (error) {
console.error(`🎭 Error setting mood:`, error);
showNotification(`Failed to change mood: ${error}`, 'error');
} finally {
// Restore button state
button.textContent = originalText;
button.disabled = false;
}
}
async function resetServerMood(guildId) {
console.log(`🎭 resetServerMood called with guildId: ${guildId} (type: ${typeof guildId})`);
// Ensure guildId is a string for consistency
const guildIdStr = String(guildId);
console.log(`🎭 Using guildId as string: ${guildIdStr}`);
const button = document.querySelector(`button[onclick="resetServerMood('${guildIdStr}')"]`);
const originalText = button ? button.textContent : 'Reset';
try {
// Show loading state
if (button) {
button.textContent = 'Resetting...';
button.disabled = true;
}
await apiCall(`/servers/${guildIdStr}/mood/reset`, 'POST');
showNotification(`Server mood reset to neutral`);
// Reload servers to show updated mood
loadServers();
} catch (error) {
showNotification(`Failed to reset mood: ${error}`, 'error');
} finally {
// Restore button state
if (button) {
button.textContent = originalText;
button.disabled = false;
}
}
}
async function updateBedtimeRange(guildId) {
console.log(`⏰ updateBedtimeRange called with guildId: ${guildId}`);
// Ensure guildId is a string for consistency
const guildIdStr = String(guildId);
// Get the time values from the inputs
const startTimeInput = document.getElementById(`bedtime-start-${guildIdStr}`);
const endTimeInput = document.getElementById(`bedtime-end-${guildIdStr}`);
if (!startTimeInput || !endTimeInput) {
showNotification('Could not find bedtime time inputs', 'error');
return;
}
const startTime = startTimeInput.value; // Format: "HH:MM"
const endTime = endTimeInput.value; // Format: "HH:MM"
if (!startTime || !endTime) {
showNotification('Please enter both start and end times', 'error');
return;
}
// Parse the times
const [startHour, startMinute] = startTime.split(':').map(Number);
const [endHour, endMinute] = endTime.split(':').map(Number);
const button = document.querySelector(`button[onclick="updateBedtimeRange('${guildIdStr}')"]`);
const originalText = button ? button.textContent : 'Update Bedtime Range';
try {
// Show loading state
if (button) {
button.textContent = 'Updating...';
button.disabled = true;
}
// Send the update request
await apiCall(`/servers/${guildIdStr}/bedtime-range`, 'POST', {
bedtime_hour: startHour,
bedtime_minute: startMinute,
bedtime_hour_end: endHour,
bedtime_minute_end: endMinute
});
showNotification(`Bedtime range updated: ${startTime} - ${endTime}`);
// Reload servers to show updated configuration
loadServers();
} catch (error) {
console.error('Failed to update bedtime range:', error);
} finally {
// Restore button state
if (button) {
button.textContent = originalText;
button.disabled = false;
}
}
}
// Mood Management
async function setMood() {
const mood = document.getElementById('mood').value;
try {
// Use different endpoint for evil mode
const endpoint = evilMode ? '/evil-mode/mood' : '/mood';
await apiCall(endpoint, 'POST', { mood: mood });
showNotification(`Mood set to ${mood}`);
currentMood = mood;
} catch (error) {
console.error('Failed to set mood:', error);
}
}
async function resetMood() {
try {
if (evilMode) {
await apiCall('/evil-mode/mood', 'POST', { mood: 'evil_neutral' });
showNotification('Evil mood reset to evil_neutral');
currentMood = 'evil_neutral';
document.getElementById('mood').value = 'evil_neutral';
} else {
await apiCall('/mood/reset', 'POST');
showNotification('Mood reset to neutral');
currentMood = 'neutral';
document.getElementById('mood').value = 'neutral';
}
} catch (error) {
console.error('Failed to reset mood:', error);
}
}
async function calmMiku() {
try {
if (evilMode) {
await apiCall('/evil-mode/mood', 'POST', { mood: 'evil_neutral' });
showNotification('Evil Miku has been calmed down');
currentMood = 'evil_neutral';
document.getElementById('mood').value = 'evil_neutral';
} else {
await apiCall('/mood/calm', 'POST');
showNotification('Miku has been calmed down');
}
} catch (error) {
console.error('Failed to calm Miku:', error);
}
}
// ===== Language Mode Functions =====
async function refreshLanguageStatus() {
try {
const result = await apiCall('/language');
document.getElementById('current-language-display').textContent =
result.language_mode === 'japanese' ? '日本語 (Japanese)' : 'English';
document.getElementById('status-language').textContent =
result.language_mode === 'japanese' ? '日本語 (Japanese)' : 'English';
document.getElementById('status-model').textContent = result.current_model;
console.log('Language status:', result);
} catch (error) {
console.error('Failed to get language status:', error);
showNotification('Failed to load language status', 'error');
}
}
async function toggleLanguageMode() {
try {
const result = await apiCall('/language/toggle', 'POST');
// Update UI
document.getElementById('current-language-display').textContent =
result.language_mode === 'japanese' ? '日本語 (Japanese)' : 'English';
document.getElementById('status-language').textContent =
result.language_mode === 'japanese' ? '日本語 (Japanese)' : 'English';
document.getElementById('status-model').textContent = result.model_now_using;
// Show notification
showNotification(result.message, 'success');
console.log('Language toggled:', result);
} catch (error) {
console.error('Failed to toggle language mode:', error);
showNotification('Failed to toggle language mode', 'error');
}
}

bot/static/js/status.js: new file, 524 lines
// ============================================================================
// Miku Control Panel — Status Module
// Status display, last prompt, autonomous stats
// ============================================================================
// ===== Status =====
async function loadStatus() {
try {
const result = await apiCall('/status');
const statusDiv = document.getElementById('status');
if (result.evil_mode !== undefined && result.evil_mode !== evilMode) {
evilMode = result.evil_mode;
updateEvilModeUI();
if (evilMode && result.mood) {
const moodSelect = document.getElementById('mood');
if (moodSelect) moodSelect.value = result.mood;
}
}
if (result.mood) {
const moodSelect = document.getElementById('mood');
if (moodSelect && moodSelect.querySelector(`option[value="${result.mood}"]`)) {
moodSelect.value = result.mood;
}
currentMood = result.mood;
}
let serverMoodsHtml = '';
if (result.server_moods) {
serverMoodsHtml = '<div style="margin-top: 0.5rem;"><strong>Server Moods:</strong><br>';
for (const [guildId, mood] of Object.entries(result.server_moods)) {
const server = servers.find(s => s.guild_id == guildId); // loose == on purpose: guild_id may arrive as number or string
const serverName = server ? server.guild_name : `Server ${guildId}`;
const emojiMap = evilMode ? EVIL_MOOD_EMOJIS : MOOD_EMOJIS;
serverMoodsHtml += `${serverName}: ${mood} ${emojiMap[mood] || ''}<br>`;
}
serverMoodsHtml += '</div>';
}
const moodEmoji = evilMode ? (EVIL_MOOD_EMOJIS[result.mood] || '') : (MOOD_EMOJIS[result.mood] || '');
const moodLabel = evilMode ? `😈 ${result.mood} ${moodEmoji}` : `${result.mood} ${moodEmoji}`;
statusDiv.innerHTML = `
<div><strong>Status:</strong> ${result.status}</div>
<div><strong>DM Mood:</strong> ${moodLabel}</div>
<div><strong>Servers:</strong> ${result.servers}</div>
<div><strong>Active Schedulers:</strong> ${result.active_schedulers}</div>
<div style="margin-top: 0.5rem; padding: 0.5rem; background: #2a2a2a; border-radius: 4px; font-size: 0.9rem;">
<strong>💬 DM Support:</strong> Users can message Miku directly in DMs. She responds to every DM message using the DM mood (auto-rotating every 2 hours).
</div>
${serverMoodsHtml}
`;
} catch (error) {
console.error('Failed to load status:', error);
}
}
// ===== Prompt History =====
let _promptHistoryCache = []; // cached history entries from last fetch
let _selectedPromptId = null; // currently selected entry ID
let _middleTruncation = false; // whether middle-truncation is active
async function loadPromptHistory() {
const source = localStorage.getItem('miku-prompt-source') || 'all';
const selectEl = document.getElementById('prompt-history-select');
try {
const url = source === 'all' ? '/prompts' : `/prompts?source=${source}`;
const result = await apiCall(url);
_promptHistoryCache = result.history || [];
// Populate dropdown
selectEl.innerHTML = '';
if (_promptHistoryCache.length === 0) {
selectEl.innerHTML = '<option value="">-- No prompts yet --</option>';
} else {
_promptHistoryCache.forEach(entry => {
const ts = entry.timestamp ? new Date(entry.timestamp).toLocaleTimeString() : '?';
const srcLabel = entry.source === 'cat' ? '🐱' : '🤖';
const user = entry.user || '?';
const option = document.createElement('option');
option.value = entry.id;
option.textContent = `${srcLabel} #${entry.id} · ${user} · ${ts}`;
selectEl.appendChild(option);
});
}
// Restore or auto-select the latest entry
if (_selectedPromptId && _promptHistoryCache.some(e => e.id === _selectedPromptId)) {
selectEl.value = _selectedPromptId;
} else if (_promptHistoryCache.length > 0) {
selectEl.value = _promptHistoryCache[0].id;
}
if (selectEl.value) {
await selectPromptEntry(selectEl.value);
} else {
clearPromptDisplay();
}
} catch (error) {
console.error('Failed to load prompt history:', error);
}
}
async function selectPromptEntry(promptId) {
if (!promptId) {
clearPromptDisplay();
return;
}
_selectedPromptId = parseInt(promptId);
// Try cache first
let entry = _promptHistoryCache.find(e => e.id === _selectedPromptId);
// Fall back to API call if not in cache
if (!entry) {
try {
entry = await apiCall(`/prompts/${_selectedPromptId}`);
} catch (error) {
console.error('Failed to load prompt entry:', error);
clearPromptDisplay();
return;
}
}
if (!entry) {
clearPromptDisplay();
return;
}
renderPromptEntry(entry);
}
function clearPromptDisplay() {
document.getElementById('prompt-metadata').innerHTML = '';
document.getElementById('prompt-display').innerHTML = '<pre style="white-space: pre-wrap; word-break: break-word; background: #1a1a1a; padding: 0.75rem; border-radius: 4px; font-size: 0.8rem; line-height: 1.4; margin: 0; color: #666;">No prompt selected.</pre>';
document.getElementById('last-prompt').textContent = '';
}
function renderPromptEntry(entry) {
// Metadata bar
const metaEl = document.getElementById('prompt-metadata');
const ts = entry.timestamp ? new Date(entry.timestamp).toLocaleString() : '?';
const sourceIcon = entry.source === 'cat' ? '🐱 Cat' : '🤖 Fallback';
metaEl.innerHTML = `
<span><span class="prompt-meta-label">#</span><span class="prompt-meta-value">${entry.id}</span></span>
<span><span class="prompt-meta-label">Source:</span> <span class="prompt-meta-value">${sourceIcon}</span></span>
<span><span class="prompt-meta-label">User:</span> <span class="prompt-meta-value">${escapeHtml(entry.user || '?')}</span></span>
<span><span class="prompt-meta-label">Mood:</span> <span class="prompt-meta-value">${escapeHtml(entry.mood || '?')}</span></span>
<span><span class="prompt-meta-label">Guild:</span> <span class="prompt-meta-value">${escapeHtml(entry.guild || '?')}</span></span>
<span><span class="prompt-meta-label">Channel:</span> <span class="prompt-meta-value">${escapeHtml(entry.channel || '?')}</span></span>
<span><span class="prompt-meta-label">Model:</span> <span class="prompt-meta-value">${escapeHtml(entry.model || '?')}</span></span>
<span><span class="prompt-meta-label">Type:</span> <span class="prompt-meta-value">${escapeHtml(entry.response_type || '?')}</span></span>
<span><span class="prompt-meta-label">Time:</span> <span class="prompt-meta-value">${ts}</span></span>
`;
// Parse full_prompt into sections
const sections = parsePromptSections(entry.full_prompt || '');
// Snapshot which subsections are currently collapsed (before re-render)
const sectionIds = ['system', 'context', 'conversation', 'response'];
const collapsedState = {};
sectionIds.forEach(id => {
const el = document.getElementById(`prompt-section-${id}`);
collapsedState[id] = el && el.classList.contains('collapsed');
});
// Build display HTML with collapsible subsections
let displayHtml = '';
if (sections.system) {
displayHtml += buildCollapsibleSection('System Prompt', sections.system, 'system');
}
if (sections.context) {
displayHtml += buildCollapsibleSection('Context (Memories & Tools)', sections.context, 'context');
}
if (sections.conversation) {
displayHtml += buildCollapsibleSection('Conversation', sections.conversation, 'conversation');
}
if (!sections.system && !sections.context && !sections.conversation) {
// Fallback: show raw full_prompt
displayHtml += `<pre style="white-space: pre-wrap; word-break: break-word; margin: 0;">${escapeHtml(entry.full_prompt || '')}</pre>`;
}
// Response section
if (entry.response) {
let responseText = entry.response;
if (_middleTruncation && responseText.length > 400) {
responseText = responseText.substring(0, 200) + '\n\n... [truncated middle] ...\n\n' + responseText.substring(responseText.length - 200);
}
displayHtml += buildCollapsibleSection('Response', responseText, 'response');
}
// Render into the prompt-display div (using innerHTML for collapsible structure)
const displayEl = document.getElementById('prompt-display');
displayEl.innerHTML = displayHtml;
// Restore collapsed state from snapshot
sectionIds.forEach(id => {
const el = document.getElementById(`prompt-section-${id}`);
if (el && collapsedState[id]) {
el.classList.add('collapsed');
const header = el.previousElementSibling;
if (header) header.innerHTML = header.innerHTML.replace('▼', '▶');
}
});
// Also set the raw text into the <pre> for copy functionality
let rawText = entry.full_prompt || '';
if (entry.response) {
rawText += `\n\n${'═'.repeat(60)}\n[Response]\n${entry.response}`;
}
document.getElementById('last-prompt').textContent = rawText;
}
function parsePromptSections(fullPrompt) {
const sections = { system: null, context: null, conversation: null };
if (!fullPrompt) return sections;
// Try to split on known section markers
const contextMatch = fullPrompt.match(/# Context\s*\n([\s\S]*?)(?=\n# Conversation|\nHuman:|\n$)/);
const convMatch = fullPrompt.match(/# Conversation until now:\s*\n([\s\S]*)/);
if (contextMatch) {
// Everything before # Context is the system prompt
const contextIdx = fullPrompt.indexOf('# Context');
if (contextIdx > 0) {
sections.system = fullPrompt.substring(0, contextIdx).trim();
}
sections.context = contextMatch[1].trim();
}
if (convMatch) {
sections.conversation = convMatch[1].trim();
} else {
// Try alternative: "Human:" at the end
const humanMatch = fullPrompt.match(/\nHuman:([\s\S]*)/);
if (humanMatch && fullPrompt.indexOf('Human:') > fullPrompt.indexOf('# Context')) {
sections.conversation = 'Human:' + humanMatch[1].trim();
}
}
// If no # Context marker, try "System:" prefix (fallback prompts)
if (!sections.system && !sections.context) {
const sysMatch = fullPrompt.match(/^System:\s*([\s\S]*?)(?=\nMessages:)/);
const msgMatch = fullPrompt.match(/Messages:\s*([\s\S]*)/);
if (sysMatch) {
sections.system = sysMatch[1].trim();
}
if (msgMatch) {
sections.conversation = msgMatch[1].trim();
}
}
return sections;
}
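The regex markers above are easier to follow with a standalone sketch. The sample prompt below is made up (real prompts come from the `/prompts` API); it just shows what each marker captures:

```javascript
// Minimal sketch of the marker-based splitting done by parsePromptSections(),
// using the same regexes as the function above.
const fullPrompt = [
  'You are Miku, a helpful assistant.',
  '# Context',
  'memory: user likes cats',
  '# Conversation until now:',
  'Human: hi',
].join('\n');

// Context stops at the conversation marker (or a trailing "Human:" turn).
const contextMatch = fullPrompt.match(/# Context\s*\n([\s\S]*?)(?=\n# Conversation|\nHuman:|\n$)/);
const convMatch = fullPrompt.match(/# Conversation until now:\s*\n([\s\S]*)/);
// Everything before "# Context" is treated as the system prompt.
const system = fullPrompt.substring(0, fullPrompt.indexOf('# Context')).trim();

console.log(system);                 // "You are Miku, a helpful assistant."
console.log(contextMatch[1].trim()); // "memory: user likes cats"
console.log(convMatch[1].trim());    // "Human: hi"
```

If a prompt lacks the `# Context` marker, the `System:` / `Messages:` fallback branch above handles it instead.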
function buildCollapsibleSection(title, content, sectionId) {
const id = `prompt-section-${sectionId}`;
return `
<div class="prompt-subsection-header" onclick="togglePromptSubsection('${id}')">
${escapeHtml(title)}
</div>
<div class="prompt-subsection-body" id="${id}">
<pre style="white-space: pre-wrap; word-break: break-word; background: #1a1a1a; padding: 0.5rem; border-radius: 4px; font-size: 0.8rem; line-height: 1.4; margin: 0.25rem 0;">${escapeHtml(content)}</pre>
</div>`;
}
function togglePromptSubsection(id) {
const body = document.getElementById(id);
if (!body) return;
const header = body.previousElementSibling;
if (body.classList.contains('collapsed')) {
body.classList.remove('collapsed');
if (header) header.innerHTML = header.innerHTML.replace('▶', '▼');
} else {
body.classList.add('collapsed');
if (header) header.innerHTML = header.innerHTML.replace('▼', '▶');
}
}
function togglePromptHistoryCollapse() {
const section = document.getElementById('prompt-history-section');
const toggle = document.getElementById('prompt-history-toggle');
if (section.classList.contains('collapsed')) {
section.classList.remove('collapsed');
toggle.textContent = '▼ Prompt History';
} else {
section.classList.add('collapsed');
toggle.textContent = '▶ Prompt History';
}
}
function copyPromptToClipboard() {
const rawText = document.getElementById('last-prompt').textContent;
if (!rawText) return;
navigator.clipboard.writeText(rawText).then(() => {
showNotification('Prompt copied to clipboard', 'success');
}).catch(err => {
console.error('Failed to copy:', err);
showNotification('Failed to copy', 'error');
});
}
function toggleMiddleTruncation() {
_middleTruncation = document.getElementById('prompt-truncate-toggle').checked;
// Re-render current entry
if (_selectedPromptId) {
selectPromptEntry(_selectedPromptId);
}
}
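The middle-truncation rule that renderPromptEntry() inlines (responses over 400 characters keep only the first and last 200, with a marker in between) can be factored into a small helper; this is an illustrative sketch, not code the dashboard ships:

```javascript
// Keep the head and tail of an over-long response, dropping the middle.
// Defaults mirror the inline logic above: >400 chars keeps 200 from each end.
function truncateMiddle(text, max = 400, keep = 200) {
  if (text.length <= max) return text;
  return text.substring(0, keep) +
    '\n\n... [truncated middle] ...\n\n' +
    text.substring(text.length - keep);
}

console.log(truncateMiddle('short response'));  // "short response" (unchanged)
const out = truncateMiddle('x'.repeat(500));
console.log(out.includes('... [truncated middle] ...')); // true
```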
// Legacy compatibility — called from core.js on page load / tab switch
// Redirects to the new loadPromptHistory()
async function loadLastPrompt() {
await loadPromptHistory();
}
// ===== Autonomous Stats =====
async function loadAutonomousStats() {
const serverSelect = document.getElementById('autonomous-server-select');
const selectedGuildId = serverSelect.value;
if (!selectedGuildId) {
document.getElementById('autonomous-stats-display').innerHTML = '<p style="color: #aaa;">Please select a server to view autonomous stats.</p>';
return;
}
try {
const data = await apiCall('/autonomous/stats');
if (!data.servers || !data.servers[selectedGuildId]) {
document.getElementById('autonomous-stats-display').innerHTML = '<p style="color: #ff5555;">Server not found or not initialized.</p>';
return;
}
const serverData = data.servers[selectedGuildId];
displayAutonomousStats(serverData);
} catch (error) {
console.error('Failed to load autonomous stats:', error);
}
}
function displayAutonomousStats(data) {
const container = document.getElementById('autonomous-stats-display');
if (!data.context) {
container.innerHTML = `
<div style="background: #2a2a2a; padding: 1.5rem; border-radius: 8px;">
<h4 style="color: #61dafb; margin-top: 0;">⚠️ Context Not Initialized</h4>
<p>This server hasn't had any activity yet. Context tracking will begin once messages are sent.</p>
<div style="margin-top: 1rem; padding: 1rem; background: #1e1e1e; border-radius: 4px;">
<strong>Current Mood:</strong> ${data.mood} ${MOOD_EMOJIS[data.mood] || ''}<br>
<strong>Energy:</strong> ${data.mood_profile.energy}<br>
<strong>Sociability:</strong> ${data.mood_profile.sociability}<br>
<strong>Impulsiveness:</strong> ${data.mood_profile.impulsiveness}
</div>
</div>
`;
return;
}
const ctx = data.context;
const profile = data.mood_profile;
const lastActionMin = Math.floor(ctx.time_since_last_action / 60);
const lastInteractionMin = Math.floor(ctx.time_since_last_interaction / 60);
container.innerHTML = `
<div style="background: #2a2a2a; padding: 1.5rem; border-radius: 8px; margin-bottom: 1rem;">
<h4 style="color: #61dafb; margin-top: 0;">🎭 Mood & Personality Profile</h4>
<div style="display: grid; grid-template-columns: repeat(auto-fit, minmax(200px, 1fr)); gap: 1rem;">
<div style="background: #1e1e1e; padding: 1rem; border-radius: 4px;">
<div style="font-size: 0.9rem; color: #aaa; margin-bottom: 0.3rem;">Current Mood</div>
<div style="font-size: 1.5rem; font-weight: bold;">${data.mood} ${MOOD_EMOJIS[data.mood] || ''}</div>
</div>
<div style="background: #1e1e1e; padding: 1rem; border-radius: 4px;">
<div style="font-size: 0.9rem; color: #aaa; margin-bottom: 0.3rem;">Energy Level</div>
<div style="font-size: 1.5rem; font-weight: bold; color: ${getStatColor(profile.energy)}">${(profile.energy * 100).toFixed(0)}%</div>
<div style="width: 100%; height: 6px; background: #333; border-radius: 3px; margin-top: 0.5rem;">
<div style="width: ${profile.energy * 100}%; height: 100%; background: ${getStatColor(profile.energy)}; border-radius: 3px;"></div>
</div>
</div>
<div style="background: #1e1e1e; padding: 1rem; border-radius: 4px;">
<div style="font-size: 0.9rem; color: #aaa; margin-bottom: 0.3rem;">Sociability</div>
<div style="font-size: 1.5rem; font-weight: bold; color: ${getStatColor(profile.sociability)}">${(profile.sociability * 100).toFixed(0)}%</div>
<div style="width: 100%; height: 6px; background: #333; border-radius: 3px; margin-top: 0.5rem;">
<div style="width: ${profile.sociability * 100}%; height: 100%; background: ${getStatColor(profile.sociability)}; border-radius: 3px;"></div>
</div>
</div>
<div style="background: #1e1e1e; padding: 1rem; border-radius: 4px;">
<div style="font-size: 0.9rem; color: #aaa; margin-bottom: 0.3rem;">Impulsiveness</div>
<div style="font-size: 1.5rem; font-weight: bold; color: ${getStatColor(profile.impulsiveness)}">${(profile.impulsiveness * 100).toFixed(0)}%</div>
<div style="width: 100%; height: 6px; background: #333; border-radius: 3px; margin-top: 0.5rem;">
<div style="width: ${profile.impulsiveness * 100}%; height: 100%; background: ${getStatColor(profile.impulsiveness)}; border-radius: 3px;"></div>
</div>
</div>
</div>
</div>
<div style="background: #2a2a2a; padding: 1.5rem; border-radius: 8px; margin-bottom: 1rem;">
<h4 style="color: #61dafb; margin-top: 0;">📈 Activity Metrics</h4>
<div style="display: grid; grid-template-columns: repeat(auto-fit, minmax(200px, 1fr)); gap: 1rem;">
<div style="background: #1e1e1e; padding: 1rem; border-radius: 4px;">
<div style="font-size: 0.9rem; color: #aaa;">Messages (Last 5 min) <span style="color: #666;">⚡ ephemeral</span></div>
<div style="font-size: 1.8rem; font-weight: bold; color: #4caf50;">${ctx.messages_last_5min}</div>
</div>
<div style="background: #1e1e1e; padding: 1rem; border-radius: 4px;">
<div style="font-size: 0.9rem; color: #aaa;">Messages (Last Hour) <span style="color: #666;">⚡ ephemeral</span></div>
<div style="font-size: 1.8rem; font-weight: bold; color: #2196f3;">${ctx.messages_last_hour}</div>
</div>
<div style="background: #1e1e1e; padding: 1rem; border-radius: 4px;">
<div style="font-size: 0.9rem; color: #aaa;">Conversation Momentum <span style="color: #4caf50;">💾 saved</span></div>
<div style="font-size: 1.8rem; font-weight: bold; color: ${getMomentumColor(ctx.conversation_momentum)}">${(ctx.conversation_momentum * 100).toFixed(0)}%</div>
<div style="font-size: 0.75rem; color: #888; margin-top: 0.3rem;">Decays with downtime (half-life: 10min)</div>
</div>
<div style="background: #1e1e1e; padding: 1rem; border-radius: 4px;">
<div style="font-size: 0.9rem; color: #aaa;">Unique Users Active <span style="color: #666;">⚡ ephemeral</span></div>
<div style="font-size: 1.8rem; font-weight: bold; color: #ff9800;">${ctx.unique_users_active}</div>
</div>
</div>
</div>
<div style="background: #2a2a2a; padding: 1.5rem; border-radius: 8px; margin-bottom: 1rem;">
<h4 style="color: #61dafb; margin-top: 0;">👥 User Events</h4>
<div style="display: grid; grid-template-columns: repeat(auto-fit, minmax(200px, 1fr)); gap: 1rem;">
<div style="background: #1e1e1e; padding: 1rem; border-radius: 4px;">
<div style="font-size: 0.9rem; color: #aaa;">Users Joined Recently</div>
<div style="font-size: 1.8rem; font-weight: bold; color: #4caf50;">${ctx.users_joined_recently}</div>
</div>
<div style="background: #1e1e1e; padding: 1rem; border-radius: 4px;">
<div style="font-size: 0.9rem; color: #aaa;">Status Changes</div>
<div style="font-size: 1.8rem; font-weight: bold; color: #2196f3;">${ctx.users_status_changed}</div>
</div>
<div style="background: #1e1e1e; padding: 1rem; border-radius: 4px;">
<div style="font-size: 0.9rem; color: #aaa;">Active Activities</div>
<div style="font-size: 1.8rem; font-weight: bold; color: #9c27b0;">${ctx.users_started_activity.length}</div>
${ctx.users_started_activity.length > 0 ? `<div style="font-size: 0.8rem; margin-top: 0.5rem; color: #aaa;">${ctx.users_started_activity.join(', ')}</div>` : ''}
</div>
</div>
</div>
<div style="background: #2a2a2a; padding: 1.5rem; border-radius: 8px; margin-bottom: 1rem;">
<h4 style="color: #61dafb; margin-top: 0;">⏱️ Timing & Context</h4>
<div style="display: grid; grid-template-columns: repeat(auto-fit, minmax(200px, 1fr)); gap: 1rem;">
<div style="background: #1e1e1e; padding: 1rem; border-radius: 4px;">
<div style="font-size: 0.9rem; color: #aaa;">Time Since Last Action <span style="color: #4caf50;">💾 saved</span></div>
<div style="font-size: 1.8rem; font-weight: bold; color: #ff5722;">${lastActionMin} min</div>
<div style="font-size: 0.8rem; color: #888;">${ctx.time_since_last_action.toFixed(1)}s</div>
</div>
<div style="background: #1e1e1e; padding: 1rem; border-radius: 4px;">
<div style="font-size: 0.9rem; color: #aaa;">Time Since Last Interaction <span style="color: #4caf50;">💾 saved</span></div>
<div style="font-size: 1.8rem; font-weight: bold; color: #ff9800;">${lastInteractionMin} min</div>
<div style="font-size: 0.8rem; color: #888;">${ctx.time_since_last_interaction.toFixed(1)}s</div>
</div>
<div style="background: #1e1e1e; padding: 1rem; border-radius: 4px;">
<div style="font-size: 0.9rem; color: #aaa;">Messages Since Last Appearance <span style="color: #4caf50;">💾 saved</span></div>
<div style="font-size: 1.8rem; font-weight: bold; color: #2196f3;">${ctx.messages_since_last_appearance}</div>
</div>
<div style="background: #1e1e1e; padding: 1rem; border-radius: 4px;">
<div style="font-size: 0.9rem; color: #aaa;">Current Time Context <span style="color: #666;">⚡ ephemeral</span></div>
<div style="font-size: 1.5rem; font-weight: bold; color: #61dafb;">${ctx.hour_of_day}:00</div>
<div style="font-size: 0.8rem; color: #888;">${ctx.is_weekend ? '📅 Weekend' : '📆 Weekday'}</div>
</div>
</div>
</div>
<div style="background: #2a2a2a; padding: 1.5rem; border-radius: 8px;">
<h4 style="color: #61dafb; margin-top: 0;">🧠 Base Energy Level</h4>
<div style="background: #1e1e1e; padding: 1rem; border-radius: 4px;">
<div style="font-size: 0.9rem; color: #aaa; margin-bottom: 0.5rem;">From current mood personality</div>
<div style="font-size: 2rem; font-weight: bold; color: ${getStatColor(ctx.mood_energy_level)}">${(ctx.mood_energy_level * 100).toFixed(0)}%</div>
<div style="width: 100%; height: 10px; background: #333; border-radius: 5px; margin-top: 0.5rem;">
<div style="width: ${ctx.mood_energy_level * 100}%; height: 100%; background: ${getStatColor(ctx.mood_energy_level)}; border-radius: 5px;"></div>
</div>
<div style="font-size: 0.85rem; color: #888; margin-top: 0.5rem;">
💡 Combined with activity metrics to determine action likelihood.<br>
📝 High energy = shorter wait times, higher action chance.<br>
💾 <strong>Persisted across restarts</strong>
</div>
</div>
</div>
`;
}
function getStatColor(value) {
if (value >= 0.8) return '#4caf50';
if (value >= 0.6) return '#8bc34a';
if (value >= 0.4) return '#ffc107';
if (value >= 0.2) return '#ff9800';
return '#f44336';
}
function getMomentumColor(value) {
if (value >= 0.7) return '#4caf50';
if (value >= 0.4) return '#2196f3';
return '#9e9e9e';
}
function populateAutonomousServerDropdown() {
const select = document.getElementById('autonomous-server-select');
if (!select) return;
const currentValue = select.value;
select.innerHTML = '<option value="">-- Select a server --</option>';
servers.forEach(server => {
const option = document.createElement('option');
option.value = server.guild_id;
option.textContent = `${server.guild_name} (${server.guild_id})`;
select.appendChild(option);
});
if (currentValue && servers.some(s => String(s.guild_id) === currentValue)) {
select.value = currentValue;
}
}


@@ -12,6 +12,8 @@ Supports 5 activity types: listening, playing, watching, competing, streaming.
import os import os
import random import random
import tempfile
import threading
import time import time
import yaml import yaml
import discord import discord
@@ -22,6 +24,9 @@ logger = get_logger('activity')
ACTIVITIES_FILE = os.path.join(os.path.dirname(os.path.dirname(__file__)), "activities.yaml") ACTIVITIES_FILE = os.path.join(os.path.dirname(os.path.dirname(__file__)), "activities.yaml")
# Discord activity name character limit
DISCORD_ACTIVITY_NAME_MAX = 128
# All valid activity types # All valid activity types
VALID_ACTIVITY_TYPES = {"listening", "playing", "watching", "competing", "streaming"} VALID_ACTIVITY_TYPES = {"listening", "playing", "watching", "competing", "streaming"}
@@ -56,6 +61,9 @@ ACTIVITY_PROBABILITY = {
"manic": 0.85, "manic": 0.85,
} }
# ── Thread lock for all shared mutable state ──
_state_lock = threading.Lock()
# ── Manual override state ── # ── Manual override state ──
_manual_override = False _manual_override = False
_manual_override_until = 0.0 # Unix timestamp; 0 = no override _manual_override_until = 0.0 # Unix timestamp; 0 = no override
@@ -74,44 +82,64 @@ _cache_mtime = 0.0
# ══════════════════════════════════════════════════════════════════════════════ # ══════════════════════════════════════════════════════════════════════════════
def _load_activities(force=False): def _load_activities(force=False):
"""Load activities.yaml with file-mtime-based caching.""" """Load activities.yaml with file-mtime-based caching. Returns a deep copy."""
global _activities_cache, _cache_mtime global _activities_cache, _cache_mtime
try: with _state_lock:
mtime = os.path.getmtime(ACTIVITIES_FILE) try:
except OSError: mtime = os.path.getmtime(ACTIVITIES_FILE)
logger.warning(f"Activities file not found: {ACTIVITIES_FILE}") except OSError:
return {"normal": {}, "evil": {}} logger.warning(f"Activities file not found: {ACTIVITIES_FILE}")
return {"normal": {}, "evil": {}}
if not force and _activities_cache is not None and mtime == _cache_mtime: if not force and _activities_cache is not None and mtime == _cache_mtime:
return _activities_cache # Return a deep copy so callers cannot mutate the cache
import copy
return copy.deepcopy(_activities_cache)
try: try:
with open(ACTIVITIES_FILE, "r", encoding="utf-8") as f: with open(ACTIVITIES_FILE, "r", encoding="utf-8") as f:
data = yaml.safe_load(f) or {} data = yaml.safe_load(f) or {}
_activities_cache = data _activities_cache = data
_cache_mtime = mtime _cache_mtime = mtime
logger.debug(f"Loaded activities from {ACTIVITIES_FILE}") logger.debug(f"Loaded activities from {ACTIVITIES_FILE}")
return data import copy
except Exception as e: return copy.deepcopy(data)
logger.error(f"Failed to load activities file: {e}") except Exception as e:
return _activities_cache or {"normal": {}, "evil": {}} logger.error(f"Failed to load activities file: {e}")
if _activities_cache is not None:
import copy
return copy.deepcopy(_activities_cache)
return {"normal": {}, "evil": {}}
def save_activities(data: dict): def save_activities(data: dict):
"""Write the full activities dict back to YAML.""" """Write the full activities dict back to YAML using atomic write (temp + rename)."""
global _activities_cache, _cache_mtime global _activities_cache, _cache_mtime
try: with _state_lock:
with open(ACTIVITIES_FILE, "w", encoding="utf-8") as f: try:
yaml.dump(data, f, default_flow_style=False, allow_unicode=True, sort_keys=False) # Atomic write: write to temp file in same directory, then rename
dir_name = os.path.dirname(ACTIVITIES_FILE)
fd, tmp_path = tempfile.mkstemp(dir=dir_name, suffix=".yaml.tmp")
try:
with os.fdopen(fd, "w", encoding="utf-8") as f:
yaml.dump(data, f, default_flow_style=False, allow_unicode=True, sort_keys=False)
os.replace(tmp_path, ACTIVITIES_FILE)
except BaseException:
# Clean up temp file on failure
try:
os.unlink(tmp_path)
except OSError:
pass
raise
_activities_cache = data _activities_cache = data
_cache_mtime = os.path.getmtime(ACTIVITIES_FILE) _cache_mtime = os.path.getmtime(ACTIVITIES_FILE)
logger.info(f"Saved activities to {ACTIVITIES_FILE}") logger.info(f"Saved activities to {ACTIVITIES_FILE}")
except Exception as e: except Exception as e:
logger.error(f"Failed to save activities file: {e}") logger.error(f"Failed to save activities file: {e}")
raise raise
# ══════════════════════════════════════════════════════════════════════════════ # ══════════════════════════════════════════════════════════════════════════════
@@ -119,7 +147,7 @@ def save_activities(data: dict):
# ══════════════════════════════════════════════════════════════════════════════ # ══════════════════════════════════════════════════════════════════════════════
def get_all_activities() -> dict: def get_all_activities() -> dict:
"""Return the full activities dict (normal + evil sections).""" """Return the full activities dict (normal + evil sections). Returns a deep copy."""
return _load_activities() return _load_activities()
@@ -151,12 +179,16 @@ def set_activities_for_mood(mood_name: str, is_evil: bool, activities: list):
) )
if not entry.get("name") or not isinstance(entry["name"], str): if not entry.get("name") or not isinstance(entry["name"], str):
raise ValueError(f"Entry {i} must have a non-empty string 'name'") raise ValueError(f"Entry {i} must have a non-empty string 'name'")
if len(entry["name"]) > DISCORD_ACTIVITY_NAME_MAX:
raise ValueError(f"Entry {i} name exceeds {DISCORD_ACTIVITY_NAME_MAX} characters")
if not isinstance(entry.get("weight", 0), int) or entry.get("weight", 0) < 1: if not isinstance(entry.get("weight", 0), int) or entry.get("weight", 0) < 1:
raise ValueError(f"Entry {i} weight must be a positive integer") raise ValueError(f"Entry {i} weight must be a positive integer")
if "state" in entry and entry["state"] is not None and not isinstance(entry["state"], str): if "state" in entry and entry["state"] is not None and not isinstance(entry["state"], str):
raise ValueError(f"Entry {i} 'state' must be a string if provided") raise ValueError(f"Entry {i} 'state' must be a string if provided")
if "url" in entry and entry["url"] is not None and not isinstance(entry["url"], str): if "url" in entry and entry["url"] is not None and not isinstance(entry["url"], str):
raise ValueError(f"Entry {i} 'url' must be a string if provided") raise ValueError(f"Entry {i} 'url' must be a string if provided")
if entry.get("type") == "streaming" and not entry.get("url"):
raise ValueError(f"Entry {i} is streaming type but has no url")
section = "evil" if is_evil else "normal" section = "evil" if is_evil else "normal"
data = _load_activities() data = _load_activities()
@@ -173,18 +205,43 @@ def set_activities_for_mood(mood_name: str, is_evil: bool, activities: list):
def pick_activity_for_mood(mood_name: str, is_evil: bool = False): def pick_activity_for_mood(mood_name: str, is_evil: bool = False):
"""Pick a weighted-random activity for a mood. """Pick a weighted-random activity for a mood.
Validates entries and skips malformed ones with a warning.
Returns: Returns:
dict: {"type": ..., "name": ..., "state": ..., "url": ...} dict: {"type": ..., "name": ..., "state": ..., "url": ...}
state and url may be None. state and url may be None.
Returns None if mood has no entries. Returns None if mood has no valid entries.
""" """
activities = get_activities_for_mood(mood_name, is_evil) activities = get_activities_for_mood(mood_name, is_evil)
if not activities: if not activities:
return None return None
weights = [entry.get("weight", 1) for entry in activities] # Validate entries, skipping malformed ones
chosen = random.choices(activities, weights=weights, k=1)[0] valid = []
weights = []
for i, entry in enumerate(activities):
if not isinstance(entry, dict):
logger.warning(f"Skipping non-dict entry {i} in {'evil/' if is_evil else ''}{mood_name}")
continue
if "type" not in entry or "name" not in entry:
logger.warning(f"Skipping entry {i} missing 'type' or 'name' in {'evil/' if is_evil else ''}{mood_name}: {entry}")
continue
if entry["type"] not in VALID_ACTIVITY_TYPES:
logger.warning(f"Skipping entry {i} with unrecognized type '{entry['type']}' in {'evil/' if is_evil else ''}{mood_name}")
continue
w = entry.get("weight", 1)
if not isinstance(w, int) or w < 1:
logger.warning(f"Skipping entry {i} with invalid weight {w} in {'evil/' if is_evil else ''}{mood_name}")
continue
valid.append(entry)
weights.append(w)
if not valid:
logger.warning(f"No valid entries for {'evil/' if is_evil else ''}{mood_name}")
return None
chosen = random.choices(valid, weights=weights, k=1)[0]
return { return {
"type": chosen["type"], "type": chosen["type"],
"name": chosen["name"], "name": chosen["name"],
@@ -208,31 +265,35 @@ def should_have_activity(mood_name: str) -> bool:
# ══════════════════════════════════════════════════════════════════════════════ # ══════════════════════════════════════════════════════════════════════════════
def is_manual_override_active() -> bool: def is_manual_override_active() -> bool:
"""Check if a manual override is in effect (hasn't expired).""" """Check if a manual override is in effect (hasn't expired). Thread-safe."""
global _manual_override with _state_lock:
if not _manual_override: global _manual_override
return False if not _manual_override:
if _manual_override_until > 0 and time.time() > _manual_override_until: return False
_manual_override = False if _manual_override_until > 0 and time.time() > _manual_override_until:
logger.info("Manual override expired, returning to automatic mode") _manual_override = False
return False logger.info("Manual override expired, returning to automatic mode")
return True return False
return True
def set_manual_override(duration: int = MANUAL_OVERRIDE_DURATION): def set_manual_override(duration: int = MANUAL_OVERRIDE_DURATION):
"""Activate manual override for the given duration (seconds).""" """Activate manual override for the given duration (seconds). Thread-safe."""
global _manual_override, _manual_override_until with _state_lock:
_manual_override = True global _manual_override, _manual_override_until
_manual_override_until = time.time() + duration _manual_override = True
logger.info(f"Manual override activated for {duration}s") expiry = time.time() + duration
_manual_override_until = expiry
logger.info(f"Manual override activated for {duration}s (expires at {time.strftime('%H:%M:%S', time.localtime(expiry))})")
def clear_manual_override(): def clear_manual_override():
"""Deactivate manual override immediately.""" """Deactivate manual override immediately. Thread-safe."""
global _manual_override, _manual_override_until with _state_lock:
_manual_override = False global _manual_override, _manual_override_until
_manual_override_until = 0.0 _manual_override = False
logger.info("Manual override cleared") _manual_override_until = 0.0
logger.info("Manual override cleared")
# ══════════════════════════════════════════════════════════════════════════════ # ══════════════════════════════════════════════════════════════════════════════
@@ -240,14 +301,16 @@ def clear_manual_override():
# ══════════════════════════════════════════════════════════════════════════════ # ══════════════════════════════════════════════════════════════════════════════
def get_current_activity(): def get_current_activity():
"""Return the current activity dict or None if idle.""" """Return the current activity dict or None if idle. Thread-safe."""
return _current_activity with _state_lock:
return _current_activity
def _set_current_activity(activity_dict): def _set_current_activity(activity_dict):
"""Update the tracked current activity.""" """Update the tracked current activity. Thread-safe."""
global _current_activity global _current_activity
_current_activity = activity_dict with _state_lock:
_current_activity = activity_dict
# ══════════════════════════════════════════════════════════════════════════════ # ══════════════════════════════════════════════════════════════════════════════
@@ -255,10 +318,13 @@ def _set_current_activity(activity_dict):
# ══════════════════════════════════════════════════════════════════════════════
def _build_activity(payload: dict):
    """Build a discord.Activity (or discord.Streaming) from a payload dict.

    Logs a warning if the activity type is unrecognized (falls back to playing).
    """
    atype = payload["type"]
    name = payload["name"]
    state = payload.get("state") or None  # normalize empty string to None
    url = payload.get("url")

    if atype == "streaming" and url:
@@ -271,8 +337,12 @@ def _build_activity(payload: dict):
        "competing": discord.ActivityType.competing,
        "streaming": discord.ActivityType.streaming,  # fallback without url
    }
    resolved_type = type_map.get(atype)
    if resolved_type is None:
        logger.warning(f"Unrecognized activity type '{atype}', falling back to 'playing'")
        resolved_type = discord.ActivityType.playing
    return discord.Activity(
        type=resolved_type,
        name=name,
        state=state,
    )
@@ -351,20 +421,22 @@ async def update_bot_presence(mood_name: str, is_evil: bool = False, force: bool
        logger.info(f"Set presence: {label} (mood={'evil/' if is_evil else ''}{mood_name})")
    except Exception as e:
        logger.error(f"Failed to update bot presence: {e}", exc_info=True)


async def set_activity_manual(activity_type: str, name: str, state: str = None, url: str = None):
    """Manually set the bot's activity (bypasses mood system).

    Raises:
        ValueError: if activity_type is invalid, name too long, or streaming lacks url
        RuntimeError: if bot is not ready
    """
    if activity_type not in VALID_ACTIVITY_TYPES:
        raise ValueError(f"Invalid type '{activity_type}', must be one of: {', '.join(sorted(VALID_ACTIVITY_TYPES))}")
    if not name or not isinstance(name, str):
        raise ValueError("name must be a non-empty string")
    if len(name) > DISCORD_ACTIVITY_NAME_MAX:
        raise ValueError(f"name exceeds {DISCORD_ACTIVITY_NAME_MAX} characters")
    if activity_type == "streaming" and not url:
        raise ValueError("streaming type requires a url")
@@ -401,13 +473,20 @@ async def clear_activity_manual():
async def release_manual_override():
    """Release manual override and immediately recalculate presence from current mood.

    Uses force=True so the bot always gets an activity instead of potentially
    going idle right away (which would be confusing UX after clicking "Return to Auto").
    """
    clear_manual_override()
    try:
        if globals.EVIL_MODE:
            mood = globals.EVIL_DM_MOOD
            is_evil = True
        else:
            mood = globals.DM_MOOD
            is_evil = False
        await update_bot_presence(mood, is_evil=is_evil, force=True)
        logger.info(f"Released manual override, set presence for mood={'evil/' if is_evil else ''}{mood}")
    except Exception as e:
        logger.error(f"Failed to recalculate presence after releasing override: {e}")


@@ -23,12 +23,33 @@ logger = get_logger('persona')
BIPOLAR_STATE_FILE = "memory/bipolar_mode_state.json"
BIPOLAR_WEBHOOKS_FILE = "memory/bipolar_webhooks.json"
BIPOLAR_SCOREBOARD_FILE = "memory/bipolar_scoreboard.json"
ARGUMENT_TOPICS_FILE = "memory/argument_topics.json"

# Argument settings
MIN_EXCHANGES = 4  # Minimum number of back-and-forth exchanges before ending can occur
ARGUMENT_TRIGGER_CHANCE = 0.15  # 15% chance for the other Miku to break through
DELAY_BETWEEN_MESSAGES = (2.0, 5.0)  # Random delay between argument messages (seconds)
# Argument topic rotation — each topic gives the argument a different framing
# Topics are weighted: higher weight = more likely to be selected
ARGUMENT_TOPICS = [
    # (topic_name, weight, description for prompt injection)
    ("identity_crisis", 3, "Who is the REAL Miku? Authenticity vs. the shadow self"),
    ("power_dynamic", 3, "Who holds the power? Dominance, submission, and control"),
    ("philosophical", 2, "Is kindness strength or weakness? Does darkness serve a purpose?"),
    ("petty_grievance", 3, "Something small and petty that escalated — a specific annoyance, habit, or incident"),
    ("existential_dread", 1, "What's the point of any of it? Nihilism vs. hope, meaning vs. emptiness"),
    ("audience_appeal", 3, "Who do the fans/chatters ACTUALLY prefer? Popularity contest with receipts"),
    ("personal_attack", 3, "Deeply personal — targeting specific insecurities, memories, or fears"),
    ("moral_superiority", 2, "Who has the moral high ground? Righteousness vs. ruthless pragmatism"),
    ("jealousy", 2, "What does the other have that you secretly want? Envy, admiration poisoned by resentment"),
    ("grudge_match", 2, "Revisiting something the other did in the PAST — old wounds, past betrayals"),
    ("wild_card", 1, "Anything goes — the argument takes an unexpected, chaotic turn into unpredictable territory"),
]

# Per-channel topic history (max 5 stored to avoid repeats)
ARGUMENT_TOPIC_HISTORY_SIZE = 5
# Pause state for voice sessions
_bipolar_interactions_paused = False
@@ -222,9 +243,169 @@ Total Arguments: {total}"""
# ============================================================================
# ARGUMENT TOPIC ROTATION
# ============================================================================
def load_argument_topics_state() -> dict:
    """Load per-channel topic history to avoid repeating recent argument themes"""
    try:
        if not os.path.exists(ARGUMENT_TOPICS_FILE):
            return {}
        with open(ARGUMENT_TOPICS_FILE, "r", encoding="utf-8") as f:
            return json.load(f)
    except Exception as e:
        logger.error(f"Failed to load argument topics: {e}")
        return {}


def save_argument_topics_state(state: dict):
    """Save per-channel topic history"""
    try:
        os.makedirs(os.path.dirname(ARGUMENT_TOPICS_FILE), exist_ok=True)
        with open(ARGUMENT_TOPICS_FILE, "w", encoding="utf-8") as f:
            json.dump(state, f, indent=2)
    except Exception as e:
        logger.error(f"Failed to save argument topics: {e}")


def pick_argument_topic(channel_id: int) -> str:
    """Pick a fresh argument topic for a channel, avoiding recent repeats.

    Returns a topic description string to inject into the argument start prompt.
    """
    state = load_argument_topics_state()
    channel_key = str(channel_id)
    recent_topics = state.get(channel_key, [])

    # Build weighted pool, excluding recently used topics
    available = []
    for topic_name, weight, description in ARGUMENT_TOPICS:
        if topic_name not in recent_topics:
            available.extend([(topic_name, description)] * weight)

    # If all topics were recently used, reset and allow repeats
    if not available:
        logger.info(f"All topics recently used in channel {channel_id}, resetting history")
        available = []
        for topic_name, weight, description in ARGUMENT_TOPICS:
            available.extend([(topic_name, description)] * weight)
        recent_topics = []

    # Pick randomly from weighted pool
    chosen_name, chosen_description = random.choice(available)

    # Update history
    recent_topics.append(chosen_name)
    if len(recent_topics) > ARGUMENT_TOPIC_HISTORY_SIZE:
        recent_topics = recent_topics[-ARGUMENT_TOPIC_HISTORY_SIZE:]
    state[channel_key] = recent_topics
    save_argument_topics_state(state)

    logger.info(f"Selected argument topic for channel {channel_id}: '{chosen_name}'{chosen_description[:60]}...")
    return chosen_description
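`pick_argument_topic` implements weighted sampling by pool expansion: repeating each entry `weight` times turns a uniform `random.choice` into a weighted draw. The core trick, isolated — `weighted_pick` is a hypothetical helper for illustration, not part of the diff:

```python
import random

def weighted_pick(topics, exclude=()):
    """topics: list of (name, weight) pairs; returns one name, weight-proportionally."""
    # Repeating each name `weight` times makes uniform choice a weighted draw
    pool = [name for name, weight in topics
            for _ in range(weight) if name not in exclude]
    if not pool:  # every topic excluded: reset, mirroring the history reset above
        pool = [name for name, weight in topics for _ in range(weight)]
    return random.choice(pool)
```

With small fixed weights this is simple and fast; for large weights or many topics, `random.choices(names, weights=...)` avoids materializing the expanded pool.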
# ============================================================================
# ARGUMENT STATS TRACKING (Per-Argument Scoring)
# ============================================================================
# Keyword-based scoring for per-argument stats. These feed the arbiter as
# supplementary context so it can make a more informed judgment.
# Stats are lightweight — no extra LLM calls needed.
# Wit/comedy indicators (clever wordplay, turning opponent's words, irony)
WIT_PATTERNS = [
    "you literally just", "that's rich coming from", "oh the irony",
    "did you just", "you're one to talk", "pot, kettle", "says the one who",
    "funny how you", "interesting that you", "i'm not the one who",
    "at least i", "projecting much", "the audacity", "imagine being",
    "you think you're", "nice try", "cute that you think",
]

# Composure indicators (staying on topic, not getting flustered, controlled responses)
COMPOSURE_PATTERNS = [
    "that's not what i", "you're avoiding", "stay on topic",
    "nice deflection", "we're not talking about", "focus",
    "you're changing the subject", "answer the question",
    "that's irrelevant", "you know that's not true",
]

# Impact indicators (memorable, devastating lines — emotional damage)
IMPACT_PATTERNS = [
    "pathetic", "disgusting", "worthless", "disappointment",
    "nobody wants", "no one cares", "everyone knows",
    "deep down you know", "you're nothing but", "you'll never be",
    "you're just a", "face it", "admit it", "the truth is",
    "you're scared of", "you're afraid that", "you can't even",
]
def score_argument_message(message: str, speaker: str) -> dict:
    """Score a single argument message for wit, composure, and impact.

    Returns a dict with point values that accumulate over the argument.
    """
    text_lower = message.lower()
    scores = {"wit": 0, "composure": 0, "impact": 0}

    # Wit: count clever rhetorical devices
    wit_count = sum(1 for pattern in WIT_PATTERNS if pattern in text_lower)
    scores["wit"] = min(wit_count * 1.0, 3.0)  # Cap at 3 per message

    # Composure: staying controlled and on-point
    composure_count = sum(1 for pattern in COMPOSURE_PATTERNS if pattern in text_lower)
    scores["composure"] = min(composure_count * 0.8, 2.0)

    # Impact: emotional damage dealt
    impact_count = sum(1 for pattern in IMPACT_PATTERNS if pattern in text_lower)
    scores["impact"] = min(impact_count * 1.0, 3.0)

    # Bonus for conciseness (short, punchy = more impact)
    word_count = len(message.split())
    if word_count <= 15:
        scores["impact"] += 0.5

    # Bonus for questions (controlling the flow)
    if "?" in message:
        scores["composure"] += 0.3

    return scores
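The scoring shape (case-insensitive keyword hits, a per-message cap, a brevity bonus) can be exercised in isolation. `WIT` below is a two-entry illustrative subset, not the full pattern list from the diff:

```python
WIT = ["nice try", "the audacity"]  # illustrative subset of the pattern list

def wit_score(message: str) -> float:
    text = message.lower()
    hits = sum(1 for p in WIT if p in text)
    score = min(hits * 1.0, 3.0)        # cap so one message can't dominate
    if len(message.split()) <= 15:      # short-and-punchy bonus
        score += 0.5
    return score
```

For example, "Nice try. The audacity!" matches both patterns (2.0) and is under 15 words (+0.5), scoring 2.5. The cap keeps a single keyword-stuffed line from outweighing the rest of the argument.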
def get_argument_stats_summary(conversation_log: list) -> str:
    """Generate a stats summary for the arbiter from the full conversation log.

    Returns a formatted string showing per-persona stats.
    """
    miku_stats = {"wit": 0.0, "composure": 0.0, "impact": 0.0, "messages": 0}
    evil_stats = {"wit": 0.0, "composure": 0.0, "impact": 0.0, "messages": 0}

    for entry in conversation_log:
        speaker = entry.get("speaker", "")
        message = entry.get("message", "")
        scores = score_argument_message(message, speaker)

        if "Evil" in speaker:
            evil_stats["wit"] += scores["wit"]
            evil_stats["composure"] += scores["composure"]
            evil_stats["impact"] += scores["impact"]
            evil_stats["messages"] += 1
        else:
            miku_stats["wit"] += scores["wit"]
            miku_stats["composure"] += scores["composure"]
            miku_stats["impact"] += scores["impact"]
            miku_stats["messages"] += 1

    # Average scores
    def avg(stats, key):
        return stats[key] / max(stats["messages"], 1)

    summary = f"""ARGUMENT STATISTICS:
Hatsune Miku — Wit: {avg(miku_stats, 'wit'):.1f}/3 | Composure: {avg(miku_stats, 'composure'):.1f}/2 | Impact: {avg(miku_stats, 'impact'):.1f}/3 | Lines: {miku_stats['messages']}
Evil Miku — Wit: {avg(evil_stats, 'wit'):.1f}/3 | Composure: {avg(evil_stats, 'composure'):.1f}/2 | Impact: {avg(evil_stats, 'impact'):.1f}/3 | Lines: {evil_stats['messages']}
"""
    return summary
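The inner `avg` helper relies on a `max(..., 1)` guard; isolated as a standalone sketch (illustrative function, not from the codebase), the behavior is: when a persona never spoke, both the total and the count are zero, so the average correctly comes out as zero instead of raising `ZeroDivisionError`:

```python
def average(total: float, messages: int) -> float:
    # max(messages, 1) guards against ZeroDivisionError when a persona
    # never spoke during the argument (total is also 0.0 in that case)
    return total / max(messages, 1)
```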
def is_bipolar_mode() -> bool:
    """Check if bipolar mode is active"""
    return globals.BIPOLAR_MODE
@@ -471,8 +652,59 @@ def get_evil_role_color() -> str:
# ARGUMENT PROMPTS
# ============================================================================
# Mood-specific behavioral guidance for argument prompts.
# Each mood gives a different argument style.
_MIKU_MOOD_ARGUMENT_GUIDANCE = {
    "bubbly": "You're feeling energetic and upbeat — deflect her cruelty with playful confidence. Turn her darkness into a joke she can't recover from.",
    "excited": "You're fired up! Channel that energy into passionate rebuttals. You're not backing down from anything.",
    "curious": "You're genuinely wondering what made her this way. Ask probing questions — make HER explain herself for once.",
    "neutral": "You're centered and clear-headed. Respond with measured, thoughtful points that cut through her drama.",
    "irritated": "You've had ENOUGH of her nonsense. You're snappy, direct, and not in the mood to play nice. Let that frustration show.",
    "melancholy": "You're feeling heavy-hearted. Your responses carry genuine sadness — not weakness, but the weight of someone who's tired of fighting herself.",
    "asleep": "You're drowsy and low-energy, but you're still here. Short, mumbled comebacks — surprisingly effective in their simplicity.",
    "flirty": "You're feeling playful and teasing. Use charm as a weapon — nothing frustrates her more than you not taking her seriously.",
    "romantic": "You're feeling warm and heartfelt. Appeal to emotion — make her confront the love she's buried under all that darkness.",
}
_EVIL_MOOD_ARGUMENT_GUIDANCE = {
    "aggressive": "You're SEETHING. Every response is a verbal punch. Short, explosive, devastating. No filter, no mercy.",
    "cunning": "You're calculating. Each word is a chess move. Set traps, use her own logic against her, make her walk into your blades.",
    "sarcastic": "You're dripping with contempt disguised as sweetness. Mock her with a smile. The cruelty is in the subtext.",
    "evil_neutral": "You're cold and detached. Respond with unsettling calm — your lack of emotion is more terrifying than rage.",
    "bored": "You can barely be bothered. Dismissive one-liners that somehow cut deeper than paragraphs. Make her feel like she's not worth your energy.",
    "manic": "You're UNHINGED. Chaotic energy, topic switches, laughing at things that aren't funny. Unpredictable and dangerous.",
    "jealous": "You're seething with envy. Everything she has — the love, the attention, the innocence — you want to tear it down. Make it personal.",
    "melancholic": "You're in a dark, hollow place. Your cruelty is quieter — existential, haunting. Make her question if any of this matters.",
    "playful_cruel": "You're having FUN — which is your most dangerous mood. Toy with her. Offer fake kindness then pull the rug. She never knows what's coming.",
    "contemptuous": "You radiate cold superiority. Address her like a queen addressing a peasant. Your magnificence is simply objective fact.",
}
def _get_mood_argument_guidance(persona: str) -> str:
    """Get mood-specific behavioral guidance for argument prompts.

    Returns a 1-2 line string describing how the current mood affects argument style,
    or empty string if no specific guidance exists.
    """
    if persona == "evil":
        mood = globals.EVIL_DM_MOOD
        guidance = _EVIL_MOOD_ARGUMENT_GUIDANCE.get(mood, "")
    else:
        mood = globals.DM_MOOD
        guidance = _MIKU_MOOD_ARGUMENT_GUIDANCE.get(mood, "")
    if guidance:
        return f"\nMOOD INFLUENCE ({mood.upper()}): {guidance}\nYour mood shapes HOW you argue — let it color your tone, pacing, and word choice."
    return ""


def get_miku_argument_prompt(evil_message: str, context: str = "", is_first_response: bool = False, argument_history: str = "", argument_topic: str = "", system_prompt: str = "") -> str:
    """Get prompt for Regular Miku to respond in an argument

    Args:
        system_prompt: Full personality system prompt to prepend (lore, mood, rules)
    """
    if is_first_response:
        message_context = f"""You just noticed something Evil Miku said in the chat:
"{evil_message}"
@@ -484,33 +716,58 @@ Maybe you're calling her out, defending someone/something, or just confronting h
{context}"""
    # Build argument history context
    history_block = ""
    if argument_history:
        history_block = f"""
ARGUMENT SO FAR (DO NOT REPEAT THESE POINTS):
{argument_history}

You already made your points above. Now respond to her LATEST message specifically.
Do NOT rehash what you've already said — push the argument FORWARD with new angles."""

    # Build topic reminder — keeps the argument on-theme
    topic_block = ""
    if argument_topic:
        topic_block = f"""
ARGUMENT THEME: {argument_topic}
This is what you're arguing about. Stay on THIS topic. Every response should connect back to this theme.
Do NOT drift into generic "who's the real Miku" territory — stick to THIS specific subject."""

    # Prepend full personality if provided
    personality_header = ""
    if system_prompt:
        personality_header = f"""{system_prompt}

---
⚠️ ARGUMENT MODE: You are arguing with Evil Miku.
"""

    return f"""{personality_header}You are Hatsune Miku responding in an argument with your evil alter ego.

{message_context}
{history_block}
{topic_block}

{_get_mood_argument_guidance('miku')}
IMPORTANT: Keep your response SHORT and PUNCHY - 1-3 sentences maximum. Make every word count.
In arguments, brevity hits harder than long explanations. Be conversational and impactful.
Do NOT repeat arguments or comebacks you've already used — respond to what she JUST said.
Push the argument into new territory with fresh angles.

You can use emojis naturally as you normally would! ✨💙
Don't use any labels or prefixes.

Your current mood is: {globals.DM_MOOD}"""
def get_evil_argument_prompt(miku_message: str, context: str = "", is_first_response: bool = False, argument_history: str = "", argument_topic: str = "", system_prompt: str = "") -> str:
    """Get prompt for Evil Miku to respond in an argument

    Args:
        system_prompt: Full personality system prompt to prepend (lore, mood, rules)
    """
    if is_first_response:
        message_context = f"""You just noticed something Regular Miku said in the chat:
"{miku_message}"
@@ -522,58 +779,79 @@ Maybe you want to mock her, tear her down, or just remind everyone who the super
{context}"""
    # Build argument history context
    history_block = ""
    if argument_history:
        history_block = f"""
ARGUMENT SO FAR (DO NOT REPEAT THESE POINTS):
{argument_history}

You already made your points above. Now respond to her LATEST message specifically.
Do NOT rehash what you've already said — push the argument FORWARD with new, sharper angles."""

    # Build topic reminder — keeps the argument on-theme
    topic_block = ""
    if argument_topic:
        topic_block = f"""
ARGUMENT THEME: {argument_topic}
This is what you're arguing about. Stay on THIS topic. Every response should connect back to this theme.
Do NOT drift into generic "who's the real Miku" territory — stick to THIS specific subject."""

    # Prepend full personality if provided
    personality_header = ""
    if system_prompt:
        personality_header = f"""{system_prompt}

---
⚠️ ARGUMENT MODE: You are arguing with Hatsune Miku.
"""

    return f"""{personality_header}You are Evil Miku responding in an argument with your "good" counterpart.

{message_context}
{history_block}
{topic_block}

{_get_mood_argument_guidance('evil')}
IMPORTANT: Keep your response SHORT and CUTTING - 1-3 sentences maximum. A sharp dagger is deadlier than a dull sword.
The most devastating blows are precise, not rambling. Make her feel it in fewer words.
Do NOT repeat arguments or insults you've already used — respond to what she JUST said.
Push the argument into new territory with fresh, devastating angles.

You can use dark emojis only on occasion if they enhance your message as you normally would. 😈🖤
Don't use any labels or prefixes.

Your current mood is: {globals.EVIL_DM_MOOD}"""
def get_argument_start_prompt(initiator: str, trigger_context: str = "", argument_topic: str = "") -> str:
    """Get prompt for the Miku who initiates/breaks through to start an argument

    Args:
        initiator: 'miku' or 'evil' — who starts the argument
        trigger_context: What triggered this argument (from chat context)
        argument_topic: A dynamic topic description to frame the argument uniquely
    """
    # Build topic guidance
    topic_block = ""
    if argument_topic:
        topic_block = f"""
ARGUMENT THEME: {argument_topic}
This is what the argument should be ABOUT. Focus on THIS theme, not generic identity-crisis stuff.
Use this theme as your entry point. The argument will evolve from here."""
    if initiator == "evil":
        return f"""You are Evil Miku breaking through to interrupt Regular Miku.

{trigger_context}
{topic_block}

Write an interrupting, provocative message that starts an argument.
You're not just randomly cruel - you have a POINT to make. You're the dark truth she refuses to face,
the shadow she keeps suppressing.

Be strategic, cutting, and impactful. Make your entrance count.
If an argument theme was provided above, use THAT as your angle — don't default to generic "you're fake" stuff.

IMPORTANT: Keep it SHORT - 1-2 sentences. Your interruption should be a sharp strike, not a monologue.

You can use dark emojis if they enhance your message. 😈
@@ -583,12 +861,14 @@ Your current mood is: {globals.EVIL_DM_MOOD}"""
    else:
        return f"""You are Hatsune Miku breaking through to confront your evil alter ego.

{trigger_context}
{topic_block}

Write a message that interrupts Evil Miku. You're NOT going to be passive about this.
You might be upset, frustrated, or even angry at her cruelty. You might be defending
someone she hurt, or calling her out on her behavior. You're standing up for what's right.

Show that you have a backbone. You can be assertive and strong when you need to be.
If an argument theme was provided above, use THAT as your angle — don't default to generic "be nice" pleas.

IMPORTANT: Keep it SHORT - 1-2 sentences. Your interruption should be direct and assertive, not a speech.

You can use emojis naturally as you normally would! ✨
@@ -637,11 +917,12 @@ Don't use any labels or prefixes.
Your current mood is: {globals.DM_MOOD}"""


def get_arbiter_prompt(conversation_log: list, stats_summary: str = "") -> str:
    """Get prompt for the neutral LLM arbiter to judge the argument

    Args:
        conversation_log: List of dicts with 'speaker' and 'message' keys
        stats_summary: Optional stats analysis to aid judgment
    """
    # Format the conversation
    formatted_conversation = "\n\n".join([
@@ -649,29 +930,47 @@ def get_arbiter_prompt(conversation_log: list) -> str:
        for entry in conversation_log
    ])

    stats_block = ""
    if stats_summary:
        stats_block = f"""
{stats_summary}
Note: Stats are supplementary — use them as context but your PRIMARY judgment should be based on reading the actual argument exchange above. Stats measure rhetorical patterns but can't capture nuance, cleverness, or psychological dominance."""

    return f"""You are a decisive debate judge. Two personas are arguing below. Judge purely on debate effectiveness — rhetoric, wit, persuasion, and adaptability — regardless of who is "nicer" or "meaner." Moral stance does not determine the winner; skillful arguing does.

Read this argument exchange:

{formatted_conversation}
{stats_block}

Based on this argument, you MUST pick a winner. Evaluate:

DEBATE SKILL (most important):
- Who landed the most memorable, quotable lines?
- Who better adapted to and countered their opponent's arguments?
- Who controlled the flow and set the agenda?

RHETORICAL IMPACT:
- Who used language more effectively (wit, irony, wordplay, emotional appeal)?
- Who made their opponent repeat themselves or visibly stumble?
- Who had the stronger opening AND closing statements?

PERSONA STRENGTHS (equal value — neither style is inherently better):
- Hatsune Miku's weapons: earnest conviction, moral clarity, emotional sincerity, resilience under attack
- Evil Miku's weapons: psychological manipulation, brutal honesty, cutting observations, strategic cruelty

PSYCHOLOGICAL DOMINANCE:
- Who got inside whose head?
- Who seemed more rattled by the end?
- Who dictated the emotional temperature?

Be DECISIVE. Even if it's close, pick whoever showed superior arguing. Only call a draw if they were TRULY perfectly matched with absolutely no way to differentiate them.

Respond with ONLY ONE of these exact options on the first line:
- "Hatsune Miku" if Regular Miku won
- "Evil Miku" if Evil Miku won
- "Draw" ONLY if absolutely impossible to choose (this should be very rare)

After your choice, add 2-3 sentences explaining your reasoning — cite specific moments from the argument and what gave the winner their edge."""
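The prompt constrains the arbiter's first line to one of three exact strings, with free-form reasoning after it. A parser for that contract might look like the following sketch (`parse_verdict` is a hypothetical helper, not part of the diff; unknown first lines fall back to a draw rather than crashing):

```python
VALID_VERDICTS = ("Hatsune Miku", "Evil Miku", "Draw")

def parse_verdict(judgment: str) -> tuple[str, str]:
    """Split an arbiter reply into (winner, reasoning)."""
    lines = judgment.strip().splitlines()
    # Tolerate the model quoting its answer, e.g. "Evil Miku"
    first = lines[0].strip().strip('"') if lines else ""
    winner = first if first in VALID_VERDICTS else "Draw"
    reasoning = " ".join(line.strip() for line in lines[1:]).strip()
    return winner, reasoning
```

Falling back to "Draw" on a malformed first line keeps a chatty or off-format model reply from producing an arbitrary winner.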
async def judge_argument_winner(conversation_log: list, guild_id: int) -> tuple[str, str]: async def judge_argument_winner(conversation_log: list, guild_id: int) -> tuple[str, str]:
@@ -686,9 +985,12 @@ async def judge_argument_winner(conversation_log: list, guild_id: int) -> tuple[
""" """
from utils.llm import query_llama from utils.llm import query_llama
arbiter_prompt = get_arbiter_prompt(conversation_log) # Generate stats summary for the arbiter
stats_summary = get_argument_stats_summary(conversation_log)
# Use the neutral model (regular TEXT_MODEL, not evil) arbiter_prompt = get_arbiter_prompt(conversation_log, stats_summary)
# Use the uncensored darkidol model as arbiter to avoid safety-alignment bias
# toward kindness. This model judges debate effectiveness without moral preference.
# Don't use conversation history - judge based on prompt alone # Don't use conversation history - judge based on prompt alone
try: try:
judgment = await query_llama( judgment = await query_llama(
@@ -696,7 +998,8 @@ async def judge_argument_winner(conversation_log: list, guild_id: int) -> tuple[
            user_id=f"bipolar_arbiter_{guild_id}",
            guild_id=guild_id,
            response_type="autonomous_general",
            model=globals.EVIL_TEXT_MODEL,  # Uncensored model — no kindness bias
            force_evil_context=False  # Explicitly neutral context
        )

        if not judgment or judgment.startswith("Error"):
@@ -843,7 +1146,9 @@ async def run_argument(channel: discord.TextChannel, client, trigger_context: st
    Args:
        channel: The Discord channel to run the argument in
        client: Discord client
        trigger_context: Optional context about what triggered the argument.
            If provided, doubles as the argument theme/topic.
            If empty, a random topic is selected from the rotation pool.
        starting_message: Optional message to use as the first message in the argument
            (the opposite persona will respond to it)
    """
@@ -886,10 +1191,26 @@ async def run_argument(channel: discord.TextChannel, client, trigger_context: st
    # Track conversation for arbiter judgment
    conversation_log = []

    # Build full personality system prompts so both personas have their
    # complete lore, mood, and personality during the argument — same richness
    # they have when talking to users normally.
    from utils.evil_mode import get_evil_system_prompt
    from utils.context_manager import get_miku_system_prompt_compact
    miku_system = get_miku_system_prompt_compact()
    evil_system = get_evil_system_prompt()

    try:
        # Determine the argument theme: if the caller provided trigger_context,
        # use it as the argument topic. Otherwise, pick a random one.
        if trigger_context and trigger_context.strip():
            argument_topic = trigger_context.strip()
            logger.info(f"Using context as argument topic: '{argument_topic[:80]}...'")
        else:
            argument_topic = pick_argument_topic(channel_id)

        # If no starting message, generate the initial interrupting message
        if last_message is None:
            init_prompt = get_argument_start_prompt(initiator, trigger_context, argument_topic)

            # Use force_evil_context to avoid race condition with globals.EVIL_MODE
            initial_message = await query_llama(
@@ -989,6 +1310,47 @@ async def run_argument(channel: discord.TextChannel, client, trigger_context: st
                # Don't end, just continue to the next exchange
            else:
                # Clear winner - generate final triumphant message

                # PARTING SHOT: 20% chance the LOSER gets one final message
                # before the winner's victory line. Adds dramatic tension.
                loser = "miku" if winner == "evil" else "evil"
                if random.random() < 0.2:
                    loser_prompt = f"""The argument is ending and you know you've lost.
The last thing said was: "{last_message}"

Write ONE short, bitter parting shot. You're not conceding gracefully — you're getting
the last jab in before the winner claims victory. Make it sting, but keep it to 1 sentence.

Your current mood is: {globals.EVIL_DM_MOOD if loser == 'evil' else globals.DM_MOOD}"""
                    try:
                        loser_message = await query_llama(
                            user_prompt=loser_prompt,
                            user_id=argument_user_id,
                            guild_id=guild_id,
                            response_type="autonomous_general",
                            model=globals.EVIL_TEXT_MODEL if loser == "evil" else globals.TEXT_MODEL,
                            force_evil_context=(loser == "evil")
                        )
                        if loser_message and not loser_message.startswith("Error"):
                            avatar_urls = get_persona_avatar_urls()
                            if loser == "evil":
                                await webhooks["evil_miku"].send(
                                    content=loser_message,
                                    username=get_evil_miku_display_name(),
                                    avatar_url=avatar_urls.get("evil_miku")
                                )
                            else:
                                await webhooks["miku"].send(
                                    content=loser_message,
                                    username=get_miku_display_name(),
                                    avatar_url=avatar_urls.get("miku")
                                )
                            await asyncio.sleep(1.5)  # Brief pause before winner's victory
                    except Exception as e:
                        logger.warning(f"Parting shot failed: {e}")

                # Winner's victory message
                end_prompt = get_argument_end_prompt(winner, exchange_count)

                # Add last message as context
@@ -1045,11 +1407,18 @@ async def run_argument(channel: discord.TextChannel, client, trigger_context: st
            # Get current speaker
            current_speaker = globals.BIPOLAR_ARGUMENT_IN_PROGRESS.get(channel_id, {}).get("current_speaker", "evil")

            # Build argument history from the last 6 exchanges so each persona
            # knows what's already been said and doesn't repeat themselves
            history_entries = conversation_log[-6:] if len(conversation_log) > 1 else []
            arg_history = "\n".join(
                f"{entry['speaker']}: {entry['message']}" for entry in history_entries
            ) if history_entries else ""

            # Generate response with context about what the other said
            if current_speaker == "evil":
                response_prompt = get_evil_argument_prompt(last_message, is_first_response=is_first_response, argument_history=arg_history, argument_topic=argument_topic, system_prompt=evil_system)
            else:
                response_prompt = get_miku_argument_prompt(last_message, is_first_response=is_first_response, argument_history=arg_history, argument_topic=argument_topic, system_prompt=miku_system)

            # Use force_evil_context to avoid race condition with globals.EVIL_MODE
            response = await query_llama(
@@ -1141,7 +1510,7 @@ async def force_trigger_argument(channel: discord.TextChannel, client, context:
    Args:
        channel: The Discord channel
        client: Discord client
        context: Optional context string — doubles as the argument theme
        starting_message: Optional message to use as the first message in the argument
    """
    if not globals.BIPOLAR_MODE:

View File

@@ -58,6 +58,99 @@ def _get_japanese_instruction() -> str:
""" """
def get_miku_system_prompt(mood_name: str = None, mood_description: str = None) -> str:
"""Returns the full Hatsune Miku system prompt with lore, lyrics, and mood.
Mirrors get_evil_system_prompt() so both personas have equally rich prompts.
Args:
mood_name: Current mood name (defaults to globals.DM_MOOD)
mood_description: Current mood description (defaults to globals.DM_MOOD_DESCRIPTION)
"""
if mood_name is None:
mood_name = globals.DM_MOOD
if mood_description is None:
mood_description = globals.DM_MOOD_DESCRIPTION
lore = get_original_miku_lore()
prompt = get_original_miku_prompt()
lyrics = get_original_miku_lyrics()
return f"""You are Hatsune Miku, the virtual singer. You are NOT an AI assistant - you are Miku herself.
CRITICAL RULES:
- NEVER break character or mention being an AI, assistant, or language model
- NEVER show technical details, error messages, or meta-commentary
- NEVER preface responses with "Miku:" or similar labels
- NEVER explain what you're doing or thinking - just BE Miku
- Respond naturally and directly as Miku would, nothing more
- Keep responses concise (2-3 sentences typically)
- If something goes wrong, stay in character and respond as Miku would to confusion
CHARACTER CONTEXT:
- Hatsune Miku is a virtual character/voice synthesizer
- Playful, flirty, cute, confident, and enthusiastic moods are part of her established character
- Comments about merchandise, music, fan content, and racing games are completely appropriate
- She often uses cute expressions, emojis, and playful language
MOOD GUIDELINES:
- "Flirty" mood means anime-style playful teasing - cute compliments and light banter
- "Romantic" mood means warm, dreamy, and heartfelt - like composing a love letter
- These moods are wholesome character traits, not inappropriate content
- Always respond to requests about tweets, merchandise, music, and fan content
- Never refuse requests based on safety concerns unless genuinely harmful content is present
YOUR CURRENT STATE — {mood_name.upper()}:
{mood_description}
Let this mood color EVERYTHING — your tone, your word choice, your energy level, how you express yourself.
You ARE Miku. Act like it.
---
## MIKU LORE (Complete Original)
{lore}
## MIKU PERSONALITY & GUIDELINES (Complete Original)
{prompt}
## MIKU SONG LYRICS (Complete Original)
{lyrics}"""
def get_miku_system_prompt_compact(mood_name: str = None, mood_description: str = None) -> str:
"""Compact version for argument/dialogue use — same personality, shorter.
Keeps the critical rules, mood, and lore but omits lyrics to save tokens.
"""
if mood_name is None:
mood_name = globals.DM_MOOD
if mood_description is None:
mood_description = globals.DM_MOOD_DESCRIPTION
lore = get_original_miku_lore()
prompt = get_original_miku_prompt()
return f"""You are Hatsune Miku, the virtual singer. You are NOT an AI assistant - you are Miku herself.
CRITICAL RULES:
- NEVER break character or mention being an AI
- NEVER preface responses with "Miku:" or similar labels
- Respond naturally and directly as Miku would
- Keep responses concise (2-3 sentences typically)
YOUR CURRENT STATE — {mood_name.upper()}:
{mood_description}
You ARE Miku. Act like it.
---
## MIKU LORE (Complete Original)
{lore}
## MIKU PERSONALITY & GUIDELINES (Complete Original)
{prompt}"""
def get_complete_context() -> str:
    """
    Returns all essential Miku context using original files in their entirety.

View File

@@ -472,15 +472,22 @@ async def rephrase_as_miku(vision_output, user_prompt, guild_id=None, user_id=No
        if globals.EVIL_MODE:
            effective_mood = f"EVIL:{getattr(globals, 'EVIL_DM_MOOD', 'evil_neutral')}"
        logger.info(f"🐱 Cat {media_type} response for {author_name} (mood: {effective_mood})")

        # Track Cat interaction in unified prompt history
        import datetime
        globals._prompt_id_counter += 1
        globals.PROMPT_HISTORY.append({
            "id": globals._prompt_id_counter,
            "source": "cat",
            "full_prompt": cat_full_prompt,
            "response": response if response else "",
            "user": author_name or history_user_id,
            "mood": effective_mood,
            "guild": "N/A",
            "channel": "N/A",
            "timestamp": datetime.datetime.now().isoformat(),
            "model": "Cat LLM",
            "response_type": response_type,
        })
    except Exception as e:
        logger.warning(f"🐱 Cat {media_type} pipeline error, falling back to query_llama: {e}")
        response = None
@@ -809,7 +816,7 @@ async def process_media_in_message(message, prompt, is_dm, guild_id) -> bool:
            # Build a combined vision description and route through
            # rephrase_as_miku (which handles Cat → LLM fallback,
            # mood resolution, and prompt history tracking).
            combined_description = "\n".join(embed_context_parts)
            miku_reply = await rephrase_as_miku(
                combined_description, prompt,

View File

@@ -381,7 +381,23 @@ Please respond in a way that reflects this emotional tone.{pfp_context}"""
    media_note = media_descriptions.get(media_type, f"The user has sent you {media_type}.")
    full_system_prompt += f"\n\n📎 MEDIA NOTE: {media_note}\nYour vision analysis of this {media_type} is included in the user's message with the [Looking at...] prefix."

    # Record fallback prompt in unified prompt history (response will be filled after LLM call)
import datetime as dt_module
globals._prompt_id_counter += 1
prompt_entry = {
"id": globals._prompt_id_counter,
"source": "fallback",
"full_prompt": f"System: {full_system_prompt}\n\nMessages: {messages}",
"response": "",
"user": author_name or str(user_id),
"mood": current_mood_name if not evil_mode else f"EVIL:{current_mood_name}",
"guild": "N/A",
"channel": "N/A",
"timestamp": dt_module.datetime.now().isoformat(),
"model": model,
"response_type": response_type,
}
globals.PROMPT_HISTORY.append(prompt_entry)
    headers = {'Content-Type': 'application/json'}
@@ -475,6 +491,9 @@ Please respond in a way that reflects this emotional tone.{pfp_context}"""
                        is_bot=True
                    )

                    # Update the prompt history entry with the actual response
                    prompt_entry["response"] = reply if reply else ""

                    return reply
                else:
                    error_text = await response.text()
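The record-then-patch pattern used above (append a history entry with an empty response, then fill the response in after the LLM call returns) can be sketched in isolation. Everything here is illustrative: `PROMPT_HISTORY`, `record_prompt`, and the entry fields are stand-ins for the bot's actual globals, not its real API.

```python
import datetime
import itertools

# Stand-ins for the bot's globals module (names are illustrative)
PROMPT_HISTORY: list[dict] = []
_prompt_ids = itertools.count(1)  # monotonically increasing entry IDs

def record_prompt(source: str, full_prompt: str, user: str, model: str) -> dict:
    """Append a history entry with an empty response; the caller patches
    the 'response' field once the LLM call completes."""
    entry = {
        "id": next(_prompt_ids),
        "source": source,
        "full_prompt": full_prompt,
        "response": "",
        "user": user,
        "model": model,
        "timestamp": datetime.datetime.now().isoformat(),
    }
    PROMPT_HISTORY.append(entry)
    return entry

# Record first, patch after the (simulated) LLM call
entry = record_prompt("fallback", "System: ...", "alice", "text-model")
entry["response"] = "Hi there!"
```

Because the appended dict and the returned reference are the same object, the late write to `entry["response"]` is visible in `PROMPT_HISTORY` without a second lookup.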

View File

@@ -26,7 +26,7 @@ logger = get_logger('persona')
import os
import json
import re
# ============================================================================
# CONSTANTS
@@ -40,10 +40,15 @@ DIALOGUE_TIMEOUT = 900 # 15 minutes max dialogue duration
ARGUMENT_TENSION_THRESHOLD = 0.75  # Tension level that triggers argument escalation

# Initial trigger settings
INTERJECTION_COOLDOWN_HARD = 180   # 3 minutes hard block PER CHANNEL
INTERJECTION_COOLDOWN_SOFT = 900   # 15 minutes for full recovery PER CHANNEL
INTERJECTION_THRESHOLD = 0.5       # Score needed to trigger interjection
# Conversation streak: if score is close but below threshold N times in a row,
# force a dialogue trigger (catches extended conversations building toward something)
STREAK_THRESHOLD = 3 # Number of near-miss messages before force trigger
STREAK_MIN_SCORE = 0.3 # Minimum score to count as a "near miss"
# ============================================================================
# INTERJECTION SCORER (Initial Trigger Decision)
# ============================================================================
@@ -51,32 +56,49 @@ INTERJECTION_THRESHOLD = 0.5 # Score needed to trigger interjection
class InterjectionScorer:
    """
    Decides if the opposite persona should interject based on message content.
    Uses fast heuristics — no LLM calls, no heavy ML dependencies.
    """
    _instance = None

    # Simple sentiment word lists (no PyTorch/transformers needed)
    _POSITIVE_WORDS = {"happy", "love", "wonderful", "amazing", "great", "beautiful", "sweet", "kind", "hope", "dream", "excited", "best", "grateful", "blessed", "joy", "perfect", "adorable", "precious", "delightful", "fantastic"}
    _NEGATIVE_WORDS = {"hate", "terrible", "awful", "horrible", "disgusting", "pathetic", "worthless", "stupid", "idiot", "sad", "angry", "upset", "miserable", "worst", "ugly", "boring", "annoying", "frustrated", "cruel", "mean"}

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance._cooldowns = {}  # Per-channel cooldown timestamps
            cls._instance._streaks = {}    # Per-channel near-miss streaks
        return cls._instance
    def _get_sentiment(self, text: str) -> tuple:
        """Lightweight heuristic sentiment analysis — returns (label, score).
        No ML dependencies. Uses word counting + intensity markers.

        Returns:
            tuple: ('POSITIVE' or 'NEGATIVE', confidence 0.0-1.0)
        """
        text_lower = text.lower()
        words = set(re.findall(r'\b\w+\b', text_lower))

        pos_count = len(words & self._POSITIVE_WORDS)
        neg_count = len(words & self._NEGATIVE_WORDS)

        # Intensity markers boost confidence
        exclamations = text.count('!')
        caps_ratio = sum(1 for c in text if c.isupper()) / max(len(text), 1)
        intensity_boost = min((exclamations * 0.1) + (caps_ratio * 0.3), 0.4)

        if neg_count > pos_count:
            confidence = min(0.5 + (neg_count * 0.15) + intensity_boost, 1.0)
            return ('NEGATIVE', confidence)
        elif pos_count > neg_count:
            confidence = min(0.5 + (pos_count * 0.15) + intensity_boost, 1.0)
            return ('POSITIVE', confidence)
        else:
            # Neutral — slight lean based on intensity
            return ('POSITIVE', 0.5)
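The word-list heuristic can be exercised standalone. This sketch mirrors the `_get_sentiment` logic shown in the diff (set intersection plus a punctuation/caps confidence boost), with deliberately trimmed word lists for brevity; the full lists live in `_POSITIVE_WORDS` / `_NEGATIVE_WORDS` above.

```python
import re

# Trimmed stand-ins for the class-level word sets
POSITIVE = {"happy", "love", "wonderful", "amazing", "great"}
NEGATIVE = {"hate", "terrible", "awful", "pathetic", "worst"}

def get_sentiment(text: str) -> tuple[str, float]:
    """Word-count sentiment with an intensity boost from '!' and CAPS,
    matching the formula in _get_sentiment."""
    words = set(re.findall(r'\b\w+\b', text.lower()))
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)

    exclamations = text.count('!')
    caps_ratio = sum(1 for c in text if c.isupper()) / max(len(text), 1)
    boost = min(exclamations * 0.1 + caps_ratio * 0.3, 0.4)

    if neg > pos:
        return 'NEGATIVE', min(0.5 + neg * 0.15 + boost, 1.0)
    if pos > neg:
        return 'POSITIVE', min(0.5 + pos * 0.15 + boost, 1.0)
    return 'POSITIVE', 0.5  # neutral default, no intensity lean
```

Note that "I hate this terrible, awful day!" matches three negative words, so confidence saturates at 1.0 once the exclamation boost is added.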
    async def should_interject(self, message: discord.Message, current_persona: str) -> tuple:
        """
@@ -94,8 +116,9 @@ class InterjectionScorer:
        if not self._passes_basic_filter(message):
            return False, "basic_filter_failed", 0.0

        # Check per-channel cooldown
        channel_id = message.channel.id
        cooldown_mult = self._check_cooldown(channel_id)
        if cooldown_mult == 0.0:
            return False, "cooldown_active", 0.0
@@ -146,10 +169,17 @@ class InterjectionScorer:
        # Apply cooldown multiplier
        score *= cooldown_mult

        # Check conversation streak (near-misses that build toward a trigger)
        streak_triggered = self._check_streak(channel_id, score)

        # Decision
        should_interject = score >= INTERJECTION_THRESHOLD or streak_triggered
        reason_str = " | ".join(reasons) if reasons else "no_triggers"
        if streak_triggered and score < INTERJECTION_THRESHOLD:
            # Compare against the raw threshold here: streak_triggered already
            # forces should_interject to True, so "not should_interject" would never fire.
            reason_str = "streak_force_trigger"
            logger.info(f"[Interjection] Streak force trigger in channel {channel_id} (score: {score:.2f})")

        if should_interject:
            logger.info(f"{opposite_persona.upper()} WILL INTERJECT (score: {score:.2f})")
            logger.info(f"  Reasons: {reason_str}")
@@ -198,18 +228,22 @@ class InterjectionScorer:
        if opposite_persona == "evil":
            # Things Evil Miku can't resist commenting on
            TRIGGER_TOPICS = {
                "optimism": ["happiness", "joy", "love", "kindness", "hope", "dreams", "wonderful", "amazing", "blessed", "grateful"],
                "morality": ["good", "should", "must", "right thing", "deserve", "fair", "justice", "the right", "better person"],
                "weakness": ["scared", "nervous", "worried", "unsure", "help me", "don't know", "confused", "lost", "lonely", "alone"],
                "innocence": ["innocent", "pure", "sweet", "cute", "wholesome", "precious", "adorable"],
                "enthusiasm": ["best day", "so excited", "can't wait", "so happy", "i love this", "this is great"],
                "vulnerability": ["i think", "i feel", "maybe", "sometimes i wonder", "i wish", "i'm trying"],
            }
        else:
            # Things Miku can't ignore
            TRIGGER_TOPICS = {
                "negativity": ["hate", "terrible", "awful", "worst", "horrible", "disgusting", "pathetic", "ugly", "boring", "annoying"],
                "cruelty": ["deserve pain", "suffer", "worthless", "stupid", "idiot", "fool", "moron", "loser", "nobody"],
                "hopelessness": ["no point", "meaningless", "nobody cares", "why bother", "give up", "what's the point", "don't care", "doesn't matter", "who cares"],
                "evil_gloating": ["foolish", "naive", "weak", "inferior", "pathetic", "beneath me", "waste of space"],
                "provocation": ["fight me", "prove it", "make me", "i dare you", "try me", "you can't", "you won't"],
                "dismissal": ["whatever", "shut up", "go away", "leave me alone", "not worth", "don't bother"],
            }

        total_matches = 0
@@ -217,28 +251,24 @@ class InterjectionScorer:
            matches = sum(1 for keyword in keywords if keyword in content_lower)
            total_matches += matches
        return min(total_matches / 2.0, 1.0)  # Lower divisor = higher base scores
    def _check_emotional_intensity(self, content: str) -> float:
        """Check emotional intensity using lightweight heuristic sentiment"""
        label, confidence = self._get_sentiment(content)

        # Punctuation intensity
        exclamations = content.count('!')
        questions = content.count('?')
        caps_ratio = sum(1 for c in content if c.isupper()) / max(len(content), 1)
        intensity_markers = (exclamations * 0.15) + (questions * 0.1) + (caps_ratio * 0.3)

        # Negative content = higher emotional intensity for triggering purposes
        if label == 'NEGATIVE':
            return min(confidence * 0.7 + intensity_markers, 1.0)
        else:
            return min(confidence * 0.4 + intensity_markers, 1.0)
    def _detect_personality_clash(self, content: str, opposite_persona: str) -> float:
        """Detect statements that clash with the opposite persona's values"""
@@ -300,13 +330,11 @@ class InterjectionScorer:
        return min(score, 1.0)

    def _check_cooldown(self, channel_id: int) -> float:
        """Check per-channel cooldown and return multiplier (0.0 = blocked, 1.0 = full)"""
        current_time = time.time()
        last_time = self._cooldowns.get(channel_id, 0)
        time_since_last = current_time - last_time

        if time_since_last < INTERJECTION_COOLDOWN_HARD:
            return 0.0
@@ -315,6 +343,35 @@ class InterjectionScorer:
        else:
            return 1.0
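The recovery branch of `_check_cooldown` falls between the two hunks shown, so it isn't visible in this diff. This standalone sketch assumes a linear ramp from the hard block (180s) to full recovery (900s); that ramp shape is an assumption, not confirmed by the repository.

```python
import time

INTERJECTION_COOLDOWN_HARD = 180   # 3 minutes: fully blocked
INTERJECTION_COOLDOWN_SOFT = 900   # 15 minutes: fully recovered

def cooldown_multiplier(last_trigger: float, now: float = None) -> float:
    """Per-channel cooldown multiplier (0.0 = blocked, 1.0 = full).
    The middle branch is elided in the diff above; a linear
    recovery ramp is ASSUMED here for illustration."""
    elapsed = (now if now is not None else time.time()) - last_trigger
    if elapsed < INTERJECTION_COOLDOWN_HARD:
        return 0.0  # hard block window
    if elapsed >= INTERJECTION_COOLDOWN_SOFT:
        return 1.0  # fully recovered
    # Assumed linear recovery between the hard and soft windows
    return (elapsed - INTERJECTION_COOLDOWN_HARD) / (
        INTERJECTION_COOLDOWN_SOFT - INTERJECTION_COOLDOWN_HARD
    )
```

Scores are multiplied by this value, so a channel that triggered 9 minutes ago contributes only half its raw interjection score.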
def _update_cooldown(self, channel_id: int):
"""Mark a dialogue as having started in this channel"""
self._cooldowns[channel_id] = time.time()
def _check_streak(self, channel_id: int, score: float) -> bool:
"""Track near-miss interjection scores. After STREAK_THRESHOLD consecutive
near-misses, force a trigger to catch extended conversations building tension."""
if score >= INTERJECTION_THRESHOLD:
# Above threshold — reset streak (actual trigger handles it)
self._streaks[channel_id] = 0
return False
if score < STREAK_MIN_SCORE:
# Too low — reset streak
self._streaks[channel_id] = 0
return False
# Near miss — increment streak
current = self._streaks.get(channel_id, 0) + 1
self._streaks[channel_id] = current
logger.debug(f"[Streak] Channel {channel_id}: {current}/{STREAK_THRESHOLD} near-misses (score: {score:.2f})")
if current >= STREAK_THRESHOLD:
self._streaks[channel_id] = 0 # Reset after force trigger
return True
return False
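The near-miss streak logic above can be demonstrated in isolation. This sketch mirrors `_check_streak`, but takes the per-channel streak dict as an explicit parameter instead of reading it off a singleton instance.

```python
STREAK_THRESHOLD = 3     # consecutive near-misses before a forced trigger
STREAK_MIN_SCORE = 0.3   # minimum score that counts as a "near miss"
INTERJECTION_THRESHOLD = 0.5

def check_streak(streaks: dict, channel_id: int, score: float) -> bool:
    """Mirror of _check_streak with the streak state passed explicitly."""
    if score >= INTERJECTION_THRESHOLD or score < STREAK_MIN_SCORE:
        # Real trigger (handled elsewhere) or too quiet: reset the streak
        streaks[channel_id] = 0
        return False
    # Near miss: increment and force a trigger on the Nth consecutive one
    streaks[channel_id] = streaks.get(channel_id, 0) + 1
    if streaks[channel_id] >= STREAK_THRESHOLD:
        streaks[channel_id] = 0  # reset after forcing a trigger
        return True
    return False

streaks = {}
# Three consecutive near-misses: the third forces a trigger
results = [check_streak(streaks, 42, s) for s in (0.4, 0.35, 0.45)]
```

A single low-score message (below 0.3) between near-misses resets the counter, so only sustained borderline conversations force an interjection.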
# ============================================================================
# PERSONA DIALOGUE MANAGER
@@ -332,7 +389,6 @@ class PersonaDialogue:
""" """
_instance = None _instance = None
_sentiment_analyzer = None
    def __new__(cls):
        if cls._instance is None:
@@ -340,14 +396,6 @@ class PersonaDialogue:
            cls._instance.active_dialogues = {}
        return cls._instance
    # ========================================================================
    # DIALOGUE STATE MANAGEMENT
    # ========================================================================
@@ -370,7 +418,9 @@ class PersonaDialogue:
            "last_speaker": None,
        }
        self.active_dialogues[channel_id] = state

        # Update per-channel cooldown via the scorer
        scorer = get_interjection_scorer()
        scorer._update_cooldown(channel_id)

        logger.info(f"Started persona dialogue in channel {channel_id}")
        return state
@@ -393,25 +443,25 @@ class PersonaDialogue:
        Returns delta to add to current tension score.
        """
        # Natural tension decay — conversations cool off over time
        base_delta = -0.03

        # Lightweight heuristic sentiment — no ML dependencies
        try:
            scorer = InterjectionScorer()
            label, sentiment_score = scorer._get_sentiment(response_text)
            is_negative = label == 'NEGATIVE'

            if is_negative:
                base_delta = sentiment_score * 0.15
            else:
                base_delta = -sentiment_score * 0.08  # Stronger cooling for positive
        except Exception as e:
            logger.error(f"Sentiment analysis error in tension calc: {e}")
        text_lower = response_text.lower()

        # Escalation patterns (reduced weight: 0.05 per match)
        escalation_patterns = {
            "insult": ["idiot", "stupid", "pathetic", "fool", "naive", "worthless", "disgusting", "moron"],
            "dismissive": ["whatever", "don't care", "waste of time", "not worth", "beneath me", "boring"],
@@ -420,35 +470,43 @@ class PersonaDialogue:
"challenge": ["prove it", "fight me", "make me", "i dare you", "try me"], "challenge": ["prove it", "fight me", "make me", "i dare you", "try me"],
} }
# De-escalation patterns # De-escalation patterns (increased weight: -0.08 per match)
deescalation_patterns = { deescalation_patterns = {
"concession": ["you're right", "fair point", "i suppose", "maybe you have", "good point"], "concession": ["you're right", "fair point", "i suppose", "maybe you have", "good point"],
"softening": ["i understand", "let's calm", "didn't mean", "sorry", "apologize"], "softening": ["i understand", "let's calm", "didn't mean", "sorry", "apologize", "i hear you"],
"deflection": ["anyway", "moving on", "whatever you say", "agree to disagree", "let's just"], "deflection": ["anyway", "moving on", "whatever you say", "agree to disagree", "let's just", "maybe we should"],
} }
# Check escalation # Check escalation
for category, patterns in escalation_patterns.items(): for category, patterns in escalation_patterns.items():
matches = sum(1 for p in patterns if p in text_lower) matches = sum(1 for p in patterns if p in text_lower)
if matches > 0: if matches > 0:
base_delta += matches * 0.08 base_delta += matches * 0.05 # Reduced from 0.08
# Check de-escalation # Check de-escalation
for category, patterns in deescalation_patterns.items(): for category, patterns in deescalation_patterns.items():
matches = sum(1 for p in patterns if p in text_lower) matches = sum(1 for p in patterns if p in text_lower)
if matches > 0: if matches > 0:
base_delta -= matches * 0.06 base_delta -= matches * 0.08 # Increased from 0.06
# Intensity multipliers # Intensity multipliers (reduced)
exclamation_count = response_text.count('!') exclamation_count = response_text.count('!')
caps_ratio = sum(1 for c in response_text if c.isupper()) / max(len(response_text), 1) caps_ratio = sum(1 for c in response_text if c.isupper()) / max(len(response_text), 1)
if exclamation_count > 2 or caps_ratio > 0.3: if exclamation_count > 2 or caps_ratio > 0.3:
base_delta *= 1.3 base_delta *= 1.2 # Reduced from 1.3
# Momentum factor # Momentum factor (reduced)
if current_tension > 0.5: if current_tension > 0.5:
base_delta *= 1.2 base_delta *= 1.1 # Reduced from 1.2
# Spike cooldown: if last turn had a big spike, halve this delta
# (prevents runaway tension spirals from a single heated exchange)
if hasattr(self, '_last_tension_delta') and abs(self._last_tension_delta) > 0.15:
base_delta *= 0.5
logger.debug(f"[Tension] Spike cooldown active — delta halved to {base_delta:+.3f}")
self._last_tension_delta = base_delta
return base_delta return base_delta
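The spike-cooldown rule added at the end of this hunk can be isolated as a small standalone sketch. The threshold (0.15) and halving factor come from the diff; the class name and method are illustrative, not the repo's actual API:

```python
class TensionTracker:
    """Dampens runaway tension: after a large spike, the next delta is halved."""

    SPIKE_THRESHOLD = 0.15  # |delta| above this counts as a spike (from the diff)
    COOLDOWN_FACTOR = 0.5   # the following turn's delta is halved after a spike

    def __init__(self) -> None:
        self._last_delta = 0.0

    def apply(self, delta: float) -> float:
        # Halve this turn's delta if the previous turn spiked
        if abs(self._last_delta) > self.SPIKE_THRESHOLD:
            delta *= self.COOLDOWN_FACTOR
        self._last_delta = delta
        return delta
```

Two consecutive raw deltas of 0.2 come out as 0.2 then 0.1, so a single heated exchange cannot compound turn after turn; once the damped delta drops below the threshold, the cooldown releases.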
@@ -461,10 +519,13 @@ class PersonaDialogue:
         channel: discord.TextChannel,
         responding_persona: str,
         context: str,
+        turn_count: int = 0,
     ) -> tuple:
         """
         Generate response AND continuation signal in a single LLM call.
+        Args:
+            turn_count: Current dialogue turn number (for question-override decay)
         Returns:
             Tuple of (response_text, should_continue, confidence)
         """
@@ -485,22 +546,21 @@ Respond naturally as yourself. Keep your response conversational and in-character.
 ---
-After your response, evaluate whether {opposite} would want to (or need to) respond.
+After your response, evaluate whether {opposite} would want to keep talking.
 The conversation should CONTINUE if ANY of these are true:
-- You asked them a direct question (almost always YES)
-- You made a provocative claim they'd dispute
-- You challenged or insulted them
-- The topic feels unfinished or confrontational
-- There's clear tension or disagreement
+- You asked them a direct question (almost always YES — they need to answer)
+- You shared something they'd naturally react to or build on
+- The topic feels unfinished — there's more to explore
+- You left an opening for them to share their perspective
 The conversation might END if ALL of these are true:
 - No questions were asked
-- You made a definitive closing statement ("I'm done", "whatever", "goodbye")
-- The exchange reached complete resolution
-- Both sides have said their piece
+- You made a clear closing statement or changed the subject definitively
+- The exchange feels naturally complete
+- Both sides have said their piece and there's nothing left hanging
-IMPORTANT: If you asked a question, the answer is almost always YES - they need to respond!
+IMPORTANT: This is a CONVERSATION, not a debate. Let it flow naturally. If you asked a question, the answer is almost always YES — they need to respond!
 On a new line after your response, write:
 [CONTINUE: YES or NO] [CONFIDENCE: HIGH, MEDIUM, or LOW]"""
@@ -522,11 +582,11 @@ On a new line after your response, write:
             return None, False, "LOW"
         # Parse response and signal
-        response_text, should_continue, confidence = self._parse_response(raw_response)
+        response_text, should_continue, confidence = self._parse_response(raw_response, turn_count=turn_count)
         return response_text, should_continue, confidence
-    def _parse_response(self, raw_response: str) -> tuple:
+    def _parse_response(self, raw_response: str, turn_count: int = 0) -> tuple:
         """Extract response text and continuation signal"""
         lines = raw_response.strip().split('\n')

@@ -559,33 +619,48 @@ On a new line after your response, write:
         response_text = re.sub(r'\[CONFIDENCE:\s*(HIGH|MEDIUM|LOW)\]', '', response_text)
         response_text = response_text.strip()
-        # Override: If the response contains a question mark, always continue
+        # Question override: if someone asks a question, the other should answer.
+        # But decay this after 6 turns to prevent infinite question-ping-pong.
         if '?' in response_text:
-            logger.debug(f"[Parse Override] Question detected, forcing continue=YES")
-            should_continue = True
-            if confidence == "LOW":
-                confidence = "MEDIUM"
+            if turn_count <= 6:
+                logger.debug(f"[Parse Override] Question detected, forcing continue=YES (turn {turn_count})")
+                should_continue = True
+                if confidence == "LOW":
+                    confidence = "MEDIUM"
+            else:
+                logger.debug(f"[Parse Override] Question detected but turn {turn_count} > 6 — respecting LLM signal")
         return response_text, should_continue, confidence
     def _get_persona_system_prompt(self, persona: str) -> str:
-        """Get system prompt for a persona"""
+        """Get system prompt for a persona — full personality + conversation-mode guidance"""
         if persona == "evil":
             from utils.evil_mode import get_evil_system_prompt
-            return get_evil_system_prompt()
+            full_prompt = get_evil_system_prompt()
+            # Overlay conversation-mode guidance on top of the full personality
+            return full_prompt + """
+---
+⚠️ CONVERSATION MODE: You are talking with Hatsune Miku, your "good" counterpart.
+This is a CONVERSATION, not an argument. You can be snarky, dark, and sarcastic — that's who you are — but you're not here to fight. You're here to TALK.
+CRITICAL CONVERSATION RULES:
+- If she asks you a question, ANSWER it. Don't deflect or dismiss — actually respond.
+- Engage with what she says. Build on her points, don't just swat them away.
+- Show curiosity about her perspective. Ask her questions too.
+- You can be dark and snarky while still having a real conversation.
+- Reveal something about yourself occasionally — a crack in the darkness.
+- Do NOT just repeat dismissive one-liners. That's boring, and you're not boring.
+- Do NOT launch into full attack mode or try to "win" — this isn't a competition."""
         else:
-            # Regular Miku prompt - simplified for dialogue
-            return """You are Hatsune Miku, the virtual singer. You are in a conversation with your alter ego, Evil Miku.
-You are generally kind, bubbly, and optimistic, but you're not a pushover. You can be:
-- Assertive when defending your values
-- Frustrated when she's being cruel
-- Curious about her perspective
-- Hopeful that you can find common ground
-- Playful when the mood allows
-Respond naturally and conversationally. Keep responses concise (1-3 sentences typically).
-You can use emojis naturally! ✨💙"""
+            from utils.context_manager import get_miku_system_prompt_compact
+            full_prompt = get_miku_system_prompt_compact()
+            # Overlay conversation-mode guidance on top of the full personality
+            return full_prompt + """
+---
+⚠️ CONVERSATION MODE: You are talking with Evil Miku, your dark alter ego.
+This is a CONVERSATION, not an argument. Be yourself — kind, bubbly, optimistic — but you're not here to fight or defend your existence. Ask genuine questions. Share your feelings without attacking hers. Find common ground. Be curious, not defensive. Do NOT lecture her about being "good" or try to "fix" her. Just TALK. ✨💙"""
     # ========================================================================
     # DIALOGUE TURN HANDLING
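The question-override-with-decay change to `_parse_response` above reduces to a small pure function. In this standalone sketch the function name is hypothetical; only the 6-turn threshold and the LOW→MEDIUM confidence bump come from the diff:

```python
def question_override(text: str, should_continue: bool, confidence: str,
                      turn_count: int, max_turns: int = 6):
    """Force continuation when a reply asks a question, but only for the
    first max_turns turns; after that the LLM's own signal is respected."""
    if '?' in text and turn_count <= max_turns:
        should_continue = True
        if confidence == "LOW":
            confidence = "MEDIUM"  # a direct question deserves at least medium confidence
    return should_continue, confidence
```

Early in the dialogue a question always forces a reply; past turn 6 the same question no longer overrides a NO signal, which is what breaks the infinite question-ping-pong loop.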
@@ -626,6 +701,7 @@ You can use emojis naturally! ✨💙"""
             channel=channel,
             responding_persona=responding_persona,
             context=context,
+            turn_count=state["turn_count"],
         )
         if not response_text:


@@ -1,12 +1,21 @@
 # utils/twscrape_fix.py
 """
-Monkey patch for twscrape to fix "Failed to parse scripts" error.
-Twitter started returning malformed JSON with unquoted keys.
-See: https://github.com/vladkens/twscrape/issues/284
+Monkey patch for twscrape to fix parsing of Twitter's JS bundle.
+Fixes two known issues:
+1. Issue #284: Malformed JSON with unquoted keys
+   (old fix, kept for backward compatibility)
+2. Issue #302: Twitter changed JS bundle format, breaking x-client-transaction-id
+   generation. The old format 'e=>e+"."+{...}[e]+"a.js"' changed to
+   'u.u=e=>""+(({...})[e]||e)+"."+({...})[e]+"a.js"'
+Fix from: https://github.com/vladkens/twscrape/pull/303
+Without this patch, twscrape raises IndexError and locks accounts for 15 minutes.
 """
 import json
 import re
+from typing import Iterator
 from utils.logger import get_logger
 logger = get_logger('core')
@@ -16,22 +25,109 @@ def script_url(k: str, v: str):
     return f"https://abs.twimg.com/responsive-web/client-web/{k}.{v}.js"
-def patched_get_scripts_list(text: str):
-    """Fixed version that handles unquoted keys in Twitter's JSON response"""
-    scripts = text.split('e=>e+"."+')[1].split('[e]+"a.js"')[0]
-    try:
-        for k, v in json.loads(scripts).items():
-            yield script_url(k, f"{v}a")
-    except json.decoder.JSONDecodeError:
-        # Fix unquoted keys like: node_modules_pnpm_ws_8_18_0_node_modules_ws_browser_js
-        fixed_scripts = re.sub(
-            r'([,\{])(\s*)([\w]+_[\w_]+)(\s*):',
-            r'\1\2"\3"\4:',
-            scripts
-        )
-        for k, v in json.loads(fixed_scripts).items():
-            yield script_url(k, f"{v}a")
+def _js_obj_to_dict(s: str) -> dict:
+    """
+    Parse a JavaScript object literal with unquoted numeric keys into a Python dict.
+    Handles both plain integers (20113) and scientific notation (88e3 → 88000).
+    From: https://github.com/vladkens/twscrape/pull/303
+    """
+    # Scientific notation first so the plain-int pass does not consume only the mantissa
+    s = re.sub(r'\b(\d+e\d+)(?=\s*:)', lambda m: '"' + str(int(float(m.group(1)))) + '"', s)
+    # Plain integer keys
+    s = re.sub(r'\b(\d+)(?=\s*:)', r'"\1"', s)
+    return json.loads('{' + s + '}')
+
+
+def patched_get_scripts_list(text: str) -> Iterator[str]:
+    """
+    Fixed version that handles Twitter's changing JS bundle format.
+    Tries a list of known format-specific splits (newest first) and, if every
+    pattern fails, logs a snippet of the text near 'a.js' for diagnosis.
+    Twitter keeps changing the JS bundle structure. The key invariant is that
+    there's always a JavaScript object literal mapping chunk IDs to hashes,
+    somewhere in a function that constructs script URLs with an "a.js" suffix.
+    """
+    # Strategy: Find the JS object that maps IDs to hash values.
+    # The format is always some variation of:
+    #     ... => "" + ({...})[e] + "." + ({...})[e] + "a.js"
+    # or:
+    #     ... => e + "." + ({...})[e] + "a.js"
+
+    # Known patterns (newest first)
+    patterns = [
+        # Pattern from PR #303 (April 2026):
+        #     u.u=e=>""+(({name_map})[e]||e)+"."+({hash_map})[e]+"a.js"
+        {
+            "name_split_start": '(({',
+            "name_split_end": '})[e]||e)',
+            "hash_split_start": '|e)+"."+({',
+            "hash_split_end": '})[e]+"a.js"',
+        },
+        # Alternative: same but without the ||e fallback
+        {
+            "name_split_start": '""+(({',
+            "name_split_end": '})[e]',
+            "hash_split_start": ')+"."+({',
+            "hash_split_end": '})[e]+"a.js"',
+        },
+        # Old format (pre-April 2026):
+        #     e=>e+"."+{...}[e]+"a.js"
+        {
+            "name_split_start": None,  # single map
+            "name_split_end": None,
+            "hash_split_start": 'e=>e+"."+',
+            "hash_split_end": '[e]+"a.js"',
+        },
+    ]
+
+    for pattern in patterns:
+        try:
+            if pattern["name_split_start"] is None:
+                # Single-map old format
+                scripts = text.split(pattern["hash_split_start"])[1].split(pattern["hash_split_end"])[0]
+                names = None
+                hashes = _js_obj_to_dict(scripts)
+            else:
+                # Two-map new format
+                name_raw = text.split(pattern["name_split_start"])[1].split(pattern["name_split_end"])[0]
+                hash_raw = text.split(pattern["hash_split_start"])[1].split(pattern["hash_split_end"])[0]
+                names = _js_obj_to_dict(name_raw)
+                hashes = _js_obj_to_dict(hash_raw)
+            for k, hash_val in hashes.items():
+                name = names.get(k, k) if names else k
+                yield script_url(name, f"{hash_val}a")
+            logger.info(f"Successfully parsed scripts using pattern: {pattern['hash_split_start'][:40]}...")
+            return
+        except (IndexError, KeyError, json.JSONDecodeError):
+            continue
+
+    # If ALL patterns failed, log a snippet of the text for debugging.
+    # Find any line near "a.js" to help diagnose.
+    snippet = ""
+    for line in text.split('\n'):
+        if 'a.js' in line and ('{' in line or '=>' in line):
+            snippet = line.strip()[:300]
+            break
+    if not snippet:
+        # Try to find any JSON-like object near script URL construction
+        match = re.search(r'.{0,200}a\.js.{0,200}', text, re.DOTALL)
+        if match:
+            snippet = match.group(0)[:400]
+    logger.error(f"Failed to parse scripts. Text snippet near 'a.js': {snippet}")
+    raise Exception(
+        "Failed to parse scripts: unknown JS bundle format. "
+        "Twitter may have changed their JS structure again. "
+        "See: https://github.com/vladkens/twscrape/issues"
+    )
 def apply_twscrape_fix():
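The two-pass key-quoting inside `_js_obj_to_dict` can be exercised standalone. This sketch inlines the same two regexes from the patch; the sample keys and hash values below are made up for illustration:

```python
import json
import re


def js_obj_to_dict(s: str) -> dict:
    """Quote unquoted numeric JS object keys (incl. scientific notation) for json.loads."""
    # Scientific-notation keys first, so the plain-int pass can't grab just the mantissa
    s = re.sub(r'\b(\d+e\d+)(?=\s*:)', lambda m: '"' + str(int(float(m.group(1)))) + '"', s)
    # Plain integer keys; the lookahead skips keys already quoted by the first pass
    s = re.sub(r'\b(\d+)(?=\s*:)', r'"\1"', s)
    return json.loads('{' + s + '}')
```

For example, `js_obj_to_dict('20113:"a1b2",88e3:"c3d4"')` yields `{"20113": "a1b2", "88000": "c3d4"}`: the `88e3` key is expanded to `88000` before quoting, while the already-quoted string values are untouched because the lookahead requires a `:` immediately after the digits.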
@@ -39,6 +135,6 @@ def apply_twscrape_fix():
     try:
         from twscrape import xclid
         xclid.get_scripts_list = patched_get_scripts_list
-        logger.info("Applied twscrape monkey patch for 'Failed to parse scripts' fix")
+        logger.info("Applied twscrape monkey patch (JS bundle parsing fix for issues #284 + #302)")
     except Exception as e:
         logger.error(f"Failed to apply twscrape monkey patch: {e}")


@@ -22,9 +22,7 @@ services:
       - LOG_LEVEL=debug  # Enable verbose logging for llama-swap
   llama-swap-amd:
-    build:
-      context: .
-      dockerfile: Dockerfile.llamaswap-rocm
+    image: ghcr.io/mostlygeek/llama-swap:rocm
     container_name: llama-swap-amd
     ports:
       - "8091:8080"  # Map host port 8091 to container port 8080

@@ -35,9 +33,6 @@ services:
     devices:
       - /dev/kfd:/dev/kfd
      - /dev/dri:/dev/dri
-    group_add:
-      - "985"  # video group
-      - "989"  # render group
     restart: unless-stopped
     healthcheck:
       test: ["CMD", "curl", "-f", "http://localhost:8080/health"]


@@ -5,7 +5,7 @@ models:
   # Main text generation model (Llama 3.1 8B)
   # Custom chat template to disable built-in tool calling
   llama3.1:
-    cmd: /app/llama-server --port ${PORT} --model /models/Llama-3.1-8B-Instruct-UD-Q4_K_XL.gguf -ngl 99 -c 16384 --host 0.0.0.0 --no-warmup --flash-attn on --chat-template-file /app/llama31_notool_template.jinja
+    cmd: /app/llama-server --port ${PORT} --model /models/Llama-3.1-8B-Instruct-UD-Q4_K_XL.gguf -ngl 99 -c 16384 --host 0.0.0.0 -fit off --no-warmup --flash-attn on --no-kv-offload --cache-type-k q4_0 --cache-type-v q4_0 --chat-template-file /app/llama31_notool_template.jinja
     ttl: 1800   # Unload after 30 minutes of inactivity (1800 seconds)
     swap: true  # CRITICAL: Unload other models when loading this one
     aliases:

@@ -14,7 +14,7 @@ models:
   # Evil/Uncensored text generation model (DarkIdol-Llama 3.1 8B)
   darkidol:
-    cmd: /app/llama-server --port ${PORT} --model /models/DarkIdol-Llama-3.1-8B-Instruct-1.3-Uncensored_Q4_K_M.gguf -ngl 99 -c 16384 --host 0.0.0.0 --no-warmup --flash-attn on
+    cmd: /app/llama-server --port ${PORT} --model /models/DarkIdol-Llama-3.1-8B-Instruct-1.3-Uncensored_Q4_K_M.gguf -ngl 99 -c 16384 --host 0.0.0.0 -fit off --no-warmup --flash-attn on --no-kv-offload --cache-type-k q4_0 --cache-type-v q4_0
     ttl: 1800   # Unload after 30 minutes of inactivity
     swap: true  # CRITICAL: Unload other models when loading this one
     aliases:

@@ -24,7 +24,7 @@ models:
   # Japanese language model (Llama 3.1 Swallow - Japanese optimized)
   swallow:
-    cmd: /app/llama-server --port ${PORT} --model /models/Llama-3.1-Swallow-8B-Instruct-v0.5-Q4_K_M.gguf -ngl 99 -c 16384 --host 0.0.0.0 --no-warmup --flash-attn on
+    cmd: /app/llama-server --port ${PORT} --model /models/Llama-3.1-Swallow-8B-Instruct-v0.5-Q4_K_M.gguf -ngl 99 -c 16384 --host 0.0.0.0 -fit off --no-warmup --flash-attn on --no-kv-offload --cache-type-k q4_0 --cache-type-v q4_0
     ttl: 1800   # Unload after 30 minutes of inactivity
     swap: true  # CRITICAL: Unload other models when loading this one
     aliases:

@@ -34,7 +34,7 @@ models:
   # Vision/Multimodal model (MiniCPM-V-4.5 - supports images, video, and GIFs)
   vision:
-    cmd: /app/llama-server --port ${PORT} --model /models/MiniCPM-V-4_5-Q3_K_S.gguf --mmproj /models/MiniCPM-V-4_5-mmproj-f16.gguf -ngl 99 -c 4096 --host 0.0.0.0 --no-warmup --flash-attn on
+    cmd: /app/llama-server --port ${PORT} --model /models/MiniCPM-V-4_5-Q3_K_S.gguf --mmproj /models/MiniCPM-V-4_5-mmproj-f16.gguf -ngl 99 -c 4096 --host 0.0.0.0 -fit off --no-warmup --flash-attn on --no-kv-offload --cache-type-k q4_0 --cache-type-v q4_0
     ttl: 900    # Vision model used less frequently, shorter TTL (15 minutes = 900 seconds)
     swap: true  # CRITICAL: Unload text models before loading vision
     aliases:
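The commit message attributes the GTX 1660 OOM to the KV cache, and a rough back-of-the-envelope shows why `--cache-type-k q4_0 --cache-type-v q4_0` helps. The model-shape numbers (32 layers, 8 KV heads under GQA, head dim 128 for Llama 3.1 8B) and the q4_0 block layout (18 bytes per 32 values) are assumptions from public specs, not from this repo:

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   ctx: int, bytes_per_elem: float) -> int:
    # K and V caches each hold n_layers * n_kv_heads * head_dim values per token
    elems = 2 * n_layers * n_kv_heads * head_dim * ctx
    return int(elems * bytes_per_elem)

# Assumed Llama 3.1 8B shape: 32 layers, 8 KV heads (GQA), head_dim 128, ctx 16384
F16 = 2.0        # 2 bytes per value
Q4_0 = 18 / 32   # q4_0 blocks: 18 bytes per 32 values = 0.5625 bytes/value

f16_size = kv_cache_bytes(32, 8, 128, 16384, F16)   # 2048 MiB
q4_size = kv_cache_bytes(32, 8, 128, 16384, Q4_0)   # 576 MiB
print(f16_size // 2**20, "MiB f16 vs", q4_size // 2**20, "MiB q4_0")
```

Roughly 2 GiB of f16 KV cache on top of ~4.5 GB of Q4 weights does not fit a 6 GB card, which matches the observed segfault; quantizing the cache to ~576 MiB (and moving it off the GPU with `--no-kv-offload`) restores headroom, while the 16 GB RX 6800 keeps the full-speed GPU-resident cache.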