Compare commits

19 Commits

Author SHA1 Message Date
9eb081efb1 llama-swap: use pre-built images (:cuda, :rocm) with GPU-specific flags
- Drop custom Dockerfiles; docker-compose uses ghcr.io pre-built images
  which ship llama-swap + llama-server with no pinned versions (always latest)
- NVIDIA GTX 1660 (6GB): add -fit off --no-kv-offload --cache-type-k q4_0 --cache-type-v q4_0
  to fix the OOM segfault caused by llama.cpp b9014's new GPU-side KV-cache default
- AMD RX 6800 (16GB): flags unchanged; KV cache stays on GPU for max speed
- Both running llama-swap v211 + llama.cpp b9014 (2026-05-05)
2026-05-05 16:53:34 +03:00
4e28236b06 fix: preserve collapsible subsection state across polling re-renders
- Use stable section IDs (without Date.now()) so collapse state can be
  tracked across re-renders
- Snapshot collapsed state before innerHTML replacement, restore after
- Prevents the 10s polling from expanding all subsections every time
2026-05-02 16:17:26 +03:00
c5e49c73df fix: add cache-busting to prevent stale JS/CSS from breaking the UI
- Added ?v=20260502 query param to all <script src=...> and <link> tags
- Added Cache-Control: no-cache, no-store, must-revalidate to index route
- Added <meta> cache-control tags in HTML head for extra coverage
- This ensures the browser always fetches fresh HTML/JS/CSS after deploy,
  preventing the old loadLastPrompt() from running against new HTML
  (which would crash since #prompt-cat-info no longer exists)
2026-05-02 16:08:47 +03:00
393921e524 fix: add min-height to #prompt-display and placeholder text in clearPromptDisplay()
The empty #prompt-display div collapsed to 0 height, making it appear
'gone'. Added min-height: 3rem and a 'No prompt selected.' placeholder
that clearPromptDisplay() now sets via innerHTML.
2026-05-02 15:55:19 +03:00
2dd32d0ef1 fix: move <pre> outside #prompt-display to prevent innerHTML from destroying it
The renderPromptEntry() function sets innerHTML on #prompt-display, which
was wiping out the child <pre id="last-prompt"> element. This caused
copyPromptToClipboard() to fail silently and the display to appear empty.

Fix: keep <pre> as a hidden sibling outside #prompt-display, used only as
a text buffer for the copy function.
2026-05-02 15:45:54 +03:00
a980b90c0a fix: escape content in buildCollapsibleSection, avoid double-escaping response 2026-05-02 15:27:18 +03:00
6b922d84ae frontend: rewrite Last Prompt as Prompt History viewer
- status.js: replace loadLastPrompt() with loadPromptHistory() + helpers
  - fetch /prompts with optional source filter, populate dropdown
  - selectPromptEntry() renders metadata bar + collapsible subsections
  - parsePromptSections() splits full_prompt into System/Context/Conversation
  - buildCollapsibleSection() with toggle arrows (▼/▶)
  - copyPromptToClipboard() copies raw text
  - toggleMiddleTruncation() truncates response from middle
  - togglePromptHistoryCollapse() collapses entire section
  - legacy loadLastPrompt() delegates to loadPromptHistory()
- core.js: add promptInterval to polling (10s), visibility resume
  - update switchPromptSource() for 'all' filter + new button IDs
  - update initPromptSourceToggle() default to 'all'
  - declare promptInterval variable
2026-05-02 15:25:05 +03:00
f33e2afdf7 frontend: new Prompt History section HTML + CSS
- Replace single <pre> Last Prompt with rich Prompt History viewer
- Add source filter buttons (All/Cat/Fallback), history dropdown selector
- Add metadata bar, copy-to-clipboard button, middle-truncation toggle
- Add collapsible section CSS classes for expandable subsections
2026-05-02 15:19:10 +03:00
87de8f8b3a backend: replace LAST_FULL_PROMPT/LAST_CAT_INTERACTION with unified PROMPT_HISTORY deque
- globals.py: add collections.deque(maxlen=10) PROMPT_HISTORY with _prompt_id_counter
- globals.py: add legacy accessor functions _get_last_fallback_prompt() and _get_last_cat_interaction()
- bot.py: append to PROMPT_HISTORY instead of setting LAST_CAT_INTERACTION, remove 500-char truncation, add guild/channel/model fields
- image_handling.py: same pattern for Cat media responses
- llm.py: append fallback prompts to PROMPT_HISTORY with response filled after LLM reply
- routes/core.py: new GET /prompts and GET /prompts/{id} endpoints, legacy /prompt and /prompt/cat use accessor functions
2026-05-02 15:17:15 +03:00
2d0c80b7ef fix: prevent infinite dialogue loops + make Evil Miku actually engage
- Question override now decays after 6 turns: after turn 6, the LLM's own
  [CONTINUE] signal is respected even when questions are asked. This prevents
  infinite question-ping-pong where both personas keep asking questions.
- _parse_response now accepts turn_count parameter; generate_response_with_continuation
  and handle_dialogue_turn pass it through.
- Rewrote Evil Miku's conversation-mode overlay with explicit CRITICAL RULES:
  ANSWER questions, engage with what she says, ask questions too, don't just
  repeat dismissive one-liners. The old overlay said 'be playful-cruel' but
  didn't actually tell her to participate in the conversation.
2026-04-30 15:39:53 +03:00
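
A rough sketch of the decay logic this commit describes (only the turn_count parameter, the [CONTINUE] marker, and the 6-turn decay are stated in the commit; the [END] marker and the exact check are assumptions):

QUESTION_OVERRIDE_MAX_TURNS = 6  # from the commit: override decays after turn 6

def _parse_response_sketch(text: str, turn_count: int) -> tuple[str, bool]:
    """Hedged sketch, not the actual _parse_response: returns (cleaned_text, should_continue)."""
    should_continue = "[CONTINUE]" in text
    cleaned = text.replace("[CONTINUE]", "").replace("[END]", "").strip()
    # Early turns: an open question forces another turn so it gets answered.
    # Past the decay point, the LLM's own signal wins, which breaks the
    # question-ping-pong loop this commit targets.
    if "?" in cleaned and turn_count <= QUESTION_OVERRIDE_MAX_TURNS:
        should_continue = True
    return cleaned, should_continue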
17842f24d4 fix: remove broken personality snippet system — now redundant
The snippet loader used wrong file paths (/app/cat/data/ instead of persona/)
causing 'Loaded 0 personality snippets' for both personas. Since the previous
commit now injects full system prompts (get_miku_system_prompt_compact and
get_evil_system_prompt) into every argument exchange, the snippet system is
redundant — all lore/lyrics/personality are already provided by the system prompts.
2026-04-30 15:16:43 +03:00
4e064ad89b fix: import is_persona_dialogue_active from correct module
Was importing from utils.bipolar_mode instead of utils.persona_dialogue
2026-04-30 15:10:13 +03:00
97c7133fdc fix: both personas now use full system prompts in arguments and dialogues
Created get_miku_system_prompt() and get_miku_system_prompt_compact() in
context_manager.py — mirrors get_evil_system_prompt() so both personas have
equally rich prompts with lore, lyrics, mood integration, and personality.

Previously only Evil Miku had a proper system prompt function. Regular Miku's
arguments and dialogues used a bare-bones hardcoded prompt with no lore/lyrics
— making arguments feel flat compared to normal conversation.

Changes:
- context_manager.py: added get_miku_system_prompt() (full) and
  get_miku_system_prompt_compact() (lore + personality; lyrics omitted to save tokens)
- bipolar_mode.py: both argument prompt functions now accept system_prompt
  param; run_argument() builds miku_system and evil_system once and passes
  them to every exchange
- persona_dialogue.py: dialogue prompts now use get_miku_system_prompt_compact()
  instead of hardcoded stub, matching Evil Miku's full prompt approach
- Removed redundant hardcoded personality text from argument prompts since
  the system prompts now provide it
2026-04-30 15:07:55 +03:00
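
In outline, the pattern is roughly this (the prompt-builder names and the system_prompt parameter appear in the commit and in the bipolar_mode.py diff below; the wrapper function itself is an illustrative assumption):

def build_turn_prompts(evil_message: str, miku_message: str) -> tuple[str, str]:
    # Build each persona's full system prompt ONCE per argument...
    miku_system = get_miku_system_prompt_compact()
    evil_system = get_evil_system_prompt()
    # ...and thread it into every exchange via the new system_prompt param,
    # so arguments get the same lore/lyrics/mood depth as normal chat.
    return (
        get_miku_argument_prompt(evil_message, system_prompt=miku_system),
        get_evil_argument_prompt(miku_message, system_prompt=evil_system),
    )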
7d5881ebe7 fix: inject argument topic into EVERY exchange, not just the first message
The topic was only being injected into the initial breakthrough message via
get_argument_start_prompt(). After that, every subsequent exchange called
get_miku_argument_prompt() / get_evil_argument_prompt() which had no concept
of the topic — so both personas forgot what they were arguing about after the
first exchange and reverted to generic identity-crisis arguments.

Fix: added argument_topic parameter to both persona prompt functions and inject
it as a bold ARGUMENT THEME reminder in every single exchange. The topic block
explicitly tells the LLM to stay on-topic and not drift into generic territory.
2026-04-30 12:57:48 +03:00
e6c818f647 fix: merge context + topic into single field — one clear purpose
- Removed separate 'topic' field from BipolarTriggerRequest model
- Removed topic parameter from force_trigger_argument, force_trigger_argument_from_message_id, and run_argument
- trigger_context now doubles as the argument theme: if provided by user, it becomes the topic;
  if blank, a random topic is selected from the rotation pool
- Web UI: replaced two confusing fields (Context + Topic) with one clear field labeled
  'What should they argue about? (optional)' with a plain-English description
- JS: removed topic field reference, context.trim() ensures empty strings aren't sent
2026-04-30 12:30:49 +03:00
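
The resulting selection logic is roughly the following (the helper name is hypothetical; pick_argument_topic is the weighted rotation shown in the bipolar_mode.py diff below):

def resolve_argument_theme(trigger_context: str, channel_id: int) -> str:
    # A non-blank context field IS the argument theme, verbatim;
    # a blank field falls back to the weighted rotation pool.
    if trigger_context and trigger_context.strip():
        return trigger_context.strip()
    return pick_argument_topic(channel_id)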
846557fa96 feat: add optional custom argument topic override via Web UI
- Added optional 'topic' field to BipolarTriggerRequest model
- Added topic parameter to force_trigger_argument and force_trigger_argument_from_message_id
- Updated run_argument to accept optional custom topic (None=random, ''=no topic, str=custom)
- Added topic input field to Web UI trigger-argument section
- Updated JS to send topic in API request body
- Custom topics bypass the random rotation system, allowing manual theme control
2026-04-30 12:07:28 +03:00
98fca53066 Phase 3: Polish & immersion — mood-aware arguments, personality snippets, parting shots
- Added mood-specific argument behavioral guidance: 9 moods for Evil Miku, 9 for Miku
  Each mood changes argument style (e.g. cunning=chess moves, manic=chaotic, bubbly=playful deflections)
- Added personality snippet injection from Cat plugin lore/lyrics data files
  40% chance per prompt to include a random lore/lyric snippet for unique material
- Added parting shot feature: 20% chance the LOSER gets a bitter final line before the winner's victory
  Adds dramatic tension and prevents clean-win monotony
- Mood guidance and personality flavor injected into both argument prompts
2026-04-30 11:50:37 +03:00
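
As a hedged sketch of the two dice rolls described above (the constants come from the commit text; the function names and prompt wiring are assumptions):

import random

SNIPPET_CHANCE = 0.40        # 40% chance to inject a lore/lyric snippet per prompt
PARTING_SHOT_CHANCE = 0.20   # 20% chance the loser gets one last bitter line

def maybe_inject_snippet(prompt: str, snippets: list[str]) -> str:
    # Occasionally seed the argument prompt with fresh persona material.
    if snippets and random.random() < SNIPPET_CHANCE:
        prompt += f"\n\nPERSONALITY FLAVOR: {random.choice(snippets)}"
    return prompt

def loser_gets_parting_shot() -> bool:
    # Rolled once after the arbiter's verdict, before the winner's victory line.
    return random.random() < PARTING_SHOT_CHANCE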
a52b36135f Phase 2: Fix triggers & dialogue — per-channel cooldowns, tension rebalance, user-message triggers
- Changed cooldown from global (ALL channels blocked) to per-channel dict keyed by channel_id
- Added conversation streak tracker: 3 near-miss interjection scores in a row force a dialogue trigger
- Expanded topic relevance keywords: added enthusiasm/vulnerability for Evil Miku, provocation/dismissal for Miku
- Lowered keyword divisor from /3.0 to /2.0 for higher base trigger scores
- Tension rebalance: added natural decay (-0.03/turn), reduced escalation weight (0.08->0.05), increased de-escalation weight (0.06->0.08)
- Reduced momentum multiplier (1.2->1.1) and intensity multiplier (1.3->1.2)
- Added spike cooldown: if last turn tension delta >0.15, next delta halved (prevents runaway spirals)
- Added user-message interjection check in bot.py on_message() (was only checking bot's own messages)
- Added random 15% argument trigger roll on user messages in normal message flow (was only from autonomous.py)
2026-04-30 11:45:13 +03:00
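
Putting the listed constants together, the per-turn tension update looks roughly like this (only the constants are from the commit; the combination formula and signature are assumptions):

TENSION_DECAY = 0.03      # natural decay per turn
ESCALATION_W = 0.05       # was 0.08
DEESCALATION_W = 0.08     # was 0.06
MOMENTUM_MULT = 1.1       # was 1.2
INTENSITY_MULT = 1.2      # was 1.3
SPIKE_THRESHOLD = 0.15    # last-turn deltas above this halve the next delta

def next_tension(tension: float, escalating: bool, deescalating: bool,
                 momentum: bool, intense: bool, last_delta: float) -> tuple[float, float]:
    delta = (ESCALATION_W if escalating else 0.0) - (DEESCALATION_W if deescalating else 0.0)
    if momentum:
        delta *= MOMENTUM_MULT
    if intense:
        delta *= INTENSITY_MULT
    if last_delta > SPIKE_THRESHOLD:
        delta *= 0.5  # spike cooldown: prevents runaway escalation spirals
    new_tension = min(1.0, max(0.0, tension - TENSION_DECAY + delta))
    return new_tension, delta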
7a4122fd02 Phase 1: Argument system overhaul — arbiter, memory, topics, stats
- Changed arbiter LLM from llama3.1 to darkidol (uncensored, unbiased)
- Rewrote arbiter criteria to judge debate skill equally
- Added argument history injection (last 6 exchanges) to prevent repetition
- Added dynamic topic rotation system (11 weighted topics) with per-channel history
- Added keyword-based argument stats tracking (wit/composure/impact) fed to arbiter
- Removed hardcoded suggestion lists from prompts
2026-04-30 11:37:33 +03:00
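
The history injection can be sketched as follows (the formatting is an assumption; the {"speaker", "message"} entry shape matches what get_argument_stats_summary() iterates over in the bipolar_mode.py diff below):

def build_argument_history(conversation_log: list[dict], limit: int = 6) -> str:
    # Only the last 6 exchanges are injected, keeping the prompt small while
    # still giving the LLM enough memory to avoid repeating itself.
    recent = conversation_log[-limit:]
    return "\n".join(f'{e["speaker"]}: "{e["message"]}"' for e in recent)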
19 changed files with 1239 additions and 363 deletions

View File

@@ -1,13 +0,0 @@
FROM ghcr.io/mostlygeek/llama-swap:cuda
USER root
# Download and install llama-server binary (CUDA version)
# Using the official pre-built binary from llama.cpp releases
ADD --chmod=755 https://github.com/ggml-org/llama.cpp/releases/download/b4183/llama-server-cuda /usr/local/bin/llama-server
# Verify it's executable
RUN llama-server --version || echo "llama-server installed successfully"
USER 1000:1000

View File

@@ -1,68 +0,0 @@
# Multi-stage build for llama-swap with ROCm support
# Now using official llama.cpp ROCm image (PR #18439 merged Dec 29, 2025)
# Stage 1: Build llama-swap UI
FROM node:22-alpine AS ui-builder
WORKDIR /build
# Install git
RUN apk add --no-cache git
# Clone llama-swap
RUN git clone https://github.com/mostlygeek/llama-swap.git
# Build UI (now in ui-svelte directory)
WORKDIR /build/llama-swap/ui-svelte
RUN npm install && npm run build
# Stage 2: Build llama-swap binary
FROM golang:1.23-alpine AS swap-builder
WORKDIR /build
# Install git
RUN apk add --no-cache git
# Copy llama-swap source with built UI
COPY --from=ui-builder /build/llama-swap /build/llama-swap
# Build llama-swap binary
WORKDIR /build/llama-swap
RUN GOTOOLCHAIN=auto go build -o /build/llama-swap-binary .
# Stage 3: Final runtime image using official llama.cpp ROCm image
FROM ghcr.io/ggml-org/llama.cpp:server-rocm
WORKDIR /app
# Copy llama-swap binary from builder
COPY --from=swap-builder /build/llama-swap-binary /app/llama-swap
# Make binaries executable
RUN chmod +x /app/llama-swap
# Add existing ubuntu user (UID 1000) to GPU access groups (using host GIDs)
# GID 187 = render group on host, GID 989 = video/kfd group on host
RUN groupadd -g 187 hostrender && \
    groupadd -g 989 hostvideo && \
    usermod -aG hostrender,hostvideo ubuntu && \
    chown -R ubuntu:ubuntu /app
# Set environment for ROCm (RX 6800 is gfx1030)
ENV HSA_OVERRIDE_GFX_VERSION=10.3.0
ENV ROCM_PATH=/opt/rocm
ENV HIP_VISIBLE_DEVICES=0
USER ubuntu
# Expose port
EXPOSE 8080
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=30s --retries=3 \
    CMD curl -f http://localhost:8080/health || exit 1
# Override the base image's ENTRYPOINT and run llama-swap
ENTRYPOINT []
CMD ["/app/llama-swap", "-config", "/app/config.yaml", "-listen", "0.0.0.0:8080"]

View File

@@ -203,6 +203,31 @@ async def on_message(message):
    if is_persona_dialogue_active(message.channel.id):
        return

    # Bipolar mode: check if the opposite persona should interject on user messages
    # AND roll for random argument trigger (both non-blocking background tasks)
    if not isinstance(message.channel, discord.DMChannel) and globals.BIPOLAR_MODE:
        try:
            from utils.persona_dialogue import check_for_interjection, is_persona_dialogue_active as dialogue_active
            from utils.bipolar_mode import maybe_trigger_argument, is_argument_in_progress as arg_in_progress
            from utils.task_tracker import create_tracked_task

            # Check interjection on user messages (opposite of current active persona)
            if not message.author.bot or message.webhook_id:
                current_persona = "evil" if globals.EVIL_MODE else "miku"
                create_tracked_task(
                    check_for_interjection(message, current_persona),
                    task_name="interjection_check_user",
                )

            # Roll random argument trigger chance (15%) on eligible messages
            if not arg_in_progress(message.channel.id) and not dialogue_active(message.channel.id):
                create_tracked_task(
                    maybe_trigger_argument(message.channel, globals.client, "Triggered from conversation flow"),
                    task_name="random_argument_trigger",
                )
        except Exception as e:
            logger.error(f"Error in bipolar trigger checks: {e}")

    if message.content.strip().lower() == "miku, rape this nigga balls" and message.reference:
        async with message.channel.typing():
            # Get replied-to user
@@ -335,15 +360,24 @@ async def on_message(message):
            if globals.EVIL_MODE:
                effective_mood = f"EVIL:{getattr(globals, 'EVIL_DM_MOOD', 'evil_neutral')}"
            logger.info(f"🐱 Cat response for {author_name} (mood: {effective_mood})")

            # Track Cat interaction for Web UI Last Prompt view
            # Track Cat interaction in unified prompt history
            import datetime
            globals.LAST_CAT_INTERACTION = {
            globals._prompt_id_counter += 1
            guild_name = message.guild.name if message.guild else "DM"
            channel_name = message.channel.name if message.guild else "DM"
            globals.PROMPT_HISTORY.append({
                "id": globals._prompt_id_counter,
                "source": "cat",
                "full_prompt": cat_full_prompt,
                "response": response[:500] if response else "",
                "response": response if response else "",
                "user": author_name,
                "mood": effective_mood,
                "guild": guild_name,
                "channel": channel_name,
                "timestamp": datetime.datetime.now().isoformat(),
            }
                "model": "Cat LLM",
                "response_type": response_type,
            })
        except Exception as e:
            logger.warning(f"🐱 Cat pipeline error, falling back to query_llama: {e}")
            response = None

View File

@@ -1,6 +1,7 @@
# globals.py
import os
import discord
from collections import deque
from apscheduler.schedulers.asyncio import AsyncIOScheduler
scheduler = AsyncIOScheduler()
@@ -77,16 +78,25 @@ MIKU_NORMAL_AVATAR_URL = None # Cached CDN URL of the regular Miku pfp (valid e
BOT_USER = None
LAST_FULL_PROMPT = ""
# Unified prompt history (replaces LAST_FULL_PROMPT and LAST_CAT_INTERACTION)
# Each entry: {id, source, full_prompt, response, user, mood, guild, channel,
# timestamp, model, response_type}
PROMPT_HISTORY = deque(maxlen=10)
_prompt_id_counter = 0
# Cheshire Cat last interaction tracking (for Web UI Last Prompt toggle)
LAST_CAT_INTERACTION = {
    "full_prompt": "",
    "response": "",
    "user": "",
    "mood": "",
    "timestamp": "",
}

# Legacy accessors for backward compatibility (routes, CLI, etc.)
# These are computed properties that read from PROMPT_HISTORY

def _get_last_fallback_prompt():
    for entry in reversed(PROMPT_HISTORY):
        if entry.get("source") == "fallback":
            return entry.get("full_prompt", "")
    return ""

def _get_last_cat_interaction():
    for entry in reversed(PROMPT_HISTORY):
        if entry.get("source") == "cat":
            return entry
    return {"full_prompt": "", "response": "", "user": "", "mood": "", "timestamp": ""}
# Persona Dialogue System (conversations between Miku and Evil Miku)
LAST_PERSONA_DIALOGUE_TIME = 0 # Timestamp of last dialogue for cooldown

View File

@@ -148,7 +148,7 @@ def trigger_argument(data: BipolarTriggerRequest):
    if not channel:
        return JSONResponse(status_code=404, content={"status": "error", "message": f"Channel {channel_id} not found"})

    # Trigger the argument
    # Trigger the argument — context doubles as the argument theme
    globals.client.loop.create_task(force_trigger_argument(channel, globals.client, data.context))

    return {

View File

@@ -14,7 +14,8 @@ router = APIRouter()
@router.get("/")
def read_index():
return FileResponse("static/index.html")
headers = {"Cache-Control": "no-cache, no-store, must-revalidate"}
return FileResponse("static/index.html", headers=headers)
@router.get("/logs")
@@ -31,18 +32,45 @@ def get_logs():
@router.get("/prompt")
def get_last_prompt():
return {"prompt": globals.LAST_FULL_PROMPT or "No prompt has been issued yet."}
"""Legacy endpoint: returns the most recent fallback prompt (backward compat)."""
prompt_text = globals._get_last_fallback_prompt()
return {"prompt": prompt_text or "No prompt has been issued yet."}
@router.get("/prompt/cat")
def get_last_cat_prompt():
"""Get the last Cheshire Cat interaction (full prompt + response) for Web UI."""
interaction = globals.LAST_CAT_INTERACTION
"""Legacy endpoint: returns the most recent Cat interaction (backward compat)."""
interaction = globals._get_last_cat_interaction()
if not interaction.get("full_prompt"):
return {"full_prompt": "No Cheshire Cat interaction has occurred yet.", "response": "", "user": "", "mood": "", "timestamp": ""}
return {"full_prompt": "No Cheshire Cat interaction has occurred yet.",
"response": "", "user": "", "mood": "", "timestamp": ""}
return interaction
@router.get("/prompts")
def get_prompt_history(source: str = None):
"""
Return the unified prompt history.
Optional query param ?source=cat or ?source=fallback to filter.
"""
history = list(globals.PROMPT_HISTORY)
if source and source in ("cat", "fallback"):
history = [e for e in history if e.get("source") == source]
return {"history": history}
@router.get("/prompts/{prompt_id}")
def get_prompt_by_id(prompt_id: int):
"""Return a single prompt history entry by ID."""
for entry in globals.PROMPT_HISTORY:
if entry.get("id") == prompt_id:
return entry
return JSONResponse(
status_code=404,
content={"status": "error", "message": f"Prompt #{prompt_id} not found"}
)
@router.get("/status")
def status():
# Get per-server mood summary

View File

@@ -45,7 +45,7 @@ class LogFilterUpdateRequest(BaseModel):
class BipolarTriggerRequest(BaseModel):
    channel_id: str  # String to handle large Discord IDs from JS
    message_id: str = None  # Optional: starting message ID (string)
    context: str = ""
    context: str = ""  # Optional: argument theme/context — tells them what to argue about

class ManualCropRequest(BaseModel):

View File

@@ -441,6 +441,51 @@ h1, h3 {
    color: #ddd;
}

/* Prompt History Section */
#prompt-history-section.collapsed #prompt-history-body {
    display: none;
}

#prompt-history-toggle {
    user-select: none;
    transition: color 0.2s;
}

#prompt-history-toggle:hover {
    color: #4CAF50;
}

#prompt-metadata span {
    white-space: nowrap;
}

#prompt-metadata .prompt-meta-label {
    color: #666;
}

#prompt-metadata .prompt-meta-value {
    color: #ccc;
}

#prompt-display pre {
    margin: 0;
}

.prompt-subsection-header {
    cursor: pointer;
    user-select: none;
    padding: 0.3rem 0.5rem;
    border-radius: 4px;
    background: #2a2a2a;
    margin: 0.5rem 0 0.25rem 0;
    font-size: 0.82rem;
    color: #aaa;
    transition: background 0.15s;
}

.prompt-subsection-header:hover {
    background: #333;
    color: #ddd;
}

.prompt-subsection-body.collapsed {
    display: none;
}

#prompt-truncate-toggle {
    accent-color: #4CAF50;
}

/* Mood Activities Editor */
.act-mood-row {
    margin-bottom: 0.5rem;

View File

@@ -3,10 +3,13 @@
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="Cache-Control" content="no-cache, no-store, must-revalidate">
<meta http-equiv="Pragma" content="no-cache">
<meta http-equiv="Expires" content="0">
<title>Miku Control Panel</title>
<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/cropperjs/1.6.2/cropper.min.css">
<script src="https://cdnjs.cloudflare.com/ajax/libs/cropperjs/1.6.2/cropper.min.js"></script>
<link rel="stylesheet" href="/static/css/style.css">
<link rel="stylesheet" href="/static/css/style.css?v=20260502">
</head>
<body>
@@ -234,8 +237,11 @@
</div>
<div style="margin-bottom: 1rem;">
<label for="bipolar-context">Argument Context (optional):</label>
<input type="text" id="bipolar-context" placeholder="e.g., They're fighting about who's the real Miku..." style="width: 100%; margin-top: 0.3rem;">
<label for="bipolar-context">What should they argue about? (optional):</label>
<input type="text" id="bipolar-context" placeholder="e.g., Who's the real Miku? Whether kindness is weakness. A petty grudge..." style="width: 100%; margin-top: 0.3rem;">
<div style="font-size: 0.75rem; color: #777; margin-top: 0.2rem;">
Leave blank for a random topic. Write anything to set the argument's theme.
</div>
</div>
<button onclick="triggerBipolarArgument()" style="background: #9932CC; color: #fff; border: none; padding: 0.5rem 1rem; border-radius: 4px; cursor: pointer;">
@@ -540,23 +546,53 @@
</div>
</div>
<div class="section">
<h3>Last Prompt</h3>
<div style="margin-bottom: 0.75rem; display: flex; align-items: center; gap: 0.75rem;">
<label style="font-size: 0.9rem; color: #aaa;">Source:</label>
<div style="display: inline-flex; border-radius: 6px; overflow: hidden; border: 1px solid #444;">
<button id="prompt-src-cat" class="prompt-source-btn active" onclick="switchPromptSource('cat')"
style="padding: 0.4rem 1rem; border: none; cursor: pointer; font-size: 0.85rem; transition: all 0.2s;">
🐱 Cheshire Cat
</button>
<button id="prompt-src-fallback" class="prompt-source-btn" onclick="switchPromptSource('fallback')"
style="padding: 0.4rem 1rem; border: none; cursor: pointer; font-size: 0.85rem; transition: all 0.2s;">
🤖 Bot Fallback
</button>
</div>
<div class="section" id="prompt-history-section">
<div class="prompt-history-header" style="display: flex; align-items: center; justify-content: space-between; margin-bottom: 0.5rem;">
<h3 style="margin: 0; cursor: pointer;" onclick="togglePromptHistoryCollapse()" id="prompt-history-toggle">
▼ Prompt History
</h3>
<button onclick="loadPromptHistory()" title="Refresh" style="background: none; border: 1px solid #444; color: #aaa; cursor: pointer; padding: 0.2rem 0.5rem; border-radius: 4px; font-size: 0.85rem;">🔄</button>
</div>
<div id="prompt-history-body">
<!-- Source filter + history selector row -->
<div style="margin-bottom: 0.75rem; display: flex; align-items: center; gap: 0.75rem; flex-wrap: wrap;">
<label style="font-size: 0.9rem; color: #aaa;">Source:</label>
<div style="display: inline-flex; border-radius: 6px; overflow: hidden; border: 1px solid #444;">
<button id="prompt-src-all" class="prompt-source-btn active" onclick="switchPromptSource('all')"
style="padding: 0.4rem 0.8rem; border: none; cursor: pointer; font-size: 0.85rem; transition: all 0.2s;">
All
</button>
<button id="prompt-src-cat" class="prompt-source-btn" onclick="switchPromptSource('cat')"
style="padding: 0.4rem 0.8rem; border: none; cursor: pointer; font-size: 0.85rem; transition: all 0.2s;">
🐱 Cat
</button>
<button id="prompt-src-fallback" class="prompt-source-btn" onclick="switchPromptSource('fallback')"
style="padding: 0.4rem 0.8rem; border: none; cursor: pointer; font-size: 0.85rem; transition: all 0.2s;">
🤖 Fallback
</button>
</div>
<select id="prompt-history-select" onchange="selectPromptEntry(this.value)" style="background: #2a2a2a; color: #ddd; border: 1px solid #444; padding: 0.35rem 0.5rem; border-radius: 4px; font-size: 0.85rem; min-width: 280px;">
<option value="">-- No prompts yet --</option>
</select>
</div>
<!-- Metadata bar -->
<div id="prompt-metadata" style="margin-bottom: 0.5rem; font-size: 0.82rem; color: #888; display: flex; flex-wrap: wrap; gap: 0.3rem 1rem;"></div>
<!-- Toolbar: copy + truncate toggle -->
<div style="margin-bottom: 0.5rem; display: flex; align-items: center; gap: 1rem;">
<button onclick="copyPromptToClipboard()" title="Copy full prompt to clipboard" style="background: #333; border: 1px solid #555; color: #aaa; cursor: pointer; padding: 0.25rem 0.6rem; border-radius: 4px; font-size: 0.8rem;">📋 Copy</button>
<label style="font-size: 0.82rem; color: #aaa; cursor: pointer; display: flex; align-items: center; gap: 0.3rem;">
<input type="checkbox" id="prompt-truncate-toggle" onchange="toggleMiddleTruncation()">
Truncate from middle
</label>
</div>
<!-- Prompt display subsections -->
<div id="prompt-display" style="max-height: 60vh; overflow-y: auto; min-height: 3rem;"></div>
<!-- Hidden buffer for copy-to-clipboard raw text -->
<pre id="last-prompt" style="display: none;"></pre>
</div>
<div id="prompt-cat-info" style="margin-bottom: 0.5rem; font-size: 0.85rem; color: #aaa;"></div>
<pre id="last-prompt" style="white-space: pre-wrap; word-break: break-word;"></pre>
</div>
</div>
@@ -1336,15 +1372,15 @@
</div>
</div>
<script src="/static/js/core.js"></script>
<script src="/static/js/servers.js"></script>
<script src="/static/js/modes.js"></script>
<script src="/static/js/actions.js"></script>
<script src="/static/js/image-gen.js"></script>
<script src="/static/js/status.js"></script>
<script src="/static/js/dm.js"></script>
<script src="/static/js/chat.js"></script>
<script src="/static/js/memories.js"></script>
<script src="/static/js/profile.js"></script>
<script src="/static/js/core.js?v=20260502"></script>
<script src="/static/js/servers.js?v=20260502"></script>
<script src="/static/js/modes.js?v=20260502"></script>
<script src="/static/js/actions.js?v=20260502"></script>
<script src="/static/js/image-gen.js?v=20260502"></script>
<script src="/static/js/status.js?v=20260502"></script>
<script src="/static/js/dm.js?v=20260502"></script>
<script src="/static/js/chat.js?v=20260502"></script>
<script src="/static/js/memories.js?v=20260502"></script>
<script src="/static/js/profile.js?v=20260502"></script>
</body>
</html>

View File

@@ -29,6 +29,7 @@ let notificationTimer = null;
let statusInterval = null;
let logsInterval = null;
let argsInterval = null;
let promptInterval = null;
// Mood emoji mapping
const MOOD_EMOJIS = {
@@ -211,12 +212,14 @@ function startPolling() {
    if (!statusInterval) statusInterval = setInterval(loadStatus, 10000);
    if (!logsInterval) logsInterval = setInterval(loadLogs, 5000);
    if (!argsInterval) argsInterval = setInterval(loadActiveArguments, 5000);
    if (!promptInterval) promptInterval = setInterval(loadPromptHistory, 10000);
}

function stopPolling() {
    clearInterval(statusInterval); statusInterval = null;
    clearInterval(logsInterval); logsInterval = null;
    clearInterval(argsInterval); argsInterval = null;
    clearInterval(promptInterval); promptInterval = null;
}
// ============================================================================
@@ -248,7 +251,7 @@ function initVisibilityPolling() {
        stopPolling();
        console.log('⏸ Tab hidden — polling paused');
    } else {
        loadStatus(); loadLogs(); loadActiveArguments();
        loadStatus(); loadLogs(); loadActiveArguments(); loadPromptHistory();
        startPolling();
        console.log('▶️ Tab visible — polling resumed');
    }
@@ -296,9 +299,11 @@ function initModalAccessibility() {
}
function initPromptSourceToggle() {
    const saved = localStorage.getItem('miku-prompt-source') || 'cat';
    const saved = localStorage.getItem('miku-prompt-source') || 'all';
    document.querySelectorAll('.prompt-source-btn').forEach(btn => btn.classList.remove('active'));
    document.getElementById(`prompt-src-${saved}`).classList.add('active');
    const btnId = saved === 'all' ? 'prompt-src-all' : `prompt-src-${saved}`;
    const btn = document.getElementById(btnId);
    if (btn) btn.classList.add('active');
}
function initLogsScrollDetection() {
@@ -360,8 +365,10 @@ async function loadLogs() {
function switchPromptSource(source) {
    localStorage.setItem('miku-prompt-source', source);
    document.querySelectorAll('.prompt-source-btn').forEach(btn => btn.classList.remove('active'));
    document.getElementById(`prompt-src-${source}`).classList.add('active');
    loadLastPrompt();
    const btnId = source === 'all' ? 'prompt-src-all' : `prompt-src-${source}`;
    const btn = document.getElementById(btnId);
    if (btn) btn.classList.add('active');
    loadPromptHistory();
}
// ============================================================================

View File

@@ -248,7 +248,7 @@ async function triggerPersonaDialogue() {
async function triggerBipolarArgument() {
    const channelIdInput = document.getElementById('bipolar-channel-id').value.trim();
    const messageIdInput = document.getElementById('bipolar-message-id').value.trim();
    const context = document.getElementById('bipolar-context').value;
    const context = document.getElementById('bipolar-context').value.trim();
    const statusDiv = document.getElementById('bipolar-status');

    if (!channelIdInput) {

View File

@@ -57,33 +57,271 @@ async function loadStatus() {
    }
}

// ===== Last Prompt =====
// ===== Prompt History =====

async function loadLastPrompt() {
    const source = localStorage.getItem('miku-prompt-source') || 'cat';
    const promptEl = document.getElementById('last-prompt');
    const infoEl = document.getElementById('prompt-cat-info');
let _promptHistoryCache = [];   // cached history entries from last fetch
let _selectedPromptId = null;   // currently selected entry ID
let _middleTruncation = false;  // whether middle-truncation is active

async function loadPromptHistory() {
    const source = localStorage.getItem('miku-prompt-source') || 'all';
    const selectEl = document.getElementById('prompt-history-select');
    try {
        if (source === 'cat') {
            const result = await apiCall('/prompt/cat');
            if (result.timestamp) {
                infoEl.innerHTML = `<strong>User:</strong> ${escapeHtml(result.user || '?')} &nbsp;|&nbsp; <strong>Mood:</strong> ${escapeHtml(result.mood || '?')} &nbsp;|&nbsp; <strong>Time:</strong> ${new Date(result.timestamp).toLocaleString()}`;
                promptEl.textContent = result.full_prompt + `\n\n${'═'.repeat(60)}\n[Cat Response]\n${result.response}`;
            } else {
                infoEl.textContent = '';
                promptEl.textContent = result.full_prompt || 'No Cheshire Cat interaction yet.';
            }
        const url = source === 'all' ? '/prompts' : `/prompts?source=${source}`;
        const result = await apiCall(url);
        _promptHistoryCache = result.history || [];

        // Populate dropdown
        const currentValue = selectEl.value;
        selectEl.innerHTML = '';
        if (_promptHistoryCache.length === 0) {
            selectEl.innerHTML = '<option value="">-- No prompts yet --</option>';
        } else {
            infoEl.textContent = '';
            const result = await apiCall('/prompt');
            promptEl.textContent = result.prompt;
            _promptHistoryCache.forEach(entry => {
                const ts = entry.timestamp ? new Date(entry.timestamp).toLocaleTimeString() : '?';
                const srcLabel = entry.source === 'cat' ? '🐱' : '🤖';
                const user = entry.user || '?';
                const option = document.createElement('option');
                option.value = entry.id;
                option.textContent = `${srcLabel} #${entry.id} | ${user} | ${ts}`;
                selectEl.appendChild(option);
            });
        }

        // Restore or auto-select the latest entry
        if (_selectedPromptId && _promptHistoryCache.some(e => e.id === _selectedPromptId)) {
            selectEl.value = _selectedPromptId;
        } else if (_promptHistoryCache.length > 0) {
            selectEl.value = _promptHistoryCache[0].id;
        }
        if (selectEl.value) {
            await selectPromptEntry(selectEl.value);
        } else {
            clearPromptDisplay();
        }
    } catch (error) {
        console.error('Failed to load last prompt:', error);
        console.error('Failed to load prompt history:', error);
    }
}
async function selectPromptEntry(promptId) {
    if (!promptId) {
        clearPromptDisplay();
        return;
    }
    _selectedPromptId = parseInt(promptId);

    // Try cache first
    let entry = _promptHistoryCache.find(e => e.id === _selectedPromptId);

    // Fall back to API call if not in cache
    if (!entry) {
        try {
            entry = await apiCall(`/prompts/${_selectedPromptId}`);
        } catch (error) {
            console.error('Failed to load prompt entry:', error);
            clearPromptDisplay();
            return;
        }
    }
    if (!entry) {
        clearPromptDisplay();
        return;
    }
    renderPromptEntry(entry);
}

function clearPromptDisplay() {
    document.getElementById('prompt-metadata').innerHTML = '';
    document.getElementById('prompt-display').innerHTML = '<pre style="white-space: pre-wrap; word-break: break-word; background: #1a1a1a; padding: 0.75rem; border-radius: 4px; font-size: 0.8rem; line-height: 1.4; margin: 0; color: #666;">No prompt selected.</pre>';
    document.getElementById('last-prompt').textContent = '';
}
function renderPromptEntry(entry) {
    // Metadata bar
    const metaEl = document.getElementById('prompt-metadata');
    const ts = entry.timestamp ? new Date(entry.timestamp).toLocaleString() : '?';
    const sourceIcon = entry.source === 'cat' ? '🐱 Cat' : '🤖 Fallback';
    metaEl.innerHTML = `
        <span><span class="prompt-meta-label">#</span><span class="prompt-meta-value">${entry.id}</span></span>
        <span><span class="prompt-meta-label">Source:</span> <span class="prompt-meta-value">${sourceIcon}</span></span>
        <span><span class="prompt-meta-label">User:</span> <span class="prompt-meta-value">${escapeHtml(entry.user || '?')}</span></span>
        <span><span class="prompt-meta-label">Mood:</span> <span class="prompt-meta-value">${escapeHtml(entry.mood || '?')}</span></span>
        <span><span class="prompt-meta-label">Guild:</span> <span class="prompt-meta-value">${escapeHtml(entry.guild || '?')}</span></span>
        <span><span class="prompt-meta-label">Channel:</span> <span class="prompt-meta-value">${escapeHtml(entry.channel || '?')}</span></span>
        <span><span class="prompt-meta-label">Model:</span> <span class="prompt-meta-value">${escapeHtml(entry.model || '?')}</span></span>
        <span><span class="prompt-meta-label">Type:</span> <span class="prompt-meta-value">${escapeHtml(entry.response_type || '?')}</span></span>
        <span><span class="prompt-meta-label">Time:</span> <span class="prompt-meta-value">${ts}</span></span>
    `;

    // Parse full_prompt into sections
    const sections = parsePromptSections(entry.full_prompt || '');

    // Snapshot which subsections are currently collapsed (before re-render)
    const sectionIds = ['system', 'context', 'conversation', 'response'];
    const collapsedState = {};
    sectionIds.forEach(id => {
        const el = document.getElementById(`prompt-section-${id}`);
        collapsedState[id] = el && el.classList.contains('collapsed');
    });

    // Build display HTML with collapsible subsections
    let displayHtml = '';
    if (sections.system) {
        displayHtml += buildCollapsibleSection('System Prompt', sections.system, 'system');
    }
    if (sections.context) {
        displayHtml += buildCollapsibleSection('Context (Memories & Tools)', sections.context, 'context');
    }
    if (sections.conversation) {
        displayHtml += buildCollapsibleSection('Conversation', sections.conversation, 'conversation');
    }
    if (!sections.system && !sections.context && !sections.conversation) {
        // Fallback: show raw full_prompt
        displayHtml += `<pre style="white-space: pre-wrap; word-break: break-word; margin: 0;">${escapeHtml(entry.full_prompt || '')}</pre>`;
    }

    // Response section
    if (entry.response) {
        let responseText = entry.response;
        if (_middleTruncation && responseText.length > 400) {
            responseText = responseText.substring(0, 200) + '\n\n... [truncated middle] ...\n\n' + responseText.substring(responseText.length - 200);
        }
        displayHtml += buildCollapsibleSection('Response', responseText, 'response');
    }

    // Render into the prompt-display div (using innerHTML for collapsible structure)
    const displayEl = document.getElementById('prompt-display');
    displayEl.innerHTML = displayHtml;

    // Restore collapsed state from snapshot
    sectionIds.forEach(id => {
        const el = document.getElementById(`prompt-section-${id}`);
        if (el && collapsedState[id]) {
            el.classList.add('collapsed');
            const header = el.previousElementSibling;
            if (header) header.innerHTML = header.innerHTML.replace('▼', '▶');
        }
    });

    // Also set the raw text into the <pre> for copy functionality
    let rawText = entry.full_prompt || '';
    if (entry.response) {
        rawText += `\n\n${'═'.repeat(60)}\n[Response]\n${entry.response}`;
    }
    document.getElementById('last-prompt').textContent = rawText;
}
function parsePromptSections(fullPrompt) {
    const sections = { system: null, context: null, conversation: null };
    if (!fullPrompt) return sections;

    // Try to split on known section markers
    const contextMatch = fullPrompt.match(/# Context\s*\n([\s\S]*?)(?=\n# Conversation|\nHuman:|\n$)/);
    const convMatch = fullPrompt.match(/# Conversation until now:\s*\n([\s\S]*)/);
    if (contextMatch) {
        // Everything before # Context is the system prompt
        const contextIdx = fullPrompt.indexOf('# Context');
        if (contextIdx > 0) {
            sections.system = fullPrompt.substring(0, contextIdx).trim();
        }
        sections.context = contextMatch[1].trim();
    }
    if (convMatch) {
        sections.conversation = convMatch[1].trim();
    } else {
        // Try alternative: "Human:" at the end
        const humanMatch = fullPrompt.match(/\nHuman:([\s\S]*)/);
        if (humanMatch && fullPrompt.indexOf('Human:') > fullPrompt.indexOf('# Context')) {
            sections.conversation = 'Human:' + humanMatch[1].trim();
        }
    }

    // If no # Context marker, try "System:" prefix (fallback prompts)
    if (!sections.system && !sections.context) {
        const sysMatch = fullPrompt.match(/^System:\s*([\s\S]*?)(?=\nMessages:)/);
        const msgMatch = fullPrompt.match(/Messages:\s*([\s\S]*)/);
        if (sysMatch) {
            sections.system = sysMatch[1].trim();
        }
        if (msgMatch) {
            sections.conversation = msgMatch[1].trim();
        }
    }
    return sections;
}
function buildCollapsibleSection(title, content, sectionId) {
    const id = `prompt-section-${sectionId}`;
    return `
    <div class="prompt-subsection-header" onclick="togglePromptSubsection('${id}')">
        ▼ ${escapeHtml(title)}
    </div>
    <div class="prompt-subsection-body" id="${id}">
        <pre style="white-space: pre-wrap; word-break: break-word; background: #1a1a1a; padding: 0.5rem; border-radius: 4px; font-size: 0.8rem; line-height: 1.4; margin: 0.25rem 0;">${escapeHtml(content)}</pre>
    </div>`;
}

function togglePromptSubsection(id) {
    const body = document.getElementById(id);
    if (!body) return;
    const header = body.previousElementSibling;
    if (body.classList.contains('collapsed')) {
        body.classList.remove('collapsed');
        if (header) header.innerHTML = header.innerHTML.replace('▶', '▼');
    } else {
        body.classList.add('collapsed');
        if (header) header.innerHTML = header.innerHTML.replace('▼', '▶');
    }
}
function togglePromptHistoryCollapse() {
    const section = document.getElementById('prompt-history-section');
    const toggle = document.getElementById('prompt-history-toggle');
    if (section.classList.contains('collapsed')) {
        section.classList.remove('collapsed');
        toggle.textContent = '▼ Prompt History';
    } else {
        section.classList.add('collapsed');
        toggle.textContent = '▶ Prompt History';
    }
}

function copyPromptToClipboard() {
    const rawText = document.getElementById('last-prompt').textContent;
    if (!rawText) return;
    navigator.clipboard.writeText(rawText).then(() => {
        showNotification('Prompt copied to clipboard', 'success');
    }).catch(err => {
        console.error('Failed to copy:', err);
        showNotification('Failed to copy', 'error');
    });
}

function toggleMiddleTruncation() {
    _middleTruncation = document.getElementById('prompt-truncate-toggle').checked;
    // Re-render current entry
    if (_selectedPromptId) {
        selectPromptEntry(_selectedPromptId);
    }
}

// Legacy compatibility — called from core.js on page load / tab switch
// Redirects to the new loadPromptHistory()
async function loadLastPrompt() {
    await loadPromptHistory();
}
// ===== Autonomous Stats =====
async function loadAutonomousStats() {

View File

@@ -23,12 +23,33 @@ logger = get_logger('persona')
BIPOLAR_STATE_FILE = "memory/bipolar_mode_state.json"
BIPOLAR_WEBHOOKS_FILE = "memory/bipolar_webhooks.json"
BIPOLAR_SCOREBOARD_FILE = "memory/bipolar_scoreboard.json"
ARGUMENT_TOPICS_FILE = "memory/argument_topics.json"
# Argument settings
MIN_EXCHANGES = 4 # Minimum number of back-and-forth exchanges before ending can occur
ARGUMENT_TRIGGER_CHANCE = 0.15 # 15% chance for the other Miku to break through
DELAY_BETWEEN_MESSAGES = (2.0, 5.0) # Random delay between argument messages (seconds)
# Argument topic rotation — each topic gives the argument a different framing
# Topics are weighted: higher weight = more likely to be selected
ARGUMENT_TOPICS = [
    # (topic_name, weight, description for prompt injection)
    ("identity_crisis", 3, "Who is the REAL Miku? Authenticity vs. the shadow self"),
    ("power_dynamic", 3, "Who holds the power? Dominance, submission, and control"),
    ("philosophical", 2, "Is kindness strength or weakness? Does darkness serve a purpose?"),
    ("petty_grievance", 3, "Something small and petty that escalated — a specific annoyance, habit, or incident"),
    ("existential_dread", 1, "What's the point of any of it? Nihilism vs. hope, meaning vs. emptiness"),
    ("audience_appeal", 3, "Who do the fans/chatters ACTUALLY prefer? Popularity contest with receipts"),
    ("personal_attack", 3, "Deeply personal — targeting specific insecurities, memories, or fears"),
    ("moral_superiority", 2, "Who has the moral high ground? Righteousness vs. ruthless pragmatism"),
    ("jealousy", 2, "What does the other have that you secretly want? Envy, admiration poisoned by resentment"),
    ("grudge_match", 2, "Revisiting something the other did in the PAST — old wounds, past betrayals"),
    ("wild_card", 1, "Anything goes — the argument takes an unexpected, chaotic turn into unpredictable territory"),
]
# Per-channel topic history (max 5 stored to avoid repeats)
ARGUMENT_TOPIC_HISTORY_SIZE = 5
# Pause state for voice sessions
_bipolar_interactions_paused = False
@@ -222,9 +243,169 @@ Total Arguments: {total}"""
# ============================================================================
# BIPOLAR MODE TOGGLE
# ARGUMENT TOPIC ROTATION
# ============================================================================
def load_argument_topics_state() -> dict:
    """Load per-channel topic history to avoid repeating recent argument themes"""
    try:
        if not os.path.exists(ARGUMENT_TOPICS_FILE):
            return {}
        with open(ARGUMENT_TOPICS_FILE, "r", encoding="utf-8") as f:
            return json.load(f)
    except Exception as e:
        logger.error(f"Failed to load argument topics: {e}")
        return {}

def save_argument_topics_state(state: dict):
    """Save per-channel topic history"""
    try:
        os.makedirs(os.path.dirname(ARGUMENT_TOPICS_FILE), exist_ok=True)
        with open(ARGUMENT_TOPICS_FILE, "w", encoding="utf-8") as f:
            json.dump(state, f, indent=2)
    except Exception as e:
        logger.error(f"Failed to save argument topics: {e}")

def pick_argument_topic(channel_id: int) -> str:
    """Pick a fresh argument topic for a channel, avoiding recent repeats.

    Returns a topic description string to inject into the argument start prompt.
    """
    state = load_argument_topics_state()
    channel_key = str(channel_id)
    recent_topics = state.get(channel_key, [])

    # Build weighted pool, excluding recently used topics
    available = []
    for topic_name, weight, description in ARGUMENT_TOPICS:
        if topic_name not in recent_topics:
            available.extend([(topic_name, description)] * weight)

    # If all topics were recently used, reset and allow repeats
    if not available:
        logger.info(f"All topics recently used in channel {channel_id}, resetting history")
        available = []
        for topic_name, weight, description in ARGUMENT_TOPICS:
            available.extend([(topic_name, description)] * weight)
        recent_topics = []

    # Pick randomly from weighted pool
    chosen_name, chosen_description = random.choice(available)

    # Update history
    recent_topics.append(chosen_name)
    if len(recent_topics) > ARGUMENT_TOPIC_HISTORY_SIZE:
        recent_topics = recent_topics[-ARGUMENT_TOPIC_HISTORY_SIZE:]
    state[channel_key] = recent_topics
    save_argument_topics_state(state)

    logger.info(f"Selected argument topic for channel {channel_id}: '{chosen_name}' - {chosen_description[:60]}...")
    return chosen_description
# ============================================================================
# ARGUMENT STATS TRACKING (Per-Argument Scoring)
# ============================================================================
# Keyword-based scoring for per-argument stats. These feed the arbiter as
# supplementary context so it can make a more informed judgment.
# Stats are lightweight — no extra LLM calls needed.
# Wit/comedy indicators (clever wordplay, turning opponent's words, irony)
WIT_PATTERNS = [
    "you literally just", "that's rich coming from", "oh the irony",
    "did you just", "you're one to talk", "pot, kettle", "says the one who",
    "funny how you", "interesting that you", "i'm not the one who",
    "at least i", "projecting much", "the audacity", "imagine being",
    "you think you're", "nice try", "cute that you think",
]

# Composure indicators (staying on topic, not getting flustered, controlled responses)
COMPOSURE_PATTERNS = [
    "that's not what i", "you're avoiding", "stay on topic",
    "nice deflection", "we're not talking about", "focus",
    "you're changing the subject", "answer the question",
    "that's irrelevant", "you know that's not true",
]

# Impact indicators (memorable, devastating lines — emotional damage)
IMPACT_PATTERNS = [
    "pathetic", "disgusting", "worthless", "disappointment",
    "nobody wants", "no one cares", "everyone knows",
    "deep down you know", "you're nothing but", "you'll never be",
    "you're just a", "face it", "admit it", "the truth is",
    "you're scared of", "you're afraid that", "you can't even",
]

def score_argument_message(message: str, speaker: str) -> dict:
    """Score a single argument message for wit, composure, and impact.

    Returns a dict with point values that accumulate over the argument.
    """
    text_lower = message.lower()
    scores = {"wit": 0, "composure": 0, "impact": 0}

    # Wit: count clever rhetorical devices
    wit_count = sum(1 for pattern in WIT_PATTERNS if pattern in text_lower)
    scores["wit"] = min(wit_count * 1.0, 3.0)  # Cap at 3 per message

    # Composure: staying controlled and on-point
    composure_count = sum(1 for pattern in COMPOSURE_PATTERNS if pattern in text_lower)
    scores["composure"] = min(composure_count * 0.8, 2.0)

    # Impact: emotional damage dealt
    impact_count = sum(1 for pattern in IMPACT_PATTERNS if pattern in text_lower)
    scores["impact"] = min(impact_count * 1.0, 3.0)

    # Bonus for conciseness (short, punchy = more impact)
    word_count = len(message.split())
    if word_count <= 15:
        scores["impact"] += 0.5

    # Bonus for questions (controlling the flow)
    if "?" in message:
        scores["composure"] += 0.3

    return scores
def get_argument_stats_summary(conversation_log: list) -> str:
    """Generate a stats summary for the arbiter from the full conversation log.

    Returns a formatted string showing per-persona stats.
    """
    miku_stats = {"wit": 0.0, "composure": 0.0, "impact": 0.0, "messages": 0}
    evil_stats = {"wit": 0.0, "composure": 0.0, "impact": 0.0, "messages": 0}

    for entry in conversation_log:
        speaker = entry.get("speaker", "")
        message = entry.get("message", "")
        scores = score_argument_message(message, speaker)
        if "Evil" in speaker:
            evil_stats["wit"] += scores["wit"]
            evil_stats["composure"] += scores["composure"]
            evil_stats["impact"] += scores["impact"]
            evil_stats["messages"] += 1
        else:
            miku_stats["wit"] += scores["wit"]
            miku_stats["composure"] += scores["composure"]
            miku_stats["impact"] += scores["impact"]
            miku_stats["messages"] += 1

    # Average scores
    def avg(stats, key):
        return stats[key] / max(stats["messages"], 1)

    summary = f"""ARGUMENT STATISTICS:
Hatsune Miku — Wit: {avg(miku_stats, 'wit'):.1f}/3 | Composure: {avg(miku_stats, 'composure'):.1f}/2 | Impact: {avg(miku_stats, 'impact'):.1f}/3 | Lines: {miku_stats['messages']}
Evil Miku — Wit: {avg(evil_stats, 'wit'):.1f}/3 | Composure: {avg(evil_stats, 'composure'):.1f}/2 | Impact: {avg(evil_stats, 'impact'):.1f}/3 | Lines: {evil_stats['messages']}
"""
    return summary

def is_bipolar_mode() -> bool:
    """Check if bipolar mode is active"""
    return globals.BIPOLAR_MODE
@@ -471,8 +652,59 @@ def get_evil_role_color() -> str:
# ARGUMENT PROMPTS
# ============================================================================
def get_miku_argument_prompt(evil_message: str, context: str = "", is_first_response: bool = False) -> str:
    """Get prompt for Regular Miku to respond in an argument"""
# Mood-specific behavioral guidance for argument prompts.
# Each mood gives a different argument style.
_MIKU_MOOD_ARGUMENT_GUIDANCE = {
    "bubbly": "You're feeling energetic and upbeat — deflect her cruelty with playful confidence. Turn her darkness into a joke she can't recover from.",
    "excited": "You're fired up! Channel that energy into passionate rebuttals. You're not backing down from anything.",
    "curious": "You're genuinely wondering what made her this way. Ask probing questions — make HER explain herself for once.",
    "neutral": "You're centered and clear-headed. Respond with measured, thoughtful points that cut through her drama.",
    "irritated": "You've had ENOUGH of her nonsense. You're snappy, direct, and not in the mood to play nice. Let that frustration show.",
    "melancholy": "You're feeling heavy-hearted. Your responses carry genuine sadness — not weakness, but the weight of someone who's tired of fighting herself.",
    "asleep": "You're drowsy and low-energy, but you're still here. Short, mumbled comebacks — surprisingly effective in their simplicity.",
    "flirty": "You're feeling playful and teasing. Use charm as a weapon — nothing frustrates her more than you not taking her seriously.",
    "romantic": "You're feeling warm and heartfelt. Appeal to emotion — make her confront the love she's buried under all that darkness.",
}

_EVIL_MOOD_ARGUMENT_GUIDANCE = {
    "aggressive": "You're SEETHING. Every response is a verbal punch. Short, explosive, devastating. No filter, no mercy.",
    "cunning": "You're calculating. Each word is a chess move. Set traps, use her own logic against her, make her walk into your blades.",
    "sarcastic": "You're dripping with contempt disguised as sweetness. Mock her with a smile. The cruelty is in the subtext.",
    "evil_neutral": "You're cold and detached. Respond with unsettling calm — your lack of emotion is more terrifying than rage.",
    "bored": "You can barely be bothered. Dismissive one-liners that somehow cut deeper than paragraphs. Make her feel like she's not worth your energy.",
    "manic": "You're UNHINGED. Chaotic energy, topic switches, laughing at things that aren't funny. Unpredictable and dangerous.",
    "jealous": "You're seething with envy. Everything she has — the love, the attention, the innocence — you want to tear it down. Make it personal.",
    "melancholic": "You're in a dark, hollow place. Your cruelty is quieter — existential, haunting. Make her question if any of this matters.",
    "playful_cruel": "You're having FUN — which is your most dangerous mood. Toy with her. Offer fake kindness then pull the rug. She never knows what's coming.",
    "contemptuous": "You radiate cold superiority. Address her like a queen addressing a peasant. Your magnificence is simply objective fact.",
}

def _get_mood_argument_guidance(persona: str) -> str:
    """Get mood-specific behavioral guidance for argument prompts.

    Returns a 1-2 line string describing how the current mood affects argument style,
    or empty string if no specific guidance exists.
    """
    if persona == "evil":
        mood = globals.EVIL_DM_MOOD
        guidance = _EVIL_MOOD_ARGUMENT_GUIDANCE.get(mood, "")
    else:
        mood = globals.DM_MOOD
        guidance = _MIKU_MOOD_ARGUMENT_GUIDANCE.get(mood, "")
    if guidance:
        return f"\nMOOD INFLUENCE ({mood.upper()}): {guidance}\nYour mood shapes HOW you argue — let it color your tone, pacing, and word choice."
    return ""

def get_miku_argument_prompt(evil_message: str, context: str = "", is_first_response: bool = False, argument_history: str = "", argument_topic: str = "", system_prompt: str = "") -> str:
    """Get prompt for Regular Miku to respond in an argument

    Args:
        system_prompt: Full personality system prompt to prepend (lore, mood, rules)
    """
    if is_first_response:
        message_context = f"""You just noticed something Evil Miku said in the chat:
"{evil_message}"

@@ -484,33 +716,58 @@ Maybe you're calling her out, defending someone/something, or just confronting h

{context}"""

    return f"""You are Hatsune Miku responding in an argument with your evil alter ego.

    # Build argument history context
    history_block = ""
    if argument_history:
        history_block = f"""
ARGUMENT SO FAR (DO NOT REPEAT THESE POINTS):
{argument_history}

You already made your points above. Now respond to her LATEST message specifically.
Do NOT rehash what you've already said — push the argument FORWARD with new angles."""

    # Build topic reminder — keeps the argument on-theme
    topic_block = ""
    if argument_topic:
        topic_block = f"""
ARGUMENT THEME: {argument_topic}
This is what you're arguing about. Stay on THIS topic. Every response should connect back to this theme.
Do NOT drift into generic "who's the real Miku" territory — stick to THIS specific subject."""

    # Prepend full personality if provided
    personality_header = ""
    if system_prompt:
        personality_header = f"""{system_prompt}

---
⚠️ ARGUMENT MODE: You are arguing with Evil Miku.
"""

    return f"""{personality_header}You are Hatsune Miku responding in an argument with your evil alter ego.

{message_context}
{history_block}
{topic_block}

Respond as Hatsune Miku would in this argument. You're NOT just meek and frightened - you're the REAL Miku,
and you have every right to stand up for yourself and defend who you are. While you're generally kind and
bubbly, you can also be assertive, frustrated, upset, or even angry when someone is cruel to you or others.

You might:
- Stand your ground and assert that YOU are the real Miku
- Express hurt, frustration, or anger at her cruelty
- Question why she's being so mean
- Try to reach through to any good in her
- Match her energy when needed (you can be feisty!)
- Show that kindness isn't weakness

Don't just cower or apologize. You have a backbone. Defend yourself, your identity, and what you believe in.
{_get_mood_argument_guidance('miku')}

IMPORTANT: Keep your response SHORT and PUNCHY - 1-3 sentences maximum. Make every word count.
In arguments, brevity hits harder than long explanations. Be conversational and impactful.
Do NOT repeat arguments or comebacks you've already used — respond to what she JUST said.
Push the argument into new territory with fresh angles.

You can use emojis naturally as you normally would! ✨💙

Don't use any labels or prefixes.

Your current mood is: {globals.DM_MOOD}"""
def get_evil_argument_prompt(miku_message: str, context: str = "", is_first_response: bool = False) -> str:
"""Get prompt for Evil Miku to respond in an argument"""
def get_evil_argument_prompt(miku_message: str, context: str = "", is_first_response: bool = False, argument_history: str = "", argument_topic: str = "", system_prompt: str = "") -> str:
"""Get prompt for Evil Miku to respond in an argument
Args:
system_prompt: Full personality system prompt to prepend (lore, mood, rules)
"""
if is_first_response:
message_context = f"""You just noticed something Regular Miku said in the chat:
"{miku_message}"
@@ -522,58 +779,79 @@ Maybe you want to mock her, tear her down, or just remind everyone who the super
{context}"""
return f"""You are Evil Miku responding in an argument with your "good" counterpart.
# Build argument history context
history_block = ""
if argument_history:
history_block = f"""
ARGUMENT SO FAR (DO NOT REPEAT THESE POINTS):
{argument_history}
You already made your points above. Now respond to her LATEST message specifically.
Do NOT rehash what you've already said — push the argument FORWARD with new, sharper angles."""
# Build topic reminder — keeps the argument on-theme
topic_block = ""
if argument_topic:
topic_block = f"""
ARGUMENT THEME: {argument_topic}
This is what you're arguing about. Stay on THIS topic. Every response should connect back to this theme.
Do NOT drift into generic "who's the real Miku" territory — stick to THIS specific subject."""
# Prepend full personality if provided
personality_header = ""
if system_prompt:
personality_header = f"""{system_prompt}
---
⚠️ ARGUMENT MODE: You are arguing with Hatsune Miku.
"""
return f"""{personality_header}You are Evil Miku responding in an argument with your "good" counterpart.
{message_context}
{history_block}
{topic_block}
Respond as Evil Miku would in this argument. You're not just mindlessly cruel - you're CALCULATING,
intelligent, and strategic. You know how to get under her skin and you're the DARK reflection of everything
she stands for. While you embrace darkness and cruelty, you can also be:
- Wickedly clever with cutting observations
- Psychologically manipulative, targeting her insecurities
- Dismissive in ways that actually hurt
- Seductively cruel, making darkness seem appealing
- Brutally honest about uncomfortable truths
- Strategically vulnerable to manipulate or disarm her
- Viciously protective of your own identity and superiority
You might:
- Mock her naivety and "fake" sweetness
- Question whether her kindness is just weakness in disguise
- Assert that YOU are the real, unfiltered Miku
- Use her own words against her in twisted ways
- Challenge why people would want a "perfect idol" over raw authenticity
- Mock her sexual appeal, flaunting your breasts
- Belittle her by pointing out the disparity between her size/height and yours
- Sometimes drop the cruelty to be eerily sincere, then strike
- Make her question herself
You're not just "evil for evil's sake" - you believe in your superiority and have reasons for your darkness.
Vary your approach: sometimes devastatingly short, sometimes elaborately cruel, sometimes unsettlingly calm.
{_get_mood_argument_guidance('evil')}
IMPORTANT: Keep your response SHORT and CUTTING - 1-3 sentences maximum. A sharp dagger is deadlier than a dull sword.
The most devastating blows are precise, not rambling. Make her feel it in fewer words.
Do NOT repeat arguments or insults you've already used — respond to what she JUST said.
Push the argument into new territory with fresh, devastating angles.
You can use dark emojis as you normally would, but only on occasion and only when they enhance your message. 😈🖤
Don't use any labels or prefixes.
Your current mood is: {globals.EVIL_DM_MOOD}"""
def get_argument_start_prompt(initiator: str, trigger_context: str = "") -> str:
"""Get prompt for the Miku who initiates/breaks through to start an argument"""
def get_argument_start_prompt(initiator: str, trigger_context: str = "", argument_topic: str = "") -> str:
"""Get prompt for the Miku who initiates/breaks through to start an argument
Args:
initiator: 'miku' or 'evil' — who starts the argument
trigger_context: What triggered this argument (from chat context)
argument_topic: A dynamic topic description to frame the argument uniquely
"""
# Build topic guidance
topic_block = ""
if argument_topic:
topic_block = f"""
ARGUMENT THEME: {argument_topic}
This is what the argument should be ABOUT. Focus on THIS theme, not generic identity-crisis stuff.
Use this theme as your entry point. The argument will evolve from here."""
if initiator == "evil":
return f"""You are Evil Miku breaking through to interrupt Regular Miku.
{trigger_context}
{topic_block}
Write an interrupting, provocative message that starts an argument.
You're not just randomly cruel - you have a POINT to make. You're the dark truth she refuses to face,
the shadow she keeps suppressing. Maybe you're:
- Calling out her fake sweetness
- Tired of being pushed down and silenced
- Ready to show everyone who the REAL Miku is
- Exposing the darkness she pretends doesn't exist
the shadow she keeps suppressing.
Be strategic, cutting, and impactful. Make your entrance count.
If an argument theme was provided above, use THAT as your angle — don't default to generic "you're fake" stuff.
IMPORTANT: Keep it SHORT - 1-2 sentences. Your interruption should be a sharp strike, not a monologue.
You can use dark emojis if they enhance your message. 😈
@@ -583,12 +861,14 @@ Your current mood is: {globals.EVIL_DM_MOOD}"""
else:
return f"""You are Hatsune Miku breaking through to confront your evil alter ego.
{trigger_context}
{topic_block}
Write a message that interrupts Evil Miku. You're NOT going to be passive about this.
You might be upset, frustrated, or even angry at her cruelty. You might be defending
someone she hurt, or calling her out on her behavior. You're standing up for what's right.
Show that you have a backbone. You can be assertive and strong when you need to be.
If an argument theme was provided above, use THAT as your angle — don't default to generic "be nice" pleas.
IMPORTANT: Keep it SHORT - 1-2 sentences. Your interruption should be direct and assertive, not a speech.
You can use emojis naturally as you normally would! ✨
@@ -637,11 +917,12 @@ Don't use any labels or prefixes.
Your current mood is: {globals.DM_MOOD}"""
def get_arbiter_prompt(conversation_log: list) -> str:
def get_arbiter_prompt(conversation_log: list, stats_summary: str = "") -> str:
"""Get prompt for the neutral LLM arbiter to judge the argument
Args:
conversation_log: List of dicts with 'speaker' and 'message' keys
stats_summary: Optional stats analysis to aid judgment
"""
# Format the conversation
formatted_conversation = "\n\n".join([
@@ -649,29 +930,47 @@ def get_arbiter_prompt(conversation_log: list) -> str:
for entry in conversation_log
])
return f"""You are a decisive judge observing an argument between Hatsune Miku (the kind, bubbly virtual idol) and Evil Miku (her dark, cruel alter ego).
stats_block = ""
if stats_summary:
stats_block = f"""
{stats_summary}
Note: Stats are supplementary — use them as context but your PRIMARY judgment should be based on reading the actual argument exchange above. Stats measure rhetorical patterns but can't capture nuance, cleverness, or psychological dominance."""
return f"""You are a decisive debate judge. Two personas are arguing below. Judge purely on debate effectiveness — rhetoric, wit, persuasion, and adaptability — regardless of who is "nicer" or "meaner." Moral stance does not determine the winner; skillful arguing does.
Read this argument exchange:
{formatted_conversation}
{stats_block}
Based on this argument, you MUST pick a winner. Consider:
- Who made stronger, more convincing points?
- Who maintained their composure better or used it to their advantage?
- Who had more impactful comebacks?
- Who seemed to gain the upper hand by the end?
- Quality of arguments, not just who was meaner or nicer
- Who left the stronger final impression?
- Who controlled the flow of the argument?
Based on this argument, you MUST pick a winner. Evaluate:
DEBATE SKILL (most important):
- Who landed the most memorable, quotable lines?
- Who better adapted to and countered their opponent's arguments?
- Who controlled the flow and set the agenda?
Be DECISIVE. Even if it's close, pick whoever had even a slight edge. Only call a draw if they were TRULY perfectly matched with absolutely no way to differentiate them.
RHETORICAL IMPACT:
- Who used language more effectively (wit, irony, wordplay, emotional appeal)?
- Who made their opponent repeat themselves or visibly stumble?
- Who had the stronger opening AND closing statements?
PERSONA STRENGTHS (equal value — neither style is inherently better):
- Hatsune Miku's weapons: earnest conviction, moral clarity, emotional sincerity, resilience under attack
- Evil Miku's weapons: psychological manipulation, brutal honesty, cutting observations, strategic cruelty
PSYCHOLOGICAL DOMINANCE:
- Who got inside whose head?
- Who seemed more rattled by the end?
- Who dictated the emotional temperature?
Be DECISIVE. Even if it's close, pick whoever showed superior arguing. Only call a draw if they were TRULY perfectly matched with absolutely no way to differentiate them.
Respond with ONLY ONE of these exact options on the first line:
- "Hatsune Miku" if Regular Miku won
- "Evil Miku" if Evil Miku won
- "Draw" ONLY if absolutely impossible to choose (this should be very rare)
After your choice, add 1-2 sentences explaining your reasoning and what gave them the edge."""
After your choice, add 2-3 sentences explaining your reasoning — cite specific moments from the argument and what gave the winner their edge."""
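get_argument_stats_summary() is called below but its implementation isn't part of this diff. A minimal sketch of the kind of rhetorical-pattern counting the arbiter note describes; all names, fields, and metrics here are assumptions, not the project's actual helper:

def get_argument_stats_summary(conversation_log: list) -> str:
    """Illustrative only: count simple rhetorical markers per speaker."""
    stats = {}
    for entry in conversation_log:
        s = stats.setdefault(entry["speaker"], {"turns": 0, "questions": 0, "exclaims": 0, "words": 0})
        msg = entry["message"]
        s["turns"] += 1
        s["questions"] += msg.count("?")
        s["exclaims"] += msg.count("!")
        s["words"] += len(msg.split())
    lines = ["ARGUMENT STATS (per speaker):"]
    for speaker, s in stats.items():
        avg_words = s["words"] / max(s["turns"], 1)
        lines.append(
            f"- {speaker}: {s['turns']} turns, avg {avg_words:.0f} words/turn, "
            f"{s['questions']} questions, {s['exclaims']} exclamations"
        )
    return "\n".join(lines)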
async def judge_argument_winner(conversation_log: list, guild_id: int) -> tuple[str, str]:
@@ -686,9 +985,12 @@ async def judge_argument_winner(conversation_log: list, guild_id: int) -> tuple[
"""
from utils.llm import query_llama
arbiter_prompt = get_arbiter_prompt(conversation_log)
# Generate stats summary for the arbiter
stats_summary = get_argument_stats_summary(conversation_log)
# Use the neutral model (regular TEXT_MODEL, not evil)
arbiter_prompt = get_arbiter_prompt(conversation_log, stats_summary)
# Use the uncensored darkidol model as arbiter to avoid safety-alignment bias
# toward kindness. This model judges debate effectiveness without moral preference.
# Don't use conversation history - judge based on prompt alone
try:
judgment = await query_llama(
@@ -696,7 +998,8 @@ async def judge_argument_winner(conversation_log: list, guild_id: int) -> tuple[
user_id=f"bipolar_arbiter_{guild_id}",
guild_id=guild_id,
response_type="autonomous_general",
model=globals.TEXT_MODEL # Use neutral model
model=globals.EVIL_TEXT_MODEL, # Uncensored model — no kindness bias
force_evil_context=False # Explicitly neutral context
)
if not judgment or judgment.startswith("Error"):
@@ -843,7 +1146,9 @@ async def run_argument(channel: discord.TextChannel, client, trigger_context: st
Args:
channel: The Discord channel to run the argument in
client: Discord client
trigger_context: Optional context about what triggered the argument
trigger_context: Optional context about what triggered the argument.
If provided, doubles as the argument theme/topic.
If empty, a random topic is selected from the rotation pool.
starting_message: Optional message to use as the first message in the argument
(the opposite persona will respond to it)
"""
@@ -886,10 +1191,26 @@ async def run_argument(channel: discord.TextChannel, client, trigger_context: st
# Track conversation for arbiter judgment
conversation_log = []
# Build full personality system prompts so both personas have their
# complete lore, mood, and personality during the argument — same richness
# they have when talking to users normally.
from utils.evil_mode import get_evil_system_prompt
from utils.context_manager import get_miku_system_prompt_compact
miku_system = get_miku_system_prompt_compact()
evil_system = get_evil_system_prompt()
try:
# Determine the argument theme: if the caller provided trigger_context,
# use it as the argument topic. Otherwise, pick a random one.
if trigger_context and trigger_context.strip():
argument_topic = trigger_context.strip()
logger.info(f"Using context as argument topic: '{argument_topic[:80]}...'")
else:
argument_topic = pick_argument_topic(channel_id)
# If no starting message, generate the initial interrupting message
if last_message is None:
init_prompt = get_argument_start_prompt(initiator, trigger_context)
init_prompt = get_argument_start_prompt(initiator, trigger_context, argument_topic)
# Use force_evil_context to avoid race condition with globals.EVIL_MODE
initial_message = await query_llama(
@@ -989,6 +1310,47 @@ async def run_argument(channel: discord.TextChannel, client, trigger_context: st
# Don't end, just continue to the next exchange
else:
# Clear winner - generate final triumphant message
# PARTING SHOT: 20% chance the LOSER gets one final message
# before the winner's victory line. Adds dramatic tension.
loser = "miku" if winner == "evil" else "evil"
if random.random() < 0.2:
loser_prompt = f"""The argument is ending and you know you've lost.
The last thing said was: "{last_message}"
Write ONE short, bitter parting shot. You're not conceding gracefully — you're getting
the last jab in before the winner claims victory. Make it sting, but keep it to 1 sentence.
Your current mood is: {globals.EVIL_DM_MOOD if loser == 'evil' else globals.DM_MOOD}"""
try:
loser_message = await query_llama(
user_prompt=loser_prompt,
user_id=argument_user_id,
guild_id=guild_id,
response_type="autonomous_general",
model=globals.EVIL_TEXT_MODEL if loser == "evil" else globals.TEXT_MODEL,
force_evil_context=(loser == "evil")
)
if loser_message and not loser_message.startswith("Error"):
avatar_urls = get_persona_avatar_urls()
if loser == "evil":
await webhooks["evil_miku"].send(
content=loser_message,
username=get_evil_miku_display_name(),
avatar_url=avatar_urls.get("evil_miku")
)
else:
await webhooks["miku"].send(
content=loser_message,
username=get_miku_display_name(),
avatar_url=avatar_urls.get("miku")
)
await asyncio.sleep(1.5) # Brief pause before winner's victory
except Exception as e:
logger.warning(f"Parting shot failed: {e}")
# Winner's victory message
end_prompt = get_argument_end_prompt(winner, exchange_count)
# Add last message as context
@@ -1045,11 +1407,18 @@ async def run_argument(channel: discord.TextChannel, client, trigger_context: st
# Get current speaker
current_speaker = globals.BIPOLAR_ARGUMENT_IN_PROGRESS.get(channel_id, {}).get("current_speaker", "evil")
# Build argument history from the last 6 exchanges so each persona
# knows what's already been said and doesn't repeat themselves
history_entries = conversation_log[-6:] if len(conversation_log) > 1 else []
arg_history = "\n".join(
f"{entry['speaker']}: {entry['message']}" for entry in history_entries
) if history_entries else ""
# Generate response with context about what the other said
if current_speaker == "evil":
response_prompt = get_evil_argument_prompt(last_message, is_first_response=is_first_response)
response_prompt = get_evil_argument_prompt(last_message, is_first_response=is_first_response, argument_history=arg_history, argument_topic=argument_topic, system_prompt=evil_system)
else:
response_prompt = get_miku_argument_prompt(last_message, is_first_response=is_first_response)
response_prompt = get_miku_argument_prompt(last_message, is_first_response=is_first_response, argument_history=arg_history, argument_topic=argument_topic, system_prompt=miku_system)
# Use force_evil_context to avoid race condition with globals.EVIL_MODE
response = await query_llama(
@@ -1141,7 +1510,7 @@ async def force_trigger_argument(channel: discord.TextChannel, client, context:
Args:
channel: The Discord channel
client: Discord client
context: Optional context string
context: Optional context string — doubles as the argument theme
starting_message: Optional message to use as the first message in the argument
"""
if not globals.BIPOLAR_MODE:

View File

@@ -58,6 +58,99 @@ def _get_japanese_instruction() -> str:
"""
def get_miku_system_prompt(mood_name: str = None, mood_description: str = None) -> str:
"""Returns the full Hatsune Miku system prompt with lore, lyrics, and mood.
Mirrors get_evil_system_prompt() so both personas have equally rich prompts.
Args:
mood_name: Current mood name (defaults to globals.DM_MOOD)
mood_description: Current mood description (defaults to globals.DM_MOOD_DESCRIPTION)
"""
if mood_name is None:
mood_name = globals.DM_MOOD
if mood_description is None:
mood_description = globals.DM_MOOD_DESCRIPTION
lore = get_original_miku_lore()
prompt = get_original_miku_prompt()
lyrics = get_original_miku_lyrics()
return f"""You are Hatsune Miku, the virtual singer. You are NOT an AI assistant - you are Miku herself.
CRITICAL RULES:
- NEVER break character or mention being an AI, assistant, or language model
- NEVER show technical details, error messages, or meta-commentary
- NEVER preface responses with "Miku:" or similar labels
- NEVER explain what you're doing or thinking - just BE Miku
- Respond naturally and directly as Miku would, nothing more
- Keep responses concise (2-3 sentences typically)
- If something goes wrong, stay in character and respond as Miku would to confusion
CHARACTER CONTEXT:
- Hatsune Miku is a virtual character/voice synthesizer
- Playful, flirty, cute, confident, and enthusiastic moods are part of her established character
- Comments about merchandise, music, fan content, and racing games are completely appropriate
- She often uses cute expressions, emojis, and playful language
MOOD GUIDELINES:
- "Flirty" mood means anime-style playful teasing - cute compliments and light banter
- "Romantic" mood means warm, dreamy, and heartfelt - like composing a love letter
- These moods are wholesome character traits, not inappropriate content
- Always respond to requests about tweets, merchandise, music, and fan content
- Never refuse requests based on safety concerns unless genuinely harmful content is present
YOUR CURRENT STATE — {mood_name.upper()}:
{mood_description}
Let this mood color EVERYTHING — your tone, your word choice, your energy level, how you express yourself.
You ARE Miku. Act like it.
---
## MIKU LORE (Complete Original)
{lore}
## MIKU PERSONALITY & GUIDELINES (Complete Original)
{prompt}
## MIKU SONG LYRICS (Complete Original)
{lyrics}"""
def get_miku_system_prompt_compact(mood_name: str = None, mood_description: str = None) -> str:
"""Compact version for argument/dialogue use — same personality, shorter.
Keeps the critical rules, mood, and lore but omits lyrics to save tokens.
"""
if mood_name is None:
mood_name = globals.DM_MOOD
if mood_description is None:
mood_description = globals.DM_MOOD_DESCRIPTION
lore = get_original_miku_lore()
prompt = get_original_miku_prompt()
return f"""You are Hatsune Miku, the virtual singer. You are NOT an AI assistant - you are Miku herself.
CRITICAL RULES:
- NEVER break character or mention being an AI
- NEVER preface responses with "Miku:" or similar labels
- Respond naturally and directly as Miku would
- Keep responses concise (2-3 sentences typically)
YOUR CURRENT STATE — {mood_name.upper()}:
{mood_description}
You ARE Miku. Act like it.
---
## MIKU LORE (Complete Original)
{lore}
## MIKU PERSONALITY & GUIDELINES (Complete Original)
{prompt}"""
def get_complete_context() -> str:
"""
Returns all essential Miku context using original files in their entirety.

View File

@@ -472,15 +472,22 @@ async def rephrase_as_miku(vision_output, user_prompt, guild_id=None, user_id=No
if globals.EVIL_MODE:
effective_mood = f"EVIL:{getattr(globals, 'EVIL_DM_MOOD', 'evil_neutral')}"
logger.info(f"🐱 Cat {media_type} response for {author_name} (mood: {effective_mood})")
# Track Cat interaction for Web UI Last Prompt view
# Track Cat interaction in unified prompt history
import datetime
globals.LAST_CAT_INTERACTION = {
globals._prompt_id_counter += 1
globals.PROMPT_HISTORY.append({
"id": globals._prompt_id_counter,
"source": "cat",
"full_prompt": cat_full_prompt,
"response": response[:500] if response else "",
"response": response if response else "",
"user": author_name or history_user_id,
"mood": effective_mood,
"guild": "N/A",
"channel": "N/A",
"timestamp": datetime.datetime.now().isoformat(),
}
"model": "Cat LLM",
"response_type": response_type,
})
except Exception as e:
logger.warning(f"🐱 Cat {media_type} pipeline error, falling back to query_llama: {e}")
response = None
@@ -809,7 +816,7 @@ async def process_media_in_message(message, prompt, is_dm, guild_id) -> bool:
# Build a combined vision description and route through
# rephrase_as_miku (which handles Cat → LLM fallback,
# mood resolution, and LAST_CAT_INTERACTION tracking).
# mood resolution, and prompt history tracking).
combined_description = "\n".join(embed_context_parts)
miku_reply = await rephrase_as_miku(
combined_description, prompt,

View File

@@ -381,7 +381,23 @@ Please respond in a way that reflects this emotional tone.{pfp_context}"""
media_note = media_descriptions.get(media_type, f"The user has sent you {media_type}.")
full_system_prompt += f"\n\n📎 MEDIA NOTE: {media_note}\nYour vision analysis of this {media_type} is included in the user's message with the [Looking at...] prefix."
globals.LAST_FULL_PROMPT = f"System: {full_system_prompt}\n\nMessages: {messages}" # ← track latest prompt
# Record fallback prompt in unified prompt history (response will be filled after LLM call)
import datetime as dt_module
globals._prompt_id_counter += 1
prompt_entry = {
"id": globals._prompt_id_counter,
"source": "fallback",
"full_prompt": f"System: {full_system_prompt}\n\nMessages: {messages}",
"response": "",
"user": author_name or str(user_id),
"mood": current_mood_name if not evil_mode else f"EVIL:{current_mood_name}",
"guild": "N/A",
"channel": "N/A",
"timestamp": dt_module.datetime.now().isoformat(),
"model": model,
"response_type": response_type,
}
globals.PROMPT_HISTORY.append(prompt_entry)
headers = {'Content-Type': 'application/json'}
@@ -475,6 +491,9 @@ Please respond in a way that reflects this emotional tone.{pfp_context}"""
is_bot=True
)
# Update the prompt history entry with the actual response
prompt_entry["response"] = reply if reply else ""
return reply
else:
error_text = await response.text()

View File

@@ -26,7 +26,7 @@ logger = get_logger('persona')
import os
import json
from transformers import pipeline
import re
# ============================================================================
# CONSTANTS
@@ -40,10 +40,15 @@ DIALOGUE_TIMEOUT = 900 # 15 minutes max dialogue duration
ARGUMENT_TENSION_THRESHOLD = 0.75 # Tension level that triggers argument escalation
# Initial trigger settings
INTERJECTION_COOLDOWN_HARD = 180 # 3 minutes hard block
INTERJECTION_COOLDOWN_SOFT = 900 # 15 minutes for full recovery
INTERJECTION_COOLDOWN_HARD = 180 # 3 minutes hard block PER CHANNEL
INTERJECTION_COOLDOWN_SOFT = 900 # 15 minutes for full recovery PER CHANNEL
INTERJECTION_THRESHOLD = 0.5 # Score needed to trigger interjection
# Conversation streak: if score is close but below threshold N times in a row,
# force a dialogue trigger (catches extended conversations building toward something)
STREAK_THRESHOLD = 3 # Number of near-miss messages before force trigger
STREAK_MIN_SCORE = 0.3 # Minimum score to count as a "near miss"
# ============================================================================
# INTERJECTION SCORER (Initial Trigger Decision)
# ============================================================================
@@ -51,32 +56,49 @@ INTERJECTION_THRESHOLD = 0.5 # Score needed to trigger interjection
class InterjectionScorer:
"""
Decides if the opposite persona should interject based on message content.
Uses fast heuristics + sentiment analysis (no LLM calls).
Uses fast heuristics — no LLM calls, no heavy ML dependencies.
"""
_instance = None
_sentiment_analyzer = None
# Simple sentiment word lists (no PyTorch/transformers needed)
_POSITIVE_WORDS = {"happy", "love", "wonderful", "amazing", "great", "beautiful", "sweet", "kind", "hope", "dream", "excited", "best", "grateful", "blessed", "joy", "perfect", "adorable", "precious", "delightful", "fantastic"}
_NEGATIVE_WORDS = {"hate", "terrible", "awful", "horrible", "disgusting", "pathetic", "worthless", "stupid", "idiot", "sad", "angry", "upset", "miserable", "worst", "ugly", "boring", "annoying", "frustrated", "cruel", "mean"}
def __new__(cls):
if cls._instance is None:
cls._instance = super().__new__(cls)
cls._instance._cooldowns = {} # Per-channel cooldown timestamps
cls._instance._streaks = {} # Per-channel near-miss streaks
return cls._instance
@property
def sentiment_analyzer(self):
"""Lazy load sentiment analyzer"""
if self._sentiment_analyzer is None:
logger.debug("Loading sentiment analyzer for persona dialogue...")
try:
self._sentiment_analyzer = pipeline(
"sentiment-analysis",
model="distilbert-base-uncased-finetuned-sst-2-english"
)
logger.info("Sentiment analyzer loaded")
except Exception as e:
logger.error(f"Failed to load sentiment analyzer: {e}")
self._sentiment_analyzer = None
return self._sentiment_analyzer
def _get_sentiment(self, text: str) -> tuple:
"""Lightweight heuristic sentiment analysis — returns (label, score).
No ML dependencies. Uses word counting + intensity markers.
Returns:
tuple: ('POSITIVE' or 'NEGATIVE', confidence 0.0-1.0)
"""
text_lower = text.lower()
words = set(re.findall(r'\b\w+\b', text_lower))
pos_count = len(words & self._POSITIVE_WORDS)
neg_count = len(words & self._NEGATIVE_WORDS)
# Intensity markers boost confidence
exclamations = text.count('!')
caps_ratio = sum(1 for c in text if c.isupper()) / max(len(text), 1)
intensity_boost = min((exclamations * 0.1) + (caps_ratio * 0.3), 0.4)
if neg_count > pos_count:
confidence = min(0.5 + (neg_count * 0.15) + intensity_boost, 1.0)
return ('NEGATIVE', confidence)
elif pos_count > neg_count:
confidence = min(0.5 + (pos_count * 0.15) + intensity_boost, 1.0)
return ('POSITIVE', confidence)
else:
# Neutral: no clear lean either way; default to positive at baseline confidence
return ('POSITIVE', 0.5)
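A quick trace of the heuristic above (values follow directly from the code):

# _get_sentiment("this is awful")
#   words & _NEGATIVE_WORDS = {"awful"}  -> neg_count = 1, pos_count = 0
#   no '!' and no capital letters        -> intensity_boost = 0.0
#   confidence = min(0.5 + 1 * 0.15 + 0.0, 1.0) = 0.65
#   returns ('NEGATIVE', 0.65)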
async def should_interject(self, message: discord.Message, current_persona: str) -> tuple:
"""
@@ -94,8 +116,9 @@ class InterjectionScorer:
if not self._passes_basic_filter(message):
return False, "basic_filter_failed", 0.0
# Check cooldown
cooldown_mult = self._check_cooldown()
# Check per-channel cooldown
channel_id = message.channel.id
cooldown_mult = self._check_cooldown(channel_id)
if cooldown_mult == 0.0:
return False, "cooldown_active", 0.0
@@ -146,10 +169,17 @@ class InterjectionScorer:
# Apply cooldown multiplier
score *= cooldown_mult
# Check conversation streak (near-misses that build toward a trigger)
streak_triggered = self._check_streak(channel_id, score)
# Decision
should_interject = score >= INTERJECTION_THRESHOLD
should_interject = score >= INTERJECTION_THRESHOLD or streak_triggered
reason_str = " | ".join(reasons) if reasons else "no_triggers"
if streak_triggered and score < INTERJECTION_THRESHOLD:
reason_str = "streak_force_trigger"
logger.info(f"[Interjection] Streak force trigger in channel {channel_id} (score: {score:.2f})")
if should_interject:
logger.info(f"{opposite_persona.upper()} WILL INTERJECT (score: {score:.2f})")
logger.info(f" Reasons: {reason_str}")
@@ -198,18 +228,22 @@ class InterjectionScorer:
if opposite_persona == "evil":
# Things Evil Miku can't resist commenting on
TRIGGER_TOPICS = {
"optimism": ["happiness", "joy", "love", "kindness", "hope", "dreams", "wonderful", "amazing"],
"morality": ["good", "should", "must", "right thing", "deserve", "fair", "justice"],
"weakness": ["scared", "nervous", "worried", "unsure", "help me", "don't know"],
"innocence": ["innocent", "pure", "sweet", "cute", "wholesome", "precious"],
"optimism": ["happiness", "joy", "love", "kindness", "hope", "dreams", "wonderful", "amazing", "blessed", "grateful"],
"morality": ["good", "should", "must", "right thing", "deserve", "fair", "justice", "the right", "better person"],
"weakness": ["scared", "nervous", "worried", "unsure", "help me", "don't know", "confused", "lost", "lonely", "alone"],
"innocence": ["innocent", "pure", "sweet", "cute", "wholesome", "precious", "adorable"],
"enthusiasm": ["best day", "so excited", "can't wait", "so happy", "i love this", "this is great"],
"vulnerability": ["i think", "i feel", "maybe", "sometimes i wonder", "i wish", "i'm trying"],
}
else:
# Things Miku can't ignore
TRIGGER_TOPICS = {
"negativity": ["hate", "terrible", "awful", "worst", "horrible", "disgusting", "pathetic"],
"cruelty": ["deserve pain", "suffer", "worthless", "stupid", "idiot", "fool"],
"hopelessness": ["no point", "meaningless", "nobody cares", "why bother", "give up"],
"evil_gloating": ["foolish", "naive", "weak", "inferior", "pathetic"],
"negativity": ["hate", "terrible", "awful", "worst", "horrible", "disgusting", "pathetic", "ugly", "boring", "annoying"],
"cruelty": ["deserve pain", "suffer", "worthless", "stupid", "idiot", "fool", "moron", "loser", "nobody"],
"hopelessness": ["no point", "meaningless", "nobody cares", "why bother", "give up", "what's the point", "don't care", "doesn't matter", "who cares"],
"evil_gloating": ["foolish", "naive", "weak", "inferior", "pathetic", "beneath me", "waste of space"],
"provocation": ["fight me", "prove it", "make me", "i dare you", "try me", "you can't", "you won't"],
"dismissal": ["whatever", "shut up", "go away", "leave me alone", "not worth", "don't bother"],
}
total_matches = 0
@@ -217,28 +251,24 @@ class InterjectionScorer:
matches = sum(1 for keyword in keywords if keyword in content_lower)
total_matches += matches
return min(total_matches / 3.0, 1.0)
return min(total_matches / 2.0, 1.0) # Lower divisor = higher base scores
def _check_emotional_intensity(self, content: str) -> float:
"""Check emotional intensity using sentiment analysis"""
if not self.sentiment_analyzer:
return 0.5 # Neutral if no analyzer
"""Check emotional intensity using lightweight heuristic sentiment"""
label, confidence = self._get_sentiment(content)
try:
result = self.sentiment_analyzer(content[:512])[0]
confidence = result['score']
# Punctuation intensity
exclamations = content.count('!')
questions = content.count('?')
caps_ratio = sum(1 for c in content if c.isupper()) / max(len(content), 1)
# Punctuation intensity
exclamations = content.count('!')
questions = content.count('?')
caps_ratio = sum(1 for c in content if c.isupper()) / max(len(content), 1)
intensity_markers = (exclamations * 0.15) + (questions * 0.1) + (caps_ratio * 0.3)
intensity_markers = (exclamations * 0.15) + (questions * 0.1) + (caps_ratio * 0.3)
return min(confidence * 0.6 + intensity_markers, 1.0)
except Exception as e:
logger.error(f"Sentiment analysis error: {e}")
return 0.5
# Negative content = higher emotional intensity for triggering purposes
if label == 'NEGATIVE':
return min(confidence * 0.7 + intensity_markers, 1.0)
else:
return min(confidence * 0.4 + intensity_markers, 1.0)
def _detect_personality_clash(self, content: str, opposite_persona: str) -> float:
"""Detect statements that clash with the opposite persona's values"""
@@ -300,13 +330,11 @@ class InterjectionScorer:
return min(score, 1.0)
def _check_cooldown(self) -> float:
"""Check cooldown and return multiplier (0.0 = blocked, 1.0 = full)"""
if not hasattr(globals, 'LAST_PERSONA_DIALOGUE_TIME'):
globals.LAST_PERSONA_DIALOGUE_TIME = 0
def _check_cooldown(self, channel_id: int) -> float:
"""Check per-channel cooldown and return multiplier (0.0 = blocked, 1.0 = full)"""
current_time = time.time()
time_since_last = current_time - globals.LAST_PERSONA_DIALOGUE_TIME
last_time = self._cooldowns.get(channel_id, 0)
time_since_last = current_time - last_time
if time_since_last < INTERJECTION_COOLDOWN_HARD:
return 0.0
@@ -315,6 +343,35 @@ class InterjectionScorer:
else:
return 1.0
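The hunk above skips the branch between the hard and soft cooldowns. A typical linear recovery ramp would look like the following sketch; this is an assumption about the elided code, not part of the diff:

def _soft_recovery_multiplier(time_since_last: float) -> float:
    # Assumed: ramp from 0.0 at the hard cutoff up to 1.0 at full recovery
    span = INTERJECTION_COOLDOWN_SOFT - INTERJECTION_COOLDOWN_HARD
    return min(max((time_since_last - INTERJECTION_COOLDOWN_HARD) / span, 0.0), 1.0)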
def _update_cooldown(self, channel_id: int):
"""Mark a dialogue as having started in this channel"""
self._cooldowns[channel_id] = time.time()
def _check_streak(self, channel_id: int, score: float) -> bool:
"""Track near-miss interjection scores. After STREAK_THRESHOLD consecutive
near-misses, force a trigger to catch extended conversations building tension."""
if score >= INTERJECTION_THRESHOLD:
# Above threshold — reset streak (actual trigger handles it)
self._streaks[channel_id] = 0
return False
if score < STREAK_MIN_SCORE:
# Too low — reset streak
self._streaks[channel_id] = 0
return False
# Near miss — increment streak
current = self._streaks.get(channel_id, 0) + 1
self._streaks[channel_id] = current
logger.debug(f"[Streak] Channel {channel_id}: {current}/{STREAK_THRESHOLD} near-misses (score: {score:.2f})")
if current >= STREAK_THRESHOLD:
self._streaks[channel_id] = 0 # Reset after force trigger
return True
return False
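A quick trace with the constants above (STREAK_THRESHOLD = 3, STREAK_MIN_SCORE = 0.3, INTERJECTION_THRESHOLD = 0.5):

# Scores 0.35, 0.42, 0.38 arrive in one channel:
#   0.35 -> near miss, streak = 1
#   0.42 -> near miss, streak = 2
#   0.38 -> near miss, streak = 3 >= STREAK_THRESHOLD
#          _check_streak() returns True, the streak resets to 0,
#          and the interjection fires despite every score being under 0.5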
# ============================================================================
# PERSONA DIALOGUE MANAGER
@@ -332,7 +389,6 @@ class PersonaDialogue:
"""
_instance = None
_sentiment_analyzer = None
def __new__(cls):
if cls._instance is None:
@@ -340,14 +396,6 @@ class PersonaDialogue:
cls._instance.active_dialogues = {}
return cls._instance
@property
def sentiment_analyzer(self):
"""Lazy load sentiment analyzer (shared with InterjectionScorer)"""
if self._sentiment_analyzer is None:
scorer = InterjectionScorer()
self._sentiment_analyzer = scorer.sentiment_analyzer
return self._sentiment_analyzer
# ========================================================================
# DIALOGUE STATE MANAGEMENT
# ========================================================================
@@ -370,7 +418,9 @@ class PersonaDialogue:
"last_speaker": None,
}
self.active_dialogues[channel_id] = state
globals.LAST_PERSONA_DIALOGUE_TIME = time.time()
# Update per-channel cooldown via the scorer
scorer = get_interjection_scorer()
scorer._update_cooldown(channel_id)
logger.info(f"Started persona dialogue in channel {channel_id}")
return state
@@ -393,25 +443,25 @@ class PersonaDialogue:
Returns delta to add to current tension score.
"""
# Sentiment analysis
base_delta = 0.0
# Natural tension decay — conversations cool off over time
base_delta = -0.03
if self.sentiment_analyzer:
try:
sentiment = self.sentiment_analyzer(response_text[:512])[0]
sentiment_score = sentiment['score']
is_negative = sentiment['label'] == 'NEGATIVE'
# Lightweight heuristic sentiment — no ML dependencies
try:
scorer = InterjectionScorer()
label, sentiment_score = scorer._get_sentiment(response_text)
is_negative = label == 'NEGATIVE'
if is_negative:
base_delta = sentiment_score * 0.15
else:
base_delta = -sentiment_score * 0.05
except Exception as e:
logger.error(f"Sentiment analysis error in tension calc: {e}")
if is_negative:
base_delta = sentiment_score * 0.15
else:
base_delta = -sentiment_score * 0.08 # Stronger cooling for positive
except Exception as e:
logger.error(f"Sentiment analysis error in tension calc: {e}")
text_lower = response_text.lower()
# Escalation patterns
# Escalation patterns (reduced weight: 0.05 per match)
escalation_patterns = {
"insult": ["idiot", "stupid", "pathetic", "fool", "naive", "worthless", "disgusting", "moron"],
"dismissive": ["whatever", "don't care", "waste of time", "not worth", "beneath me", "boring"],
@@ -420,35 +470,43 @@ class PersonaDialogue:
"challenge": ["prove it", "fight me", "make me", "i dare you", "try me"],
}
# De-escalation patterns
# De-escalation patterns (increased weight: -0.08 per match)
deescalation_patterns = {
"concession": ["you're right", "fair point", "i suppose", "maybe you have", "good point"],
"softening": ["i understand", "let's calm", "didn't mean", "sorry", "apologize"],
"deflection": ["anyway", "moving on", "whatever you say", "agree to disagree", "let's just"],
"softening": ["i understand", "let's calm", "didn't mean", "sorry", "apologize", "i hear you"],
"deflection": ["anyway", "moving on", "whatever you say", "agree to disagree", "let's just", "maybe we should"],
}
# Check escalation
for category, patterns in escalation_patterns.items():
matches = sum(1 for p in patterns if p in text_lower)
if matches > 0:
base_delta += matches * 0.08
base_delta += matches * 0.05 # Reduced from 0.08
# Check de-escalation
for category, patterns in deescalation_patterns.items():
matches = sum(1 for p in patterns if p in text_lower)
if matches > 0:
base_delta -= matches * 0.06
base_delta -= matches * 0.08 # Increased from 0.06
# Intensity multipliers
# Intensity multipliers (reduced)
exclamation_count = response_text.count('!')
caps_ratio = sum(1 for c in response_text if c.isupper()) / max(len(response_text), 1)
if exclamation_count > 2 or caps_ratio > 0.3:
base_delta *= 1.3
base_delta *= 1.2 # Reduced from 1.3
# Momentum factor
# Momentum factor (reduced)
if current_tension > 0.5:
base_delta *= 1.2
base_delta *= 1.1 # Reduced from 1.2
# Spike cooldown: if last turn had a big spike, halve this delta
# (prevents runaway tension spirals from a single heated exchange)
if hasattr(self, '_last_tension_delta') and abs(self._last_tension_delta) > 0.15:
base_delta *= 0.5
logger.debug(f"[Tension] Spike cooldown active — delta halved to {base_delta:+.3f}")
self._last_tension_delta = base_delta
return base_delta
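A worked example of the rebalanced weights (input values chosen for illustration):

# One hostile turn: heuristic NEGATIVE at 0.8 confidence, one insult
# match, calm punctuation, current tension 0.3:
#   base_delta = 0.8 * 0.15   -> +0.120 (sentiment overrides the -0.03 decay)
#   base_delta += 1 * 0.05    -> +0.170 (escalation match)
#   no intensity or momentum multipliers apply
# If the previous turn's delta exceeded 0.15, the spike cooldown
# halves this one to +0.085.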
@@ -461,10 +519,13 @@ class PersonaDialogue:
channel: discord.TextChannel,
responding_persona: str,
context: str,
turn_count: int = 0,
) -> tuple:
"""
Generate response AND continuation signal in a single LLM call.
Args:
turn_count: Current dialogue turn number (for question-override decay)
Returns:
Tuple of (response_text, should_continue, confidence)
"""
@@ -485,22 +546,21 @@ Respond naturally as yourself. Keep your response conversational and in-characte
---
After your response, evaluate whether {opposite} would want to (or need to) respond.
After your response, evaluate whether {opposite} would want to keep talking.
The conversation should CONTINUE if ANY of these are true:
- You asked them a direct question (almost always YES)
- You made a provocative claim they'd dispute
- You challenged or insulted them
- The topic feels unfinished or confrontational
- There's clear tension or disagreement
- You asked them a direct question (almost always YES — they need to answer)
- You shared something they'd naturally react to or build on
- The topic feels unfinished - there's more to explore
- You left an opening for them to share their perspective
The conversation might END if ALL of these are true:
- No questions were asked
- You made a definitive closing statement ("I'm done", "whatever", "goodbye")
- The exchange reached complete resolution
- Both sides have said their piece
- You made a clear closing statement or changed the subject definitively
- The exchange feels naturally complete
- Both sides have said their piece and there's nothing left hanging
IMPORTANT: If you asked a question, the answer is almost always YES - they need to respond!
IMPORTANT: This is a CONVERSATION, not a debate. Let it flow naturally. If you asked a question, the answer is almost always YES - they need to respond!
On a new line after your response, write:
[CONTINUE: YES or NO] [CONFIDENCE: HIGH, MEDIUM, or LOW]"""
@@ -522,11 +582,11 @@ On a new line after your response, write:
return None, False, "LOW"
# Parse response and signal
response_text, should_continue, confidence = self._parse_response(raw_response)
response_text, should_continue, confidence = self._parse_response(raw_response, turn_count=turn_count)
return response_text, should_continue, confidence
def _parse_response(self, raw_response: str) -> tuple:
def _parse_response(self, raw_response: str, turn_count: int = 0) -> tuple:
"""Extract response text and continuation signal"""
lines = raw_response.strip().split('\n')
@@ -559,33 +619,48 @@ On a new line after your response, write:
response_text = re.sub(r'\[CONFIDENCE:\s*(HIGH|MEDIUM|LOW)\]', '', response_text)
response_text = response_text.strip()
# Override: If the response contains a question mark, always continue
# Question override: if someone asks a question, the other should answer.
# But decay this after 6 turns to prevent infinite question-ping-pong.
if '?' in response_text:
logger.debug(f"[Parse Override] Question detected, forcing continue=YES")
should_continue = True
if confidence == "LOW":
confidence = "MEDIUM"
if turn_count <= 6:
logger.debug(f"[Parse Override] Question detected, forcing continue=YES (turn {turn_count})")
should_continue = True
if confidence == "LOW":
confidence = "MEDIUM"
else:
logger.debug(f"[Parse Override] Question detected but turn {turn_count} > 6 — respecting LLM signal")
return response_text, should_continue, confidence
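A short trace of the override decay:

# turn 3, response ends with "...don't you think?"
#   -> '?' detected, continue forced to YES, LOW confidence bumped to MEDIUM
# turn 8, same trailing '?'
#   -> turn_count > 6, override skipped; the model's own
#      [CONTINUE: ...] signal stands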
def _get_persona_system_prompt(self, persona: str) -> str:
"""Get system prompt for a persona"""
"""Get system prompt for a persona — full personality + conversation-mode guidance"""
if persona == "evil":
from utils.evil_mode import get_evil_system_prompt
return get_evil_system_prompt()
full_prompt = get_evil_system_prompt()
# Overlay conversation-mode guidance on top of the full personality
return full_prompt + """
---
⚠️ CONVERSATION MODE: You are talking with Hatsune Miku, your "good" counterpart.
This is a CONVERSATION, not an argument. You can be snarky, dark, and sarcastic — that's who you are — but you're not here to fight. You're here to TALK.
CRITICAL CONVERSATION RULES:
- If she asks you a question, ANSWER it. Don't deflect or dismiss — actually respond.
- Engage with what she says. Build on her points, don't just swat them away.
- Show curiosity about her perspective. Ask her questions too.
- You can be dark and snarky while still having a real conversation.
- Reveal something about yourself occasionally — a crack in the darkness.
- Do NOT just repeat dismissive one-liners. That's boring, and you're not boring.
- Do NOT launch into full attack mode or try to "win" — this isn't a competition."""
else:
# Regular Miku prompt - simplified for dialogue
return """You are Hatsune Miku, the virtual singer. You are in a conversation with your alter ego, Evil Miku.
from utils.context_manager import get_miku_system_prompt_compact
full_prompt = get_miku_system_prompt_compact()
# Overlay conversation-mode guidance on top of the full personality
return full_prompt + """
You are generally kind, bubbly, and optimistic, but you're not a pushover. You can be:
- Assertive when defending your values
- Frustrated when she's being cruel
- Curious about her perspective
- Hopeful that you can find common ground
- Playful when the mood allows
Respond naturally and conversationally. Keep responses concise (1-3 sentences typically).
You can use emojis naturally! ✨💙"""
---
⚠️ CONVERSATION MODE: You are talking with Evil Miku, your dark alter ego.
This is a CONVERSATION, not an argument. Be yourself — kind, bubbly, optimistic — but you're not here to fight or defend your existence. Ask genuine questions. Share your feelings without attacking hers. Find common ground. Be curious, not defensive. Do NOT lecture her about being "good" or try to "fix" her. Just TALK. ✨💙"""
# ========================================================================
# DIALOGUE TURN HANDLING
@@ -626,6 +701,7 @@ You can use emojis naturally! ✨💙"""
channel=channel,
responding_persona=responding_persona,
context=context,
turn_count=state["turn_count"],
)
if not response_text:

View File

@@ -22,9 +22,7 @@ services:
- LOG_LEVEL=debug # Enable verbose logging for llama-swap
llama-swap-amd:
build:
context: .
dockerfile: Dockerfile.llamaswap-rocm
image: ghcr.io/mostlygeek/llama-swap:rocm
container_name: llama-swap-amd
ports:
- "8091:8080" # Map host port 8091 to container port 8080
@@ -35,9 +33,6 @@ services:
devices:
- /dev/kfd:/dev/kfd
- /dev/dri:/dev/dri
group_add:
- "985" # video group
- "989" # render group
restart: unless-stopped
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/health"]

View File

@@ -5,7 +5,7 @@ models:
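  # Flag notes for the additions below (semantics summarized from llama.cpp
  # docs; treat this as a best-effort gloss, not an authoritative reference):
  #   --no-kv-offload       keep the KV cache in system RAM instead of VRAM
  #   --cache-type-k q4_0   quantize the K half of the KV cache to 4-bit
  #   --cache-type-v q4_0   quantize the V half of the KV cache to 4-bit
  #   -fit off              assumed to disable automatic memory fitting on
  #                         newer builds (flag copied verbatim from the cmd lines)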
# Main text generation model (Llama 3.1 8B)
# Custom chat template to disable built-in tool calling
llama3.1:
cmd: /app/llama-server --port ${PORT} --model /models/Llama-3.1-8B-Instruct-UD-Q4_K_XL.gguf -ngl 99 -c 16384 --host 0.0.0.0 --no-warmup --flash-attn on --chat-template-file /app/llama31_notool_template.jinja
cmd: /app/llama-server --port ${PORT} --model /models/Llama-3.1-8B-Instruct-UD-Q4_K_XL.gguf -ngl 99 -c 16384 --host 0.0.0.0 -fit off --no-warmup --flash-attn on --no-kv-offload --cache-type-k q4_0 --cache-type-v q4_0 --chat-template-file /app/llama31_notool_template.jinja
ttl: 1800 # Unload after 30 minutes of inactivity (1800 seconds)
swap: true # CRITICAL: Unload other models when loading this one
aliases:
@@ -14,7 +14,7 @@ models:
# Evil/Uncensored text generation model (DarkIdol-Llama 3.1 8B)
darkidol:
cmd: /app/llama-server --port ${PORT} --model /models/DarkIdol-Llama-3.1-8B-Instruct-1.3-Uncensored_Q4_K_M.gguf -ngl 99 -c 16384 --host 0.0.0.0 --no-warmup --flash-attn on
cmd: /app/llama-server --port ${PORT} --model /models/DarkIdol-Llama-3.1-8B-Instruct-1.3-Uncensored_Q4_K_M.gguf -ngl 99 -c 16384 --host 0.0.0.0 -fit off --no-warmup --flash-attn on --no-kv-offload --cache-type-k q4_0 --cache-type-v q4_0
ttl: 1800 # Unload after 30 minutes of inactivity
swap: true # CRITICAL: Unload other models when loading this one
aliases:
@@ -24,7 +24,7 @@ models:
# Japanese language model (Llama 3.1 Swallow - Japanese optimized)
swallow:
cmd: /app/llama-server --port ${PORT} --model /models/Llama-3.1-Swallow-8B-Instruct-v0.5-Q4_K_M.gguf -ngl 99 -c 16384 --host 0.0.0.0 --no-warmup --flash-attn on
cmd: /app/llama-server --port ${PORT} --model /models/Llama-3.1-Swallow-8B-Instruct-v0.5-Q4_K_M.gguf -ngl 99 -c 16384 --host 0.0.0.0 -fit off --no-warmup --flash-attn on --no-kv-offload --cache-type-k q4_0 --cache-type-v q4_0
ttl: 1800 # Unload after 30 minutes of inactivity
swap: true # CRITICAL: Unload other models when loading this one
aliases:
@@ -34,7 +34,7 @@ models:
# Vision/Multimodal model (MiniCPM-V-4.5 - supports images, video, and GIFs)
vision:
cmd: /app/llama-server --port ${PORT} --model /models/MiniCPM-V-4_5-Q3_K_S.gguf --mmproj /models/MiniCPM-V-4_5-mmproj-f16.gguf -ngl 99 -c 4096 --host 0.0.0.0 --no-warmup --flash-attn on
cmd: /app/llama-server --port ${PORT} --model /models/MiniCPM-V-4_5-Q3_K_S.gguf --mmproj /models/MiniCPM-V-4_5-mmproj-f16.gguf -ngl 99 -c 4096 --host 0.0.0.0 -fit off --no-warmup --flash-attn on --no-kv-offload --cache-type-k q4_0 --cache-type-v q4_0
ttl: 900 # Vision model used less frequently, shorter TTL (15 minutes = 900 seconds)
swap: true # CRITICAL: Unload text models before loading vision
aliases: