- Added _extract_riddle_answer() with dual fallback: JSON parse first,
then regex extraction of quoted riddle/answer values directly from text
- _generate_riddle() now retries up to 2 times on parse/network failure
- Hangman, scramble, WYR, and trivia now catch JSONDecodeError and log
the raw model output instead of letting the exception propagate silently
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
riddle_cache.json: stores last 30 riddle texts + answers
trivia_cache.json: stores last 20 questions per category
Both files are capped at their respective maximums so they never grow
without bound. Loaded on startup, saved after each new question.
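The load/save pattern is roughly this (helper names illustrative; the real code keys trivia entries per category):

```python
import json
from pathlib import Path

def load_cache(path: Path, cap: int) -> list:
    """Load a recent-items cache, tolerating a missing or corrupt file."""
    try:
        items = json.loads(path.read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        return []
    return items[-cap:]  # enforce the cap on load as well

def save_cache(path: Path, items: list, cap: int) -> None:
    """Trim to the newest `cap` entries before persisting."""
    path.write_text(json.dumps(items[-cap:]))
```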
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
It's used for 8ball, roasts, riddles, WYR, and debate — not just the
magic 8-ball anymore. CREATIVE_MODEL better reflects its role as the
uncensored/abliterated model for creative generation tasks.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
wyr:
- Reject options that end on a dangling word (but/and/or/with/never etc.)
so truncated sentences like 'but never' return None and retry
- Add 'via Llama 3.2 3B (abliterated)' credit to the poll message
riddle:
- Add 'via Llama 3.2 3B (abliterated)' credit to the riddle message
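The dangling-word check can be sketched like this (the word list here is illustrative, not the full set used):

```python
import re

# Words that signal a truncated option when they end the sentence.
_DANGLING = {"but", "and", "or", "with", "never", "to", "of", "the"}

def reject_if_truncated(option: str):
    """Return the option, or None if it ends mid-sentence on a dangling word."""
    words = re.findall(r"[a-z']+", option.lower())
    if not words or words[-1] in _DANGLING:
        return None
    return option
```

Returning None lets the caller treat a truncated option exactly like a parse failure and retry.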
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The previous system prompt was basically empty. Now it explicitly:
- Requires the answer to be unambiguously correct
- Bans vague, ambiguous, or invented facts
- Requires plausible-but-wrong distractors
- Includes a concrete example of a good question
- Tells the model to pick a simpler topic if unsure
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
phi4-mini is too conservative and defaults to the same 2-3 answers.
Use BALL_MODEL (abliterated Llama 3.2) like WYR does.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
riddle:
- Cache answers separately so the same answer (e.g. 'shadow') can't
appear twice in a session even if the riddle text differs
- Explicitly ban 'shadow' in the prompt and append avoid-answers clause
- Ban question endings ('what am I?', 'what could it be?') more strictly
wyr:
- Hard-cap options at 10 words server-side so the model can't ignore
the word limit and generate paragraph-length options
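The server-side cap is straightforward (a sketch; the real limit and trimming may differ slightly):

```python
def cap_words(option: str, limit: int = 10) -> str:
    """Hard-cap an option at `limit` words, regardless of what the model emits."""
    return " ".join(option.split()[:limit])
```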
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Add 3 assistant-turn examples to lock in the JSON format and tone
- Construct the 'question' field from option_a/option_b so it's always
well-formed regardless of what the model puts in the 'question' key
- Switch from phi4-mini to the abliterated Llama 3.2 model for edgier,
uncensored dilemmas
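Deriving the question from the options might look like this (phrasing illustrative):

```python
def build_question(option_a: str, option_b: str) -> str:
    """Ignore the model's own 'question' key and construct the poll text from
    the options, so it is well-formed by construction."""
    return (f"Would you rather {option_a.rstrip('.?!')} "
            f"or {option_b.rstrip('.?!')}?")
```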
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
riddle:
- Tighten generation prompt with explicit rules: specific noun answer,
no answer word in the riddle, no 'what could it possibly mean', clues
must logically point to ONE answer, prefer concrete things
- Fix answer matching: strip articles (a/an/the), allow partial match
so 'person' hits 'a person' and 'shadow' hits 'my shadow' etc.
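The matching logic is roughly (a sketch; very short guesses could still false-positive on the substring check):

```python
import re

def answers_match(guess: str, answer: str) -> bool:
    """Compare guess vs. answer with articles stripped and partial overlap allowed."""
    def norm(s: str) -> str:
        s = s.lower().strip()
        # Strip a leading article/possessive: 'a person' -> 'person'.
        return re.sub(r"^(a|an|the|my)\s+", "", s)
    g, a = norm(guess), norm(answer)
    return g == a or g in a or a in g
```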
wyr:
- Prompt now asks for genuinely difficult dilemmas with real downsides
on both sides; explicitly bans boring options like dolphins/karaoke
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Keep a rolling list of the last 30 riddles used and inject them into
the prompt as an avoid clause, same pattern as trivia's per-category cache.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
nio has a dedicated ReactionEvent type with .reacts_to and .key attributes.
The callback was registered for UnknownEvent so reaction events were silently
dropped. Register for ReactionEvent and use its native attributes; keep the
UnknownEvent fallback for edge cases.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Add edit_html() to utils using m.replace so messages can be updated
- Hangman board now edits in place on every guess — shows progressing
ASCII figure as wrong guesses accumulate instead of spamming new messages
- Extract _hangman_board_html() helper for consistent board rendering
- wyr: add INFO-level logging to reaction callback to diagnose vote tracking
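The edit content follows the Matrix m.replace relation; edit_html presumably builds something like this and hands it to room_send (field values illustrative):

```python
def edit_content(event_id: str, new_body: str, new_html: str) -> dict:
    """Build m.room.message content that replaces an earlier message in place."""
    return {
        "msgtype": "m.text",
        "body": "* " + new_body,  # fallback body; '* ' prefix is the Matrix convention
        "format": "org.matrix.custom.html",
        "formatted_body": new_html,
        "m.new_content": {
            "msgtype": "m.text",
            "body": new_body,
            "format": "org.matrix.custom.html",
            "formatted_body": new_html,
        },
        "m.relates_to": {"rel_type": "m.replace", "event_id": event_id},
    }
```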
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
When Jared asks about a @mentioned third party, give a neutral honest
prediction instead of hijacking the answer to be about Jared.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The oracle should address Leon ('you survived Raccoon City...') not
impersonate him ('I'm not buying it').
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
api/generate has no system role — the model was ignoring the character
context and giving generic one-word answers. Chat API with a proper
system message forces the Leon voice.
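The switch amounts to posting to /api/chat with a messages array instead of a bare prompt (system prompt text and helper names illustrative):

```python
import json
from urllib.request import Request, urlopen

def leon_payload(model: str, question: str) -> dict:
    """Chat payload: unlike /api/generate, /api/chat accepts a system role."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "You are a magic 8-ball speaking as Leon Kennedy. "
                        "Address the asker directly and stay in character."},
            {"role": "user", "content": question},
        ],
        "stream": False,
    }

def ask_leon(base_url: str, model: str, question: str, timeout: int = 120) -> str:
    req = Request(f"{base_url}/api/chat",
                  data=json.dumps(leon_payload(model, question)).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req, timeout=timeout) as resp:
        return json.loads(resp.read())["message"]["content"]
```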
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- _hangman_display compared uppercase word chars against lowercase
guessed_letters set, so letters were never revealed after correct guesses
- Word guess wrong path now shows the board and remaining guesses
- Winner display now includes the guesser's name on correct word guess
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Add _WYR_POLLS dict keyed by poll event_id to accumulate votes
- record_wyr_vote() called from callbacks.reaction() on every reaction
- reveal() reads actual vote counts and announces winner with percentage
- Handles tie and zero-vote cases
- Remove the useless 'check the reactions above' message
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
phi4-mini can queue behind other requests and take >20s under load,
causing TimeoutError and silent failures in wyr/riddle/hangman/scramble.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
hangman, scramble, riddle, and wyr all used api/generate which has no
system role. The model would wrap JSON in prose or markdown fences,
causing json.loads() to throw and the command to silently die after
the 'Generating...' message.
Fix for all four: switch to api/chat with a system message enforcing
raw JSON output, strip markdown fences, and use regex to extract the
JSON object even if surrounded by extra text.
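The shared extraction step can be sketched as (the real helper may tolerate more malformed output):

```python
import json
import re

def extract_json(raw: str):
    """Strip markdown fences, then regex out the first {...} span and parse it."""
    text = re.sub(r"```(?:json)?", "", raw).strip()
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if not match:
        return None
    try:
        return json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
```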
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Switch to api/chat with a system prompt for better JSON compliance,
and use regex extraction to find the JSON object even if the model
wraps it in extra text or markdown fences.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Switch from api/generate to api/chat so we can set a system role that
instructs the model to be genuinely savage. Add a few-shot example so
it knows what a roast looks like vs a backhanded compliment.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Reframe prompt as a consented comedy roast between friends so the
model doesn't refuse on safety grounds
- Add lore for lonely (Cole, 23, dishwasher, gamer) and
natcofragomatic (Nathan, DCO Tech 3 at AWS, ginger, tape-drive nerd)
- Use a lookup table (_ROAST_LORE) so adding new users is one line
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- !hangman: AI picks a 5-8 letter word with hint; players !guess letters/words, 6 wrong = dead
- !scramble: AI picks a word, scrambles it; first correct answer in chat wins (45s timeout)
- !wyr: AI generates Would You Rather with 🅰️/🅱️ reaction voting, 30s reveal
- !riddle: AI generates riddle monitored for 60s, substring match in chat wins
- !roast: AI roasts a target using BALL_MODEL with special Jared/Wynter lore
- !story: collaborative story with !story add <line> and !story end (AI conclusion, max 10 lines)
- !debate: AI writes FOR/AGAINST arguments for any topic using ASK_MODEL
- callbacks.py: route all non-command messages through scramble/riddle answer checkers
- help: updated categories to include all new commands
- Replace flat fallback list with per-category fallback dict so
!trivia music never shows a gaming question when AI is down
- Always show "via <model>" tag on AI questions; show warning tag
on static fallbacks so users know AI was unavailable
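The per-category dict is simple (entries here are illustrative stand-ins, not the shipped questions):

```python
import random

# One static pool per category, so an AI outage never serves a
# gaming question under !trivia music.
_FALLBACKS = {
    "music": [{"q": "Which composer wrote the Ninth Symphony premiered in 1824?",
               "a": "Beethoven"}],
    "gaming": [{"q": "What year was the original Doom released?", "a": "1993"}],
}

def fallback_question(category: str):
    pool = _FALLBACKS.get(category)
    return random.choice(pool) if pool else None
```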
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
New categories: anime, sports, food, history, geography, nature,
mythology, tv (14 total).
Add a _trivia_recent dict that tracks the last 20 questions per
category and injects them into the LLM prompt as an avoid list,
preventing duplicate questions within a session.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
When Wynter asks a romantic question about Jared ("is he in love
with me", "does he miss me", etc.) the LLM fallback now explicitly
denies the premise instead of giving a generic Jared-wins response.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Add explicit Jared/Wynter no-romance lore to all four branch
bio_contexts and prompts — prevents model from implying romantic
feelings between them
- Add _implies_jared_wynter_romance() validator; responses that
suggest romantic connection fall back to the static fallback
- Replace random-list responses for non-Jared/Wynter senders with
AI-generated magic 8-ball predictions via BALL_MODEL
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The python-build-standalone tarball ships pip 24.1.2 and setuptools
70.3.0 which have known CVEs. Upgrade them first so --local audit
only sees current, patched versions.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The standalone Python 3.10 binary's venv ensurepip step exits 127.
Workaround: install requirements + pip-audit into the same env,
then audit with --local (no internal venv creation).
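The CI step then looks roughly like this (paths and requirements file name illustrative):

```shell
# Reuse one env instead of letting pip-audit create its own venv
# via the broken ensurepip.
./python/bin/python3.10 -m pip install --upgrade pip setuptools
./python/bin/python3.10 -m pip install -r requirements.txt pip-audit
# --local audits only what is installed in this environment.
./python/bin/python3.10 -m pip_audit --local
```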
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Debian Bullseye only ships Python 3.9 and python3.10 is not in its
repos. python-dotenv 1.2.2 (vuln fix) requires Python >=3.10.
Use indygreg/python-build-standalone to get a self-contained Python
3.10.15 binary that works on any glibc Linux runner.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
lotusllm, lotusllmben, and llama3.3 70B have been removed from
Ollama on LXC 130 to free ~44 GB disk space.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- BALL_MODEL: huihui_ai/llama3.2-abliterate:3b (abliterated 3B,
follows complex persona instructions without censorship)
- ASK_MODEL + OLLAMA_MODEL: phi4-mini:latest (Phi-4 Mini 3.8B,
best instruction-following model available within GPU VRAM)
- Update _MODEL_DISPLAY for new model names
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Fix about_jared/about_wynter using substring match — "they" matched
"he", "theme" matched "he", etc., routing Wynter's questions to the
wrong branch. Now uses \b word boundaries via re.search.
- Switch BALL_MODEL default from sadiq-bd 1B uncensored to
llama3.2:latest (3B) — the 1B model hallucinates, ignores persona
instructions, and mentions Jared randomly. GPU is now working on
Arc A380 at ~25 tok/s so the larger model is practical.
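The word-boundary routing check amounts to (helper name illustrative):

```python
import re

def mentions(word: str, text: str) -> bool:
    """Whole-word match: 'he' must not fire on 'they' or 'theme'."""
    return re.search(rf"\b{re.escape(word)}\b", text, re.IGNORECASE) is not None
```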
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
gemma3:latest produces garbage output on the Vulkan backend (Intel Arc A380).
llama3.2:latest runs correctly at 100% GPU. Timeout bumped to 120s to handle
cold model loads (~22s) without timing out.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
8ball is only AI-powered for specific users (Wynter/Jared); for everyone
else it's a random static response. Games is the correct category.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Model attribution is now only shown when the LLM actually generated the
response. If the model refused or gave an invalid answer and we fell back
to the static response, no 'via ...' line is shown.
Fallback responses for all three Wynter branches are now randomised pools
so the bot doesn't always give the same flat yes/no phrase regardless of
what Wynter actually typed.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
help: grouped into AI / Games / Random / Server categories with Option B
purple header; descriptions auto-pulled from the command registry.
Model attribution: added _MODEL_DISPLAY map so 'via lotusllm' becomes
'via Llama 3.2 1B', 'via gemma3:latest' becomes 'via Gemma 3 4B', etc.
Config: OLLAMA_MODEL switched from lotusllm to llama3.2:latest; added
BALL_MODEL (sadiq-bd/llama3.2-1b-uncensored) as a dedicated config var
for the 8ball so it stays on the uncensored model without affecting fortune.
Descriptions: fortune -> AI-generated fortune cookie; ask -> Ask LotusBot;
health -> Bot health & stats (admin only).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Responding 'Wynter is too busy...' in third person to someone who just
asked 'will I...' feels disconnected. Changed the prompt to speak
directly to Wynter using you/your, with her name used only for emphasis.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
The LLM was responding with 'She's far too busy...' instead of using
'Wynter' by name. Added explicit instruction to both Wynter branches
to always refer to her by name and never use she/her pronouns.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
8ball: color-coded answer text (green=positive, red=negative, amber=neutral)
for both the random and Jared/Wynter AI branches; question shown as small
italic below the answer; AI responses include model attribution.
fortune: teal header, answer in blockquote italics, model attribution shown
only when response came from the LLM (not the static fallback list).
ask: purple header, question in italic, response in blockquote, model
attribution at bottom.
trivia: blue header with category, green reveal answer, model attribution
shown only for LLM-generated questions (not static fallbacks).
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>