8ball leon: switch to api/chat so system persona actually sticks
Lint / Shell (shellcheck) (push) Successful in 22s
Lint / JS (eslint) (push) Successful in 15s
Lint / Python (ruff) (push) Successful in 21s
Lint / Python deps (pip-audit) (push) Successful in 2m2s
Lint / Secret scan (gitleaks) (push) Successful in 12s

api/generate has no system role — the model was ignoring the character
context and giving generic one-word answers. Chat API with a proper
system message forces the Leon voice.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-22 11:00:59 -04:00
parent 868bca6494
commit 40e921a9da
+20 -13
@@ -249,34 +249,41 @@ async def cmd_8ball(client: AsyncClient, room_id: str, sender: str, args: str):
     if sender == LEON_ID:
         question = sanitize_input(args)
         q_for_prompt = question
-        bio_context = _LEON_LORE + " "
-        prompt = (
-            bio_context +
-            "You are a magic 8-ball responding to Leon S. Kennedy from Resident Evil. "
-            "Answer his question in the tone and style of the Resident Evil universe — dramatic, slightly dark, "
-            "with the dry wit and cool-under-pressure attitude Leon is known for. "
-            "Reference his world (bioweapons, government ops, Ada Wong, survival) if relevant. "
-            "One sentence only. Give only your prediction — no questions back, no disclaimers.\n\n"
-            f"Question: {q_for_prompt}"
+        system_msg = (
+            "You are a magic 8-ball channeling the spirit of Leon S. Kennedy from Resident Evil. "
+            + _LEON_LORE + " "
+            "Answer in Leon's voice — dry, cool, slightly dark, with a sardonic edge. "
+            "Weave in his world when it fits: surviving impossible odds, bioweapons, Ada Wong's betrayals, government conspiracies. "
+            "Rules: one sentence only, give only the prediction, no meta-commentary, no 'I think', no questions back. "
+            "Make it feel like a fortune that Leon would actually say or think."
         )
         fallback_leon = random.choice([
             "The signs point to danger ahead — but you've handled worse.",
             "Outlook unclear. Better stock up on ammo just in case.",
             "It is certain — but so was Raccoon City, and look how that turned out.",
             "Signs point to yes. Ada probably already knew.",
-            "Don't count on it. You should know by now that nothing goes according to plan.",
+            "Don't count on it. Nothing ever goes according to plan.",
             "Definitely. Now stop standing around and move.",
+            "You already know the answer — you just don't want to hear it.",
+            "Outlook not so great, but you've survived worse odds.",
         ])
         used_llm = False
         try:
             timeout = aiohttp.ClientTimeout(total=30)
             async with aiohttp.ClientSession(timeout=timeout) as session:
                 async with session.post(
-                    f"{OLLAMA_URL}/api/generate",
-                    json={"model": BALL_MODEL, "prompt": prompt, "stream": False},
+                    f"{OLLAMA_URL}/api/chat",
+                    json={
+                        "model": BALL_MODEL,
+                        "stream": False,
+                        "messages": [
+                            {"role": "system", "content": system_msg},
+                            {"role": "user", "content": f"Question: {q_for_prompt}"},
+                        ],
+                    },
                 ) as response:
                     data = await response.json()
-                    raw = _normalize_caps(data.get("response", "").strip())
+                    raw = _normalize_caps(data.get("message", {}).get("content", "").strip())
                     if _is_valid_8ball_response(raw):
                         answer = raw
                         used_llm = True
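The extraction change in the last hunk line follows from the two endpoints returning differently shaped JSON: /api/generate puts the text in a top-level "response" field, while /api/chat nests it under "message" -> "content". A minimal sketch of that difference (extract_reply is a hypothetical helper for illustration, not part of this repo):

```python
def extract_reply(data: dict) -> str:
    """Pull the model's text out of either Ollama response shape."""
    if "response" in data:
        # /api/generate shape: {"response": "...", ...}
        return data["response"].strip()
    # /api/chat shape: {"message": {"role": "assistant", "content": "..."}, ...}
    return data.get("message", {}).get("content", "").strip()

# Old endpoint's shape
print(extract_reply({"response": "Outlook good.\n"}))  # → Outlook good.
# New endpoint's shape
print(extract_reply({"message": {"role": "assistant", "content": "Signs point to yes."}}))  # → Signs point to yes.
```

Using `.get("message", {}).get("content", "")` (as the commit does) means a malformed or error response degrades to an empty string, which then fails `_is_valid_8ball_response` and falls through to the canned fallback lines.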