8ball leon: switch to api/chat so system persona actually sticks

api/generate has no system role — the model was ignoring the character
context and giving generic one-word answers. Chat API with a proper
system message forces the Leon voice.
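For reference, a minimal sketch of the two request shapes — the model name and question below are placeholders, not the bot's actual values:

```python
# Hypothetical payloads illustrating why the switch matters.

# /api/generate takes one flat prompt string: persona instructions get
# mixed into the user text, and the model may ignore them.
generate_payload = {
    "model": "some-model",  # placeholder, not the bot's BALL_MODEL
    "prompt": "You are Leon S. Kennedy... Question: should I go alone?",
    "stream": False,
}

# /api/chat takes role-tagged messages, so the persona travels in a
# dedicated system message the model is trained to respect.
chat_payload = {
    "model": "some-model",
    "stream": False,
    "messages": [
        {"role": "system", "content": "You are Leon S. Kennedy..."},
        {"role": "user", "content": "Question: should I go alone?"},
    ],
}
```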

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-22 11:00:59 -04:00
parent 868bca6494
commit 40e921a9da
+20 -13
@@ -249,34 +249,41 @@ async def cmd_8ball(client: AsyncClient, room_id: str, sender: str, args: str):
     if sender == LEON_ID:
         question = sanitize_input(args)
         q_for_prompt = question
-        bio_context = _LEON_LORE + " "
-        prompt = (
-            bio_context +
-            "You are a magic 8-ball responding to Leon S. Kennedy from Resident Evil. "
-            "Answer his question in the tone and style of the Resident Evil universe — dramatic, slightly dark, "
-            "with the dry wit and cool-under-pressure attitude Leon is known for. "
-            "Reference his world (bioweapons, government ops, Ada Wong, survival) if relevant. "
-            "One sentence only. Give only your prediction — no questions back, no disclaimers.\n\n"
-            f"Question: {q_for_prompt}"
+        system_msg = (
+            "You are a magic 8-ball channeling the spirit of Leon S. Kennedy from Resident Evil. "
+            + _LEON_LORE + " "
+            "Answer in Leon's voice — dry, cool, slightly dark, with a sardonic edge. "
+            "Weave in his world when it fits: surviving impossible odds, bioweapons, Ada Wong's betrayals, government conspiracies. "
+            "Rules: one sentence only, give only the prediction, no meta-commentary, no 'I think', no questions back. "
+            "Make it feel like a fortune that Leon would actually say or think."
         )
         fallback_leon = random.choice([
             "The signs point to danger ahead — but you've handled worse.",
             "Outlook unclear. Better stock up on ammo just in case.",
             "It is certain — but so was Raccoon City, and look how that turned out.",
             "Signs point to yes. Ada probably already knew.",
-            "Don't count on it. You should know by now that nothing goes according to plan.",
+            "Don't count on it. Nothing ever goes according to plan.",
             "Definitely. Now stop standing around and move.",
             "You already know the answer — you just don't want to hear it.",
             "Outlook not so great, but you've survived worse odds.",
         ])
+        used_llm = False
         try:
             timeout = aiohttp.ClientTimeout(total=30)
             async with aiohttp.ClientSession(timeout=timeout) as session:
                 async with session.post(
-                    f"{OLLAMA_URL}/api/generate",
-                    json={"model": BALL_MODEL, "prompt": prompt, "stream": False},
+                    f"{OLLAMA_URL}/api/chat",
+                    json={
+                        "model": BALL_MODEL,
+                        "stream": False,
+                        "messages": [
+                            {"role": "system", "content": system_msg},
+                            {"role": "user", "content": f"Question: {q_for_prompt}"},
+                        ],
+                    },
                 ) as response:
                     data = await response.json()
-                    raw = _normalize_caps(data.get("response", "").strip())
+                    raw = _normalize_caps(data.get("message", {}).get("content", "").strip())
                     if _is_valid_8ball_response(raw):
                         answer = raw
+                        used_llm = True
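The response shape changes with the endpoint too. A small sketch with hand-written stand-in bodies (not real server output) showing why the parsing moved from `data.get("response")` to the chained `data.get("message", {}).get("content")`:

```python
# Stand-in JSON bodies, shaped like the two Ollama endpoints' responses.
generate_resp = {"response": "Signs point to yes.", "done": True}
chat_resp = {
    "message": {"role": "assistant", "content": "Signs point to yes."},
    "done": True,
}

# /api/generate puts the text at the top level...
text_a = generate_resp.get("response", "").strip()

# ...while /api/chat nests it under message.content, hence the chained
# .get() with a {} default so a missing key yields "" instead of raising.
text_b = chat_resp.get("message", {}).get("content", "").strip()
```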