ask: switch to llama3.2:latest, increase timeout to 120s
Lint / Shell (shellcheck) (push) Successful in 12s
Lint / JS (eslint) (push) Successful in 8s
Lint / Python (ruff) (push) Successful in 5s
Lint / Python deps (pip-audit) (push) Successful in 1m10s
Lint / Secret scan (gitleaks) (push) Successful in 5s

gemma3:latest produces garbage output on the Vulkan backend (Intel Arc A380).
llama3.2:latest runs correctly at 100% GPU. The request timeout is bumped to
120s so cold model loads (~22s) no longer trip it.
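The request the changed timeout protects can be sketched as follows. This is a hedged stdlib sketch (the bot itself uses aiohttp); the `build_ask_request` helper is hypothetical, while `OLLAMA_URL`, `ASK_MODEL`, and the `/api/chat` endpoint come from the diff below.

```python
import json
import urllib.request

OLLAMA_URL = "http://10.10.10.157:11434"  # default from the bot's config
ASK_MODEL = "llama3.2:latest"             # new !ask model
ASK_TIMEOUT = 120  # seconds; covers the ~22s cold model load with headroom

def build_ask_request(prompt: str) -> urllib.request.Request:
    """Build a POST to Ollama's /api/chat endpoint.

    Hypothetical helper for illustration; the real code posts the same
    payload shape via an aiohttp ClientSession.
    """
    payload = json.dumps({
        "model": ASK_MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }).encode()
    return urllib.request.Request(
        f"{OLLAMA_URL}/api/chat",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# To actually send it (requires a reachable Ollama instance):
# urllib.request.urlopen(build_ask_request("hello"), timeout=ASK_TIMEOUT)
```

With `stream=False`, Ollama holds the connection open until the full response is generated, so the timeout must cover model load plus generation, not just time-to-first-byte.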

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-20 22:49:08 -04:00
parent 1ba1151673
commit f7ca1b00db
2 changed files with 2 additions and 2 deletions
@@ -873,7 +873,7 @@ async def cmd_ask(client: AsyncClient, room_id: str, sender: str, args: str):
await send_text(client, room_id, "Thinking...")
try:
-            timeout = aiohttp.ClientTimeout(total=90)
+            timeout = aiohttp.ClientTimeout(total=120)
async with aiohttp.ClientSession(timeout=timeout) as session:
async with session.post(
f"{OLLAMA_URL}/api/chat",
@@ -20,7 +20,7 @@ LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO")
OLLAMA_URL = os.getenv("OLLAMA_URL", "http://10.10.10.157:11434")
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "llama3.2:latest")
BALL_MODEL = os.getenv("BALL_MODEL", "sadiq-bd/llama3.2-1b-uncensored:latest")
-ASK_MODEL = os.getenv("ASK_MODEL", "gemma3:latest")
+ASK_MODEL = os.getenv("ASK_MODEL", "llama3.2:latest")
MINECRAFT_RCON_HOST = os.getenv("MINECRAFT_RCON_HOST", "10.10.10.67")
MINECRAFT_RCON_PORT = int(os.getenv("MINECRAFT_RCON_PORT", "25575"))
MINECRAFT_RCON_PASSWORD = os.getenv("MINECRAFT_RCON_PASSWORD", "")
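Since every setting above falls back to an `os.getenv` default, the new model can also be swapped per-deployment without touching the source. A hypothetical override sketch:

```shell
# Hypothetical deployment override: point the bot at a different
# Ollama host and force the !ask model, no code change needed.
export OLLAMA_URL="http://10.10.10.157:11434"
export ASK_MODEL="llama3.2:latest"
```

Any variable left unset simply keeps the default baked into the config, so overrides can be applied one at a time.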