ask: switch to llama3.2:latest, increase timeout to 120s
gemma3:latest produces garbage output on the Vulkan backend (Intel Arc A380); llama3.2:latest runs correctly at 100% GPU. The timeout is bumped to 120s so a cold model load (~22s) doesn't trip it.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@@ -873,7 +873,7 @@ async def cmd_ask(client: AsyncClient, room_id: str, sender: str, args: str):
     await send_text(client, room_id, "Thinking...")
 
     try:
-        timeout = aiohttp.ClientTimeout(total=90)
+        timeout = aiohttp.ClientTimeout(total=120)
         async with aiohttp.ClientSession(timeout=timeout) as session:
             async with session.post(
                 f"{OLLAMA_URL}/api/chat",
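For context, a minimal sketch of what the surrounding call presumably looks like after this change. The payload shape follows Ollama's /api/chat API; the helper name ask_ollama, the message construction, and the response handling are assumptions for illustration, not code from this repo:

import aiohttp

OLLAMA_URL = "http://localhost:11434"  # assumed default Ollama endpoint
MODEL = "llama3.2:latest"              # model named in the commit message

async def ask_ollama(prompt: str) -> str:
    # 120s total timeout covers a cold model load (~22s) plus generation.
    timeout = aiohttp.ClientTimeout(total=120)
    async with aiohttp.ClientSession(timeout=timeout) as session:
        async with session.post(
            f"{OLLAMA_URL}/api/chat",
            json={
                "model": MODEL,
                "messages": [{"role": "user", "content": prompt}],
                "stream": False,  # one JSON response instead of chunked lines
            },
        ) as resp:
            resp.raise_for_status()
            data = await resp.json()
            return data["message"]["content"]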