ask: switch to llama3.2:latest, increase timeout to 120s
gemma3:latest produces garbage output on the Vulkan backend (Intel Arc A380). llama3.2:latest runs correctly at 100% GPU. Timeout bumped to 120s to handle cold model loads (~22s) without timing out. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
This commit is contained in:
+1
-1
@@ -20,7 +20,7 @@ LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO")
|
||||
OLLAMA_URL = os.getenv("OLLAMA_URL", "http://10.10.10.157:11434")
|
||||
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "llama3.2:latest")
|
||||
BALL_MODEL = os.getenv("BALL_MODEL", "sadiq-bd/llama3.2-1b-uncensored:latest")
|
||||
ASK_MODEL = os.getenv("ASK_MODEL", "gemma3:latest")
|
||||
ASK_MODEL = os.getenv("ASK_MODEL", "llama3.2:latest")
|
||||
MINECRAFT_RCON_HOST = os.getenv("MINECRAFT_RCON_HOST", "10.10.10.67")
|
||||
MINECRAFT_RCON_PORT = int(os.getenv("MINECRAFT_RCON_PORT", "25575"))
|
||||
MINECRAFT_RCON_PASSWORD = os.getenv("MINECRAFT_RCON_PASSWORD", "")
|
||||
|
||||
Reference in New Issue
Block a user