diff --git a/README.md b/README.md index c9a1b3d..56dca34 100644 --- a/README.md +++ b/README.md @@ -1,240 +1,375 @@ # PULSE - Pipelined Unified Logic & Server Engine -A distributed workflow orchestration platform for managing and executing complex multi-step operations across server clusters through an intuitive web interface. +A distributed workflow orchestration platform for managing and executing complex multi-step operations across server clusters through a retro terminal-themed web interface. ## Overview -PULSE is a centralized workflow execution system designed to orchestrate operations across distributed infrastructure. It provides a powerful web-based interface for defining, managing, and executing workflows that can span multiple servers, require human interaction, and perform complex automation tasks at scale. +PULSE is a centralized workflow execution system designed to orchestrate operations across distributed infrastructure. It provides a powerful web-based interface with a vintage CRT terminal aesthetic for defining, managing, and executing workflows that can span multiple servers, require human interaction, and perform complex automation tasks at scale. ### Key Features -- **Interactive Workflow Management**: Define and execute multi-step workflows with conditional logic, user prompts, and decision points -- **Distributed Execution**: Run commands and scripts across multiple worker nodes simultaneously -- **High Availability Architecture**: Deploy redundant worker nodes in LXC containers with Ceph storage for fault tolerance -- **Web-Based Control Center**: Intuitive interface for workflow selection, monitoring, and interactive input -- **Flexible Worker Pool**: Scale horizontally by adding worker nodes as needed -- **Real-Time Monitoring**: Track workflow progress, view logs, and receive notifications +- **🎨 Retro Terminal Interface**: Phosphor green CRT-style interface with scanlines, glow effects, and ASCII art +- **⚑ Quick Command Execution**: Instantly execute commands on any worker with built-in templates and command history +- **πŸ“Š Real-Time Worker Monitoring**: Live system metrics including CPU, memory, load average, and active tasks +- **πŸ”„ Interactive Workflow Management**: Define and execute multi-step workflows with conditional logic and user prompts +- **🌐 Distributed Execution**: Run commands across multiple worker nodes simultaneously via WebSocket +- **πŸ“ˆ Execution Tracking**: Comprehensive logging with formatted output, re-run capabilities, and JSON export +- **πŸ” SSO Authentication**: Seamless integration with Authelia for enterprise authentication +- **🧹 Auto-Cleanup**: Automatic removal of old executions with configurable retention policies +- **πŸ”” Terminal Notifications**: Audio beeps and visual toasts for command completion events ## Architecture PULSE consists of two core components: ### PULSE Server +**Location:** `10.10.10.65` (LXC Container ID: 122) +**Directory:** `/opt/pulse-server` + The central orchestration hub that: -- Hosts the web interface for workflow management +- Hosts the retro terminal web interface - Manages workflow definitions and execution state -- Coordinates task distribution to worker nodes -- Handles user interactions and input collection +- Coordinates task distribution to worker nodes via WebSocket +- Handles user interactions through Authelia SSO - Provides real-time status updates and logging +- Stores all data in MariaDB database + +**Technology Stack:** +- Node.js 20.x +- Express.js (web framework) +- WebSocket 
(ws package) for real-time bidirectional communication +- MySQL2 (MariaDB driver) +- Authelia SSO integration ### PULSE Worker +**Example:** `10.10.10.151` (LXC Container ID: 153, hostname: pulse-worker-01) +**Directory:** `/opt/pulse-worker` + Lightweight execution agents that: -- Connect to the PULSE server and await task assignments -- Execute commands, scripts, and code on target infrastructure -- Report execution status and results back to the server -- Support multiple concurrent workflow executions -- Automatically reconnect and resume on failure +- Connect to PULSE server via WebSocket with heartbeat monitoring +- Execute shell commands and report results in real-time +- Provide system metrics (CPU, memory, load, uptime) +- Support concurrent task execution with configurable limits +- Automatically reconnect on connection loss + +**Technology Stack:** +- Node.js 20.x +- WebSocket client +- Child process execution +- System metrics collection + ``` -β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” -β”‚ PULSE Server β”‚ -β”‚ (Web Interface) β”‚ -β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ - β”‚ - β”Œβ”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” - β”‚ β”‚ β”‚ β”‚ -β”Œβ”€β”€β”€β–Όβ”€β”€β”€β”€β” β”Œβ”€β”€β”€β–Όβ”€β”€β”€β”€β” β”Œβ”€β”€β”€β–Όβ”€β”€β”€β”€β” β”Œβ”€β”€β”€β–Όβ”€β”€β”€β”€β” -β”‚ Worker β”‚ β”‚ Worker β”‚ β”‚ Worker β”‚ β”‚ Worker β”‚ -β”‚ Node 1 β”‚ β”‚ Node 2 β”‚ β”‚ Node 3 β”‚ β”‚ Node N β”‚ -β””β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”˜ - LXC Containers in Proxmox with Ceph +β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” +β”‚ PULSE Server (10.10.10.65) β”‚ +β”‚ Terminal Web Interface + API β”‚ +β”‚ β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ +β”‚ β”‚ MariaDB β”‚ β”‚ Authelia β”‚ β”‚ +β”‚ β”‚ Database β”‚ β”‚ SSO β”‚ β”‚ +β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β”‚ +β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ + β”‚ WebSocket + β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” + β”‚ β”‚ β”‚ +β”Œβ”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β–Όβ”€β”€β”€β”€β” β”Œβ”€β”€β–Όβ”€β”€β”€β”€β”€β” +β”‚ Worker 1 β”‚ β”‚Worker 2β”‚ β”‚Worker Nβ”‚ +β”‚10.10.10.151β”‚ β”‚ ... β”‚ β”‚ ... 
β”‚ +β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”€β”˜ + LXC Containers in Proxmox with Ceph ``` -## Deployment +## Installation ### Prerequisites -- **Proxmox VE Cluster**: Hypervisor environment for container deployment -- **Ceph Storage**: Distributed storage backend for high availability -- **LXC Support**: Container runtime for worker node deployment -- **Network Connectivity**: Communication between server and workers +- **Node.js 20.x** or higher +- **MariaDB 10.x** or higher +- **Authelia** configured for SSO (optional but recommended) +- **Network Connectivity** between server and workers -### Installation +### PULSE Server Setup -#### PULSE Server ```bash -# Clone the repository -git clone https://github.com/yourusername/pulse.git -cd pulse +# Clone repository +cd /opt +git clone pulse-server +cd pulse-server # Install dependencies -npm install # or pip install -r requirements.txt +npm install -# Configure server settings -cp config.example.yml config.yml -nano config.yml +# Create .env file with configuration +cat > .env << EOF +# Server Configuration +PORT=8080 +SECRET_KEY=your-secret-key-here -# Start the PULSE server -npm start # or python server.py +# MariaDB Configuration +DB_HOST=10.10.10.50 +DB_PORT=3306 +DB_NAME=pulse +DB_USER=pulse_user +DB_PASSWORD=your-db-password + +# Worker API Key (for worker authentication) +WORKER_API_KEY=your-worker-api-key + +# Auto-cleanup configuration (optional) +EXECUTION_RETENTION_DAYS=30 +EOF + +# Create systemd service +cat > /etc/systemd/system/pulse.service << EOF +[Unit] +Description=PULSE Workflow Orchestration Server +After=network.target + +[Service] +Type=simple +User=root +WorkingDirectory=/opt/pulse-server +ExecStart=/usr/bin/node server.js +Restart=always +RestartSec=10 + +[Install] +WantedBy=multi-user.target +EOF + +# Start service +systemctl daemon-reload +systemctl enable pulse.service +systemctl start pulse.service ``` -#### PULSE Worker +### PULSE Worker Setup + ```bash -# On each worker node (LXC container) +# On each worker node +cd /opt +git clone pulse-worker cd pulse-worker # Install dependencies -npm install # or pip install -r requirements.txt +npm install -# Configure worker connection -cp worker-config.example.yml worker-config.yml -nano worker-config.yml +# Create .env file +cat > .env << EOF +# Worker Configuration +WORKER_NAME=pulse-worker-01 +PULSE_SERVER=http://10.10.10.65:8080 +PULSE_WS=ws://10.10.10.65:8080 +WORKER_API_KEY=your-worker-api-key -# Start the worker daemon -npm start # or python worker.py -``` +# Performance Settings +HEARTBEAT_INTERVAL=30 +MAX_CONCURRENT_TASKS=5 +EOF -### High Availability Setup +# Create systemd service +cat > /etc/systemd/system/pulse-worker.service << EOF +[Unit] +Description=PULSE Worker Node +After=network.target -Deploy multiple worker nodes across Proxmox hosts: -```bash -# Create LXC template -pct create 1000 local:vztmpl/ubuntu-22.04-standard_amd64.tar.zst \ - --rootfs ceph-pool:8 \ - --memory 2048 \ - --cores 2 \ - --net0 name=eth0,bridge=vmbr0,ip=dhcp +[Service] +Type=simple +User=root +WorkingDirectory=/opt/pulse-worker +ExecStart=/usr/bin/node worker.js +Restart=always +RestartSec=10 -# Clone for additional workers -pct clone 1000 1001 --full --storage ceph-pool -pct clone 1000 1002 --full --storage ceph-pool -pct clone 1000 1003 --full --storage ceph-pool +[Install] +WantedBy=multi-user.target +EOF -# Start all workers -for i in {1000..1003}; do pct start $i; done +# Start service +systemctl daemon-reload 
+systemctl enable pulse-worker.service +systemctl start pulse-worker.service ``` ## Usage -### Creating a Workflow +### Quick Command Execution -1. Access the PULSE web interface at `http://your-server:8080` -2. Navigate to **Workflows** β†’ **Create New** -3. Define workflow steps using the visual editor or YAML syntax -4. Specify execution targets (specific nodes, groups, or all workers) -5. Add interactive prompts where user input is required -6. Save and activate the workflow +1. Access PULSE at `http://your-server:8080` +2. Navigate to **⚑ Quick Command** tab +3. Select a worker from the dropdown +4. Use **Templates** for pre-built commands or **History** for recent commands +5. Enter your command and click **Execute** +6. View results in the **Executions** tab -### Example Workflow -```yaml -name: "System Update and Reboot" -description: "Update all servers in the cluster with user confirmation" -steps: - - name: "Check Current Versions" - type: "execute" - targets: ["all"] - command: "apt list --upgradable" - - - name: "User Approval" - type: "prompt" - message: "Review available updates. Proceed with installation?" - options: ["Yes", "No", "Cancel"] - - - name: "Install Updates" - type: "execute" - targets: ["all"] - command: "apt-get update && apt-get upgrade -y" - condition: "prompt_response == 'Yes'" - - - name: "Reboot Confirmation" - type: "prompt" - message: "Updates complete. Reboot all servers?" - options: ["Yes", "No"] - - - name: "Rolling Reboot" - type: "execute" - targets: ["all"] - command: "reboot" - strategy: "rolling" - condition: "prompt_response == 'Yes'" -``` +**Built-in Command Templates:** +- System Info: `uname -a` +- Disk Usage: `df -h` +- Memory Usage: `free -h` +- CPU Info: `lscpu` +- Running Processes: `ps aux --sort=-%mem | head -20` +- Network Interfaces: `ip addr show` +- Docker Containers: `docker ps -a` +- System Logs: `tail -n 50 /var/log/syslog` -### Running a Workflow +### Worker Monitoring -1. Select a workflow from the dashboard -2. Click **Execute** -3. Monitor progress in real-time -4. Respond to interactive prompts as they appear -5. View detailed logs for each execution step +The **Workers** tab displays real-time metrics for each worker: +- System information (OS, architecture, CPU cores) +- Memory usage (used/total with percentage) +- Load averages (1m, 5m, 15m) +- System uptime +- Active tasks vs. maximum concurrent capacity -## Configuration +### Execution Management -### Server Configuration (`config.yml`) -```yaml -server: - host: "0.0.0.0" - port: 8080 - secret_key: "your-secret-key" +- **View Details**: Click any execution to see formatted logs with timestamps, status, and output +- **Re-run Command**: Click "Re-run" button in execution details to repeat a command +- **Download Logs**: Export execution data as JSON for auditing +- **Clear Completed**: Bulk delete finished executions +- **Auto-Cleanup**: Executions older than 30 days are automatically removed -database: - type: "postgresql" - host: "localhost" - port: 5432 - name: "pulse" +### Workflow Creation (Future Feature) -workers: - heartbeat_interval: 30 - timeout: 300 - max_concurrent_tasks: 10 - -security: - enable_authentication: true - require_approval: true -``` - -### Worker Configuration (`worker-config.yml`) -```yaml -worker: - name: "worker-01" - server_url: "http://pulse-server:8080" - api_key: "worker-api-key" - -resources: - max_cpu_percent: 80 - max_memory_mb: 1024 - -executor: - shell: "/bin/bash" - working_directory: "/tmp/pulse" - timeout: 3600 -``` +1. 
Navigate to **Workflows** β†’ **Create New** +2. Define workflow steps using JSON syntax +3. Specify target workers +4. Add interactive prompts where needed +5. Save and execute ## Features in Detail -### Interactive Workflows -- Pause execution to collect user input via web forms -- Display intermediate results for review -- Conditional branching based on user decisions -- Multi-choice prompts with validation +### Terminal Aesthetic +- Phosphor green (#00ff41) on black (#0a0a0a) color scheme +- CRT scanline animation effect +- Text glow and shadow effects +- ASCII box-drawing characters for borders +- Boot sequence animation on first load +- Hover effects with smooth transitions -### Mass Execution -- Run commands across all workers simultaneously -- Target specific node groups or individual servers -- Rolling execution for zero-downtime updates -- Parallel and sequential execution strategies +### Real-Time Communication +- WebSocket-based bidirectional communication +- Instant command result notifications +- Live worker status updates +- Terminal beep sounds for events +- Toast notifications with visual feedback -### Monitoring & Logging -- Real-time workflow execution dashboard -- Detailed per-step logging and output capture -- Historical execution records and analytics -- Alert notifications for failures or completion +### Execution Tracking +- Formatted log display (not raw JSON) +- Color-coded success/failure indicators +- Timestamp and duration for each step +- Scrollable output with syntax highlighting +- Persistent history with pagination +- Load More button for large execution lists ### Security -- Role-based access control (RBAC) +- Authelia SSO integration for user authentication - API key authentication for workers -- Workflow approval requirements -- Audit logging for all actions +- User session management +- Admin-only operations (worker deletion, workflow management) +- Audit logging for all executions +### Performance +- Automatic cleanup of old executions (configurable retention) +- Pagination for large execution lists (50 at a time) +- Efficient WebSocket connection pooling +- Worker heartbeat monitoring +- Database connection pooling + +## Configuration + +### Environment Variables + +**Server (.env):** +```bash +PORT=8080 # Server port +SECRET_KEY= # Session secret +DB_HOST=10.10.10.50 # MariaDB host +DB_PORT=3306 # MariaDB port +DB_NAME=pulse # Database name +DB_USER=pulse_user # Database user +DB_PASSWORD= # Database password +WORKER_API_KEY= # Worker authentication key +EXECUTION_RETENTION_DAYS=30 # Auto-cleanup retention (default: 30) +``` + +**Worker (.env):** +```bash +WORKER_NAME=pulse-worker-01 # Unique worker name +PULSE_SERVER=http://10.10.10.65:8080 # Server HTTP URL +PULSE_WS=ws://10.10.10.65:8080 # Server WebSocket URL +WORKER_API_KEY= # Must match server key +HEARTBEAT_INTERVAL=30 # Heartbeat seconds (default: 30) +MAX_CONCURRENT_TASKS=5 # Max parallel tasks (default: 5) +``` + +## Database Schema + +PULSE uses MariaDB with the following tables: + +- **users**: User accounts from Authelia SSO +- **workers**: Worker node registry with metadata +- **workflows**: Workflow definitions (JSON) +- **executions**: Execution history with logs + +See [Claude.md](Claude.md) for complete schema details. 
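+
+To sanity-check the schema on a running install, you can query MariaDB directly with the same credentials configured in `.env` (a quick sketch; adjust the host and user to match your deployment):
+
+```bash
+# List the PULSE tables and show the column layout of the executions table
+mysql -h 10.10.10.50 -u pulse_user -p pulse -e "SHOW TABLES; DESCRIBE executions;"
+```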
+ +## Troubleshooting + +### Worker Not Connecting +```bash +# Check worker service status +systemctl status pulse-worker + +# Check worker logs +journalctl -u pulse-worker -n 50 -f + +# Verify API key matches server +grep WORKER_API_KEY /opt/pulse-worker/.env +``` + +### Commands Stuck in "Running" +- This was fixed in recent updates - restart the server: +```bash +systemctl restart pulse.service +``` + +### Clear All Executions +Use the database directly if needed: +```bash +mysql -h 10.10.10.50 -u pulse_user -p pulse +> DELETE FROM executions WHERE status IN ('completed', 'failed'); +``` + +## Development + +### Recent Updates + +**Phase 1-6 Improvements:** +- Formatted log display with color-coding +- Worker system metrics monitoring +- Command templates and history +- Re-run and download execution features +- Auto-cleanup and pagination +- Terminal aesthetic refinements +- Audio notifications and visual toasts + +See git history for detailed changelog. + +### Future Enhancements +- Full workflow system implementation +- Multi-worker command execution +- Scheduled/cron job support +- Execution search and filtering +- Dark/light theme toggle +- Mobile-responsive design +- REST API documentation +- Webhook integrations + +## License + +MIT License - See LICENSE file for details --- -**PULSE** - Orchestrating your infrastructure, one heartbeat at a time. \ No newline at end of file +**PULSE** - Orchestrating your infrastructure, one heartbeat at a time. ⚑ + +Built with retro terminal aesthetics πŸ–₯️ | Powered by WebSockets πŸ”Œ | Secured by Authelia πŸ” diff --git a/public/index.html b/public/index.html index 5783a07..ec120e5 100644 --- a/public/index.html +++ b/public/index.html @@ -826,10 +826,36 @@ - - + +
+ + +
+ +
+ + +
+ + @@ -919,14 +945,28 @@ try { const response = await fetch('/api/workers'); workers = await response.json(); - - // Update worker select in quick command + + // Update worker select in quick command (single mode) const select = document.getElementById('quickWorkerSelect'); if (select) { - select.innerHTML = workers.map(w => + select.innerHTML = workers.map(w => `` ).join(''); } + + // Update worker checkboxes (multi mode) + const checkboxList = document.getElementById('workerCheckboxList'); + if (checkboxList) { + checkboxList.innerHTML = workers.length === 0 ? + '
No workers available
' : + workers.map(w => ` + + `).join(''); + } // Dashboard view const dashHtml = workers.length === 0 ? @@ -1416,6 +1456,38 @@ localStorage.setItem('commandHistory', JSON.stringify(history)); } + function toggleWorkerSelection() { + const mode = document.querySelector('input[name="execMode"]:checked').value; + const singleMode = document.getElementById('singleWorkerMode'); + const multiMode = document.getElementById('multiWorkerMode'); + + if (mode === 'single') { + singleMode.style.display = 'block'; + multiMode.style.display = 'none'; + } else { + singleMode.style.display = 'none'; + multiMode.style.display = 'block'; + } + } + + function selectAllWorkers() { + document.querySelectorAll('input[name="workerCheckbox"]').forEach(cb => { + cb.checked = true; + }); + } + + function selectOnlineWorkers() { + document.querySelectorAll('input[name="workerCheckbox"]').forEach(cb => { + cb.checked = cb.getAttribute('data-status') === 'online'; + }); + } + + function deselectAllWorkers() { + document.querySelectorAll('input[name="workerCheckbox"]').forEach(cb => { + cb.checked = false; + }); + } + async function deleteWorker(workerId, name) { if (!confirm(`Delete worker: ${name}?`)) return; @@ -1495,51 +1567,149 @@ } async function executeQuickCommand() { - const workerId = document.getElementById('quickWorkerSelect').value; const command = document.getElementById('quickCommand').value; + const execMode = document.querySelector('input[name="execMode"]:checked').value; - if (!workerId || !command) { - alert('Please select a worker and enter a command'); + if (!command) { + alert('Please enter a command'); return; } - // Find worker name for history - const worker = workers.find(w => w.id === workerId); - const workerName = worker ? worker.name : 'Unknown'; - const resultDiv = document.getElementById('quickCommandResult'); - resultDiv.innerHTML = '
Executing command...
'; - try { - const response = await fetch(`/api/workers/${workerId}/command`, { - method: 'POST', - headers: { 'Content-Type': 'application/json' }, - body: JSON.stringify({ command }) - }); + if (execMode === 'single') { + // Single worker execution + const workerId = document.getElementById('quickWorkerSelect').value; - if (response.ok) { - const data = await response.json(); - - // Add to command history - addToCommandHistory(command, workerName); - - resultDiv.innerHTML = ` -
- βœ“ Command sent successfully! -
- Execution ID: ${data.execution_id} -
-
- Check the Executions tab to see the results -
-
- `; - } else { - resultDiv.innerHTML = '
Failed to execute command
'; + if (!workerId) { + alert('Please select a worker'); + return; + } + + const worker = workers.find(w => w.id === workerId); + const workerName = worker ? worker.name : 'Unknown'; + + resultDiv.innerHTML = '
Executing command...
'; + + try { + const response = await fetch(`/api/workers/${workerId}/command`, { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify({ command }) + }); + + if (response.ok) { + const data = await response.json(); + addToCommandHistory(command, workerName); + + resultDiv.innerHTML = ` +
+ βœ“ Command sent successfully! +
+ Execution ID: ${data.execution_id} +
+
+ Check the Executions tab to see the results +
+
+ `; + terminalBeep('success'); + } else { + resultDiv.innerHTML = '
Failed to execute command
'; + terminalBeep('error'); + } + } catch (error) { + console.error('Error executing command:', error); + resultDiv.innerHTML = '
Error: ' + error.message + '
'; + terminalBeep('error'); + } + } else { + // Multi-worker execution + const selectedCheckboxes = document.querySelectorAll('input[name="workerCheckbox"]:checked'); + const selectedWorkerIds = Array.from(selectedCheckboxes).map(cb => cb.value); + + if (selectedWorkerIds.length === 0) { + alert('Please select at least one worker'); + return; + } + + resultDiv.innerHTML = `
Executing command on ${selectedWorkerIds.length} worker(s)...
`; + + const results = []; + let successCount = 0; + let failCount = 0; + + for (const workerId of selectedWorkerIds) { + try { + const worker = workers.find(w => w.id === workerId); + const response = await fetch(`/api/workers/${workerId}/command`, { + method: 'POST', + headers: { 'Content-Type': 'application/json' }, + body: JSON.stringify({ command }) + }); + + if (response.ok) { + const data = await response.json(); + results.push({ + worker: worker.name, + success: true, + executionId: data.execution_id + }); + successCount++; + } else { + results.push({ + worker: worker.name, + success: false, + error: 'Failed to execute' + }); + failCount++; + } + } catch (error) { + const worker = workers.find(w => w.id === workerId); + results.push({ + worker: worker ? worker.name : workerId, + success: false, + error: error.message + }); + failCount++; + } + } + + // Add to history with multi-worker notation + addToCommandHistory(command, `${selectedWorkerIds.length} workers`); + + // Display results summary + resultDiv.innerHTML = ` +
+ Multi-Worker Execution Complete +
+ βœ“ Success: ${successCount} | + βœ— Failed: ${failCount} +
+
+ ${results.map(r => ` +
+ ${r.worker}: + ${r.success ? + `βœ“ Sent (ID: ${r.executionId.substring(0, 8)}...)` : + `βœ— ${r.error}` + } +
+ `).join('')} +
+
+ Check the Executions tab to see detailed results +
+
+ `; + + if (failCount === 0) { + terminalBeep('success'); + } else if (successCount > 0) { + terminalBeep('info'); + } else { + terminalBeep('error'); } - } catch (error) { - console.error('Error executing command:', error); - resultDiv.innerHTML = '
Error: ' + error.message + '
'; } }
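
For reference, each request the Quick Command tab issues (single- or multi-worker) is a plain HTTP call to the server's worker-command endpoint, as seen in the fetch calls above. A rough curl equivalent is sketched below; the worker ID is a placeholder, and any Authelia session cookie or API-key header your deployment requires is omitted:

```bash
# Sketch of the request sent for each selected worker.
# Replace <worker-id> with a real worker ID from the Workers tab.
curl -X POST "http://10.10.10.65:8080/api/workers/<worker-id>/command" \
  -H "Content-Type: application/json" \
  -d '{"command": "uname -a"}'
# A successful response is JSON containing an execution_id, which then
# appears in the Executions tab.
```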