Compare commits

...

30 Commits

SHA1 | Message | Date
e7707d7edb Add abort execution feature for stuck/running processes 2026-01-08 22:11:59 -05:00
ea7a2d82e6 Fix execution details endpoint - remove reference to deleted activeExecutions 2026-01-08 22:06:25 -05:00
8224f3b6a4 Add missing helper functions for workflow execution (addExecutionLog, updateExecutionStatus) 2026-01-08 22:03:00 -05:00
9b31b7619f Add WebSocket error handling with stack traces
Wrapped ws.onmessage in try-catch to capture full stack trace
when errors occur during message handling. This will help identify
where the 'Cannot read properties of undefined' error is coming from.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 23:33:07 -05:00
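A minimal sketch of the try-catch wrapper this commit describes, assuming a browser-side `ws` WebSocket and an application-level `handleMessage` dispatcher (both names illustrative):

```javascript
// Hypothetical names: ws is the page's WebSocket, handleMessage the app's dispatcher.
ws.onmessage = (event) => {
  try {
    const message = JSON.parse(event.data);
    handleMessage(message);
  } catch (err) {
    // err.stack shows exactly where "Cannot read properties of undefined" originates
    console.error('WebSocket message handling failed:', err.stack);
  }
};
```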
06eb2d2593 Add detailed logging to workflow creation endpoint
This will help diagnose the 'Cannot read properties of undefined' error
by logging each step of the workflow creation process.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 23:30:50 -05:00
aaeb59a8e2 Improve error handling in workflow creation and data loading
- Separated JSON validation from API call error handling
- Changed refreshData() to async with individual try-catch blocks
- Better error messages: "Invalid JSON" vs "Error creating workflow"
- Console.error logging for each data loading function
- Changed success alert to terminal notification
- This will help identify which specific function is failing

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 23:27:54 -05:00
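A sketch of the separated error handling described above; `notify()` and `createWorkflow()` are hypothetical stand-ins for the app's terminal-notification helper and workflow API call:

```javascript
// notify() and createWorkflow() are illustrative stand-ins, not the actual helpers.
async function submitWorkflow(workflowJsonInput) {
  let definition;
  try {
    definition = JSON.parse(workflowJsonInput); // JSON validation first, before any request
  } catch {
    return notify('error', 'Invalid JSON');
  }
  try {
    await createWorkflow(definition); // API failures are reported separately
    notify('success', 'Workflow created'); // terminal notification instead of alert()
  } catch (err) {
    notify('error', `Error creating workflow: ${err.message}`);
  }
}
```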
b85bd58c4b Fix: Remove duplicate old workflow execution code
Removed old/obsolete workflow execution system that was conflicting
with the new executeWorkflowSteps() engine:

Removed:
- activeExecutions Map (old tracking system)
- executeWorkflow() - old workflow executor
- executeNextStep() - old step processor
- executeCommandStep() - old command executor (duplicate)
- handleUserInput() - unimplemented prompt handler
- Duplicate app.post('/api/executions') endpoint
- app.post('/api/executions/:id/respond') endpoint

This was causing "Cannot read properties of undefined (reading 'target')"
error because the old code was being called instead of the new engine.

The new executeWorkflowSteps() engine is now the only workflow system.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 23:24:42 -05:00
018752813a Implement Complete Workflow Execution Engine
Added full workflow execution engine that actually runs workflow steps:

Server-Side (server.js):
- executeWorkflowSteps() - Main workflow orchestration function
- executeCommandStep() - Executes commands on target workers
- waitForCommandResult() - Polls for command completion
- Support for step types: execute, wait, prompt (prompt skipped for now)
- Sequential step execution with failure handling
- Worker targeting: "all" or specific worker IDs/names
- Automatic status updates (running -> completed/failed)
- Real-time WebSocket broadcasts for step progress
- Command result tracking with command_id for workflows
- Only updates status for non-workflow quick commands

Client-Side (index.html):
- Enhanced formatLogEntry() with workflow-specific log types
- step_started - Shows step number and name with amber color
- step_completed - Shows completion with green checkmark
- waiting - Displays wait duration
- no_workers - Error when no workers available
- worker_offline - Warning for offline workers
- workflow_error - Critical workflow errors
- Better visual feedback for workflow progress

Workflow Definition Format:
{
  "steps": [
    {
      "name": "Step Name",
      "type": "execute",
      "targets": ["all"] or ["worker-name"],
      "command": "your command here"
    },
    {
      "type": "wait",
      "duration": 5
    }
  ]
}

Features:
- Executes steps sequentially
- Stops on first failure
- Supports multiple workers per step
- Real-time progress updates
- Comprehensive logging
- Terminal-themed workflow logs

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 23:19:12 -05:00
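To make the orchestration flow concrete, a simplified sketch of a sequential engine of this shape; the helper names come from the commit messages, but their signatures and bodies are assumptions:

```javascript
// Sketch only: executeCommandStep, addExecutionLog, and updateExecutionStatus
// are named in the commits, but the signatures used here are assumed.
async function executeWorkflowSteps(executionId, workflow, workers) {
  const steps = workflow.steps || [];
  try {
    for (let i = 0; i < steps.length; i++) {
      const step = steps[i];
      await addExecutionLog(executionId, { type: 'step_started', step: i + 1, name: step.name });
      if (step.type === 'wait') {
        await new Promise((resolve) => setTimeout(resolve, step.duration * 1000));
      } else if (step.type === 'execute') {
        await executeCommandStep(executionId, step, workers); // throws on failure
      } // 'prompt' steps are skipped for now, per the commit message
      await addExecutionLog(executionId, { type: 'step_completed', step: i + 1 });
    }
    await updateExecutionStatus(executionId, 'completed');
  } catch (err) {
    // Stop on first failure and mark the whole execution failed
    await addExecutionLog(executionId, { type: 'workflow_error', error: err.message });
    await updateExecutionStatus(executionId, 'failed');
  }
}
```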
4511fac486 Phase 10: Command Scheduler
Added comprehensive command scheduling system:

Backend:
- New scheduled_commands database table
- Scheduler processor runs every minute
- Support for three schedule types: interval, hourly, daily
- calculateNextRun() function for intelligent scheduling
- API endpoints: GET, POST, PUT (toggle), DELETE
- Executions automatically created and tracked
- Enable/disable schedules without deleting

Frontend:
- New Scheduler tab in navigation
- Create Schedule modal with worker selection
- Dynamic schedule input based on type
- Schedule list showing status, next/last run times
- Enable/Disable toggle for each schedule
- Delete schedule functionality
- Terminal-themed scheduler UI
- Integration with existing worker and execution systems

Schedule Types:
- Interval: Every X minutes (e.g., 30 for every 30 min)
- Hourly: Every X hours (e.g., 2 for every 2 hours)
- Daily: At specific time (e.g., 03:00 for 3 AM daily)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 23:13:27 -05:00
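A plausible shape for `calculateNextRun()` across the three schedule types; the `type`/`value` field names are assumptions, not the actual schema:

```javascript
// Assumed schedule shape: { type: 'interval' | 'hourly' | 'daily', value: number | 'HH:MM' }
function calculateNextRun(schedule, from = new Date()) {
  const next = new Date(from);
  switch (schedule.type) {
    case 'interval': // every X minutes
      next.setMinutes(next.getMinutes() + Number(schedule.value));
      break;
    case 'hourly': // every X hours
      next.setHours(next.getHours() + Number(schedule.value));
      break;
    case 'daily': { // at a fixed HH:MM each day
      const [hh, mm] = String(schedule.value).split(':').map(Number);
      next.setHours(hh, mm, 0, 0);
      if (next <= from) next.setDate(next.getDate() + 1); // time already passed today
      break;
    }
  }
  return next;
}
```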
4e17fdbf8c Phase 9: Execution Diff View
Added powerful execution comparison and diff view:

- Compare Mode toggle button in executions tab
- Multi-select up to 5 executions for comparison
- Visual selection indicators with checkmarks
- Comparison modal with summary table (status, duration, timestamps)
- Side-by-side output view for all selected executions
- Line-by-line diff analysis for 2-execution comparisons
- Highlights identical vs. different lines
- Shows identical/different line counts
- Color-coded diff (green for exec 1, amber for exec 2)
- Perfect for comparing same command across workers
- Terminal-themed comparison UI

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 23:08:55 -05:00
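The two-execution diff could be as simple as a positional line comparison; a sketch under that assumption:

```javascript
// Positional comparison only: line i of output A vs. line i of output B.
function diffOutputs(outputA, outputB) {
  const a = outputA.split('\n');
  const b = outputB.split('\n');
  const rows = [];
  let identical = 0;
  let different = 0;
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const same = a[i] === b[i];
    same ? identical++ : different++;
    rows.push({ line: i + 1, left: a[i] ?? '', right: b[i] ?? '', same });
  }
  return { rows, identical, different }; // counts feed the identical/different summary
}
```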
5c41afed85 Phase 8: Execution Search & Filtering
Added comprehensive search and filtering for execution history:

- Search bar to filter by command text, execution ID, or workflow name
- Status filter dropdown (All, Running, Completed, Failed, Waiting)
- Real-time client-side filtering as user types
- Filter statistics showing X of Y executions
- Clear Filters button to reset all filters
- Extracts command text from logs for quick command searches
- Maintains all executions in memory for instant filtering
- Terminal-themed filter UI matching existing aesthetic

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 23:06:43 -05:00
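A sketch of the client-side filter; the in-memory cache and field names (`id`, `workflow_name`, `command`) are assumptions based on the commit description:

```javascript
// allExecutions is the in-memory cache the commit mentions; field names are assumed.
function filterExecutions(allExecutions, query, status) {
  const q = query.trim().toLowerCase();
  return allExecutions.filter((exec) => {
    if (status !== 'all' && exec.status !== status) return false;
    if (!q) return true;
    const haystack = [exec.id, exec.workflow_name, exec.command]
      .filter(Boolean)
      .join(' ')
      .toLowerCase();
    return haystack.includes(q); // matches command text, execution ID, or workflow name
  });
}
```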
4baecc54d3 Phase 7: Multi-Worker Command Execution
Added ability to execute commands on multiple workers simultaneously:

- Added execution mode selector (Single/Multiple Workers)
- Multi-worker mode with checkbox list for worker selection
- Helper buttons: Select All, Online Only, Clear All
- Sequential execution across selected workers
- Results summary showing success/fail count per worker
- Updated command history to track multi-worker executions
- Terminal beep feedback based on overall success/failure
- Maintained backward compatibility with single worker mode

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 23:03:45 -05:00
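Sequential fan-out with a per-worker tally might look like the sketch below; the `/api/commands` endpoint and payload shape are assumptions:

```javascript
// Endpoint and payload are illustrative; the commit only specifies sequential execution.
async function executeOnWorkers(command, workerIds) {
  const results = { success: 0, failed: 0 };
  for (const workerId of workerIds) { // one worker at a time, as described
    try {
      const res = await fetch('/api/commands', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ worker_id: workerId, command }),
      });
      res.ok ? results.success++ : results.failed++;
    } catch {
      results.failed++;
    }
  }
  return results; // drives the success/fail summary and the terminal beep tone
}
```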
661c83a578 Fix clearCompletedExecutions for new pagination API format
Changes:
- Handle new pagination response format (data.executions vs data)
- Request up to 1000 executions to ensure all are checked
- Track successful deletions count
- Use terminal notification instead of alert
- Better error handling for individual delete failures

Fixes regression from Phase 5 pagination changes.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 22:55:13 -05:00
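The backward-compatible response handling amounts to one shape check; a sketch:

```javascript
// New pagination format returns { executions, total, ... }; the old format was a bare array.
async function fetchAllExecutions() {
  const res = await fetch('/api/executions?limit=1000'); // up to 1000, per the commit
  const data = await res.json();
  return Array.isArray(data) ? data : data.executions;
}
```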
c6e3e5704e Phase 6: Terminal aesthetic refinements and notifications
Changes:
- Added blinking terminal cursor animation
- Smooth hover effects for execution/worker/workflow items
- Hover animation: background highlight + border expand + slide
- Loading pulse animation for loading states
- Slide-in animation for log entries
- Terminal beep sound using Web Audio API (different tones for success/error)
- Real-time terminal notifications for command completion
- Toast-style notifications with green glow effects
- Auto-dismiss after 3 seconds with fade-out
- Visual and audio feedback for user actions

Sound features:
- 800Hz tone for success (higher pitch)
- 200Hz tone for errors (lower pitch)
- 440Hz tone for info (standard A note)
- 100ms duration, exponential fade-out
- Graceful fallback if Web Audio API not supported

Notification features:
- Fixed position top-right
- Terminal-themed styling with glow
- Color-coded: green for success, red for errors
- Icons: ✓ success, ✗ error, ℹ info
- Smooth animations (slide-in, fade-out)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 22:52:51 -05:00
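The beep parameters above map directly onto the Web Audio API; a minimal sketch:

```javascript
// 800 Hz success, 200 Hz error, 440 Hz info; 100 ms with an exponential fade-out.
function terminalBeep(kind = 'info') {
  const AudioCtx = window.AudioContext || window.webkitAudioContext;
  if (!AudioCtx) return; // graceful fallback when Web Audio is unsupported
  const ctx = new AudioCtx();
  const osc = ctx.createOscillator();
  const gain = ctx.createGain();
  osc.frequency.value = { success: 800, error: 200, info: 440 }[kind] ?? 440;
  gain.gain.setValueAtTime(0.3, ctx.currentTime);
  gain.gain.exponentialRampToValueAtTime(0.001, ctx.currentTime + 0.1);
  osc.connect(gain).connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + 0.1);
}
```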
9f972182b2 Phase 5: Auto-cleanup and pagination for executions
Changes:
Server-side:
- Added automatic cleanup of old executions (runs daily)
- Configurable retention period via EXECUTION_RETENTION_DAYS env var (default: 30 days)
- Cleanup runs on server startup and every 24 hours
- Only cleans completed/failed executions, keeps running ones
- Added pagination support to /api/executions endpoint
- Returns total count, limit, offset, and hasMore flag

Client-side:
- Implemented "Load More" button for execution pagination
- Loads 50 executions at a time
- Appends additional executions when "Load More" clicked
- Shows total execution count info
- Backward compatible with old API format

Benefits:
- Automatic database maintenance
- Prevents execution table from growing indefinitely
- Better performance with large execution histories
- User can browse all executions via pagination
- Configurable retention policy per deployment

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 22:50:39 -05:00
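A sketch of the retention job under these constraints; the `started_at` column name and the mysql2 pool setup are assumptions:

```javascript
const mysql = require('mysql2/promise');
const db = mysql.createPool({ host: process.env.DB_HOST, user: process.env.DB_USER,
  password: process.env.DB_PASSWORD, database: process.env.DB_NAME });

const RETENTION_DAYS = parseInt(process.env.EXECUTION_RETENTION_DAYS || '30', 10);

async function cleanupOldExecutions() {
  // Only completed/failed rows are removed; running executions are kept.
  const [result] = await db.query(
    `DELETE FROM executions
     WHERE status IN ('completed', 'failed')
       AND started_at < DATE_SUB(NOW(), INTERVAL ? DAY)`, // started_at is a guessed column name
    [RETENTION_DAYS]
  );
  console.log(`[cleanup] removed ${result.affectedRows} old executions`);
}

cleanupOldExecutions();                                 // on startup
setInterval(cleanupOldExecutions, 24 * 60 * 60 * 1000); // and every 24 hours
```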
fff50f19da Phase 4: Execution detail enhancements with re-run and download
Changes:
- Added "Re-run Command" button to execution details modal
- Added "Download Logs" button to export execution data as JSON
- Re-run automatically switches to Quick Command tab and pre-fills form
- Download includes all execution metadata and logs
- Buttons only show for applicable execution types
- Terminal-themed button styling

Features:
- Re-run: Quickly repeat a previous command on same worker
- Download: Export execution logs for auditing/debugging
- JSON format includes: execution_id, status, timestamps, logs
- Filename includes execution ID and date for easy organization

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 22:49:20 -05:00
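The JSON export can be done entirely client-side with a Blob; a sketch, with the filename pattern following the commit description:

```javascript
// execution is the object shown in the details modal; execution_id per the commit.
function downloadExecutionLogs(execution) {
  const blob = new Blob([JSON.stringify(execution, null, 2)], { type: 'application/json' });
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = `execution-${execution.execution_id}-${new Date().toISOString().slice(0, 10)}.json`;
  a.click();
  URL.revokeObjectURL(url); // release the object URL once the download starts
}
```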
8152a827e6 Phase 3: Quick command enhancements with templates and history
Changes:
- Added command templates modal with 12 common system commands
- Added command history tracking (stored in localStorage)
- History saves last 50 commands with timestamp and worker name
- Template categories: system info, disk/memory, network, Docker, logs
- Click templates to auto-fill command field
- Click history items to reuse previous commands
- Terminal-themed modals with green/amber styling
- History persists across browser sessions

Templates included:
- System: uname, uptime, CPU info, processes
- Resources: df -h, free -h, memory usage
- Network: ip addr, active connections
- Docker: container list
- Logs: syslog tail, who is logged in, last logins

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 22:45:40 -05:00
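The localStorage-backed history reduces to a capped array; a sketch with an assumed storage key:

```javascript
// 'pulse_command_history' is an assumed key; the 50-entry cap follows the commit.
function saveCommandToHistory(command, workerName) {
  const history = JSON.parse(localStorage.getItem('pulse_command_history') || '[]');
  history.unshift({ command, workerName, timestamp: Date.now() }); // newest first
  localStorage.setItem('pulse_command_history', JSON.stringify(history.slice(0, 50)));
}
```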
bc3524e163 Phase 2: Enhanced worker status display with metadata
Changes:
- Show worker system metrics in dashboard and worker list
- Display CPU cores, memory usage, load average, uptime
- Added formatBytes() to display memory in human-readable format
- Added formatUptime() to show uptime as days/hours/minutes
- Added getTimeAgo() to show relative last-seen time
- Improved worker list with detailed metadata panel
- Show active tasks vs max concurrent tasks
- Terminal-themed styling for metadata display
- Amber labels for metadata fields

Benefits:
- See worker health at a glance
- Monitor resource usage (CPU, RAM, load)
- Track worker activity (active tasks)
- Better operational visibility

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 22:43:13 -05:00
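Plausible implementations of the two formatting helpers named above (unit boundaries and rounding are illustrative):

```javascript
function formatBytes(bytes) {
  const units = ['B', 'KB', 'MB', 'GB', 'TB'];
  let i = 0;
  while (bytes >= 1024 && i < units.length - 1) { bytes /= 1024; i++; }
  return `${bytes.toFixed(1)} ${units[i]}`; // e.g. 8589934592 -> "8.0 GB"
}

function formatUptime(seconds) {
  const d = Math.floor(seconds / 86400);
  const h = Math.floor((seconds % 86400) / 3600);
  const m = Math.floor((seconds % 3600) / 60);
  return `${d}d ${h}h ${m}m`; // e.g. 93784 -> "1d 2h 3m"
}
```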
f8ec651e73 Phase 1: Improve log display formatting
Changes:
- Added formatLogEntry() function to parse and format log entries
- Replaced raw JSON display with readable formatted logs
- Added specific formatting for command_sent and command_result logs
- Show timestamp, status, duration, stdout/stderr in organized layout
- Color-coded success (green) and failure (red) states
- Added scrollable output sections with max-height
- Syntax highlighting for command code blocks
- Terminal-themed styling with green/amber colors

Benefits:
- Much easier to read execution logs
- Clear visual distinction between sent/result logs
- Professional terminal aesthetic maintained
- Better UX for debugging command execution

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 22:41:29 -05:00
7656f4a151 Add execution cleanup functionality
Changes:
- Added DELETE /api/executions/:id endpoint
- Added "Clear Completed" button to Executions tab
- Deletes all completed and failed executions
- Broadcasts execution_deleted event to update all clients
- Shows count of deleted executions

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 22:36:51 -05:00
e13fe9d22f Fix worker ID mapping - use database ID for command routing
Problem:
- Workers generate random UUID on startup (runtime ID)
- Database stores workers with persistent IDs (database ID)
- UI sends commands using database ID
- Server couldn't find worker connection (stored by runtime ID)
- Result: 400 Bad Request "Worker not connected"

Solution:
- When worker connects, look up database ID by worker name
- Store WebSocket connection in Map using BOTH IDs:
  * Runtime ID (from worker_connect message)
  * Database ID (from database lookup by name)
- Commands from UI use database ID → finds correct WebSocket
- Cleanup both IDs when worker disconnects

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 22:29:23 -05:00
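The dual-key fix amounts to registering the same socket under both identifiers; a sketch, where `db` is an assumed mysql2/promise pool and the `workers` table lookup is illustrative:

```javascript
// db: an assumed mysql2/promise pool (see the earlier cleanup sketch).
const workers = new Map(); // workerId (runtime UUID or database ID) -> WebSocket

async function registerWorker(ws, runtimeId, name) {
  ws.runtimeId = runtimeId;
  workers.set(runtimeId, ws); // key 1: runtime UUID from the worker_connect message
  const [rows] = await db.query('SELECT id FROM workers WHERE name = ?', [name]);
  if (rows.length > 0) {
    ws.dbId = rows[0].id;
    workers.set(ws.dbId, ws); // key 2: persistent database ID, which the UI sends
  }
}

function unregisterWorker(ws) {
  workers.delete(ws.runtimeId); // clean up both keys on disconnect
  if (ws.dbId !== undefined) workers.delete(ws.dbId);
}
```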
8bda9672d6 Fix worker command execution and execution status updates
Changes:
- Removed duplicate /api/executions/:id endpoint that didn't parse logs
- Added workers Map to track worker_id -> WebSocket connection
- Store worker connections when they send worker_connect message
- Send commands to specific worker instead of broadcasting to all clients
- Clean up workers Map when worker disconnects
- Update execution status to completed/failed when command results arrive
- Add proper error handling when worker is not connected

Fixes:
- execution.logs.forEach is not a function (logs now properly parsed)
- Commands stuck in "running" status (now update to completed/failed)
- Commands not reaching workers (now sent to specific worker WebSocket)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 22:23:02 -05:00
4974730dc8 Remove database migrations after direct schema fixes
Changes:
- Removed all migration code from server.js
- Database schema fixed directly via MySQL:
  * Dropped users.role column (SSO only)
  * Dropped users.password column (SSO only)
  * Added executions.started_by column
  * Added workflows.created_by column
  * All tables now match expected schema
- Server startup will be faster without migrations

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 22:11:07 -05:00
df581e85a8 Remove password column from users table
Changes:
- Drop password column from users table (SSO authentication only)
- PULSE uses Authelia SSO, not password-based authentication
- Fixes 500 error: Field 'password' doesn't have a default value

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 22:01:58 -05:00
e4574627f1 Add last_login column to users table migration
Changes:
- Add last_login TIMESTAMP column to existing users table
- Complete the users table migration with all required columns
- Fixes 500 error: Unknown column 'last_login' in 'INSERT INTO'

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 20:33:04 -05:00
1d994dc8d6 Add migration to update users table schema
Changes:
- Add display_name, email, and groups columns to existing users table
- Handle MariaDB lack of IF NOT EXISTS in ALTER TABLE
- Gracefully skip columns that already exist
- Fixes 500 error when authenticating users

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 20:28:47 -05:00
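The graceful-skip pattern catches the duplicate-column error instead of relying on `IF NOT EXISTS`; a sketch, again assuming a mysql2/promise pool named `db`:

```javascript
// MySQL/MariaDB error 1060 (ER_DUP_FIELDNAME) means the column already exists.
async function addColumnIfMissing(table, columnDef) {
  try {
    await db.query(`ALTER TABLE \`${table}\` ADD COLUMN ${columnDef}`);
  } catch (err) {
    if (err.code !== 'ER_DUP_FIELDNAME') throw err;
  }
}

async function migrateUsersTable() {
  await addColumnIfMissing('users', 'display_name VARCHAR(255)');
  await addColumnIfMissing('users', 'email VARCHAR(255)');
  await addColumnIfMissing('users', '`groups` TEXT');
}
```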
02ed71e3e0 Allow NULL workflow_id in executions table for quick commands
Changes:
- Modified executions table schema to allow NULL workflow_id
- Removed foreign key constraint that prevented NULL values
- Added migration to update existing table structure
- Quick commands can now be stored without a workflow reference

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 20:27:02 -05:00
cff058818e Fix quick command executions not appearing in execution tab
Changes:
- Create execution record in database when quick command is sent
- Store initial log entry with command details
- Broadcast execution_started event to update UI
- Display quick commands as "[Quick Command]" in execution list
- Fix worker communication to properly track all executions

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 20:24:11 -05:00
05c304f2ed Updated websocket handler 2026-01-07 20:20:18 -05:00
20cff59cee updates aesthetic 2026-01-07 20:12:16 -05:00
4 changed files with 3155 additions and 674 deletions

.gitignore (vendored, +3 lines)

@@ -30,3 +30,6 @@ Thumbs.db
 *.swp
 *.swo
 *~
+
+# Claude
+Claude.md

README.md (475 changes; updated file shown below)

@@ -1,240 +1,375 @@

# PULSE - Pipelined Unified Logic & Server Engine

A distributed workflow orchestration platform for managing and executing complex multi-step operations across server clusters through a retro terminal-themed web interface.

## Overview

PULSE is a centralized workflow execution system designed to orchestrate operations across distributed infrastructure. It provides a powerful web-based interface with a vintage CRT terminal aesthetic for defining, managing, and executing workflows that can span multiple servers, require human interaction, and perform complex automation tasks at scale.

### Key Features

- **🎨 Retro Terminal Interface**: Phosphor green CRT-style interface with scanlines, glow effects, and ASCII art
- **⚡ Quick Command Execution**: Instantly execute commands on any worker with built-in templates and command history
- **📊 Real-Time Worker Monitoring**: Live system metrics including CPU, memory, load average, and active tasks
- **🔄 Interactive Workflow Management**: Define and execute multi-step workflows with conditional logic and user prompts
- **🌐 Distributed Execution**: Run commands across multiple worker nodes simultaneously via WebSocket
- **📈 Execution Tracking**: Comprehensive logging with formatted output, re-run capabilities, and JSON export
- **🔐 SSO Authentication**: Seamless integration with Authelia for enterprise authentication
- **🧹 Auto-Cleanup**: Automatic removal of old executions with configurable retention policies
- **🔔 Terminal Notifications**: Audio beeps and visual toasts for command completion events
## Architecture

PULSE consists of two core components:

### PULSE Server

**Location:** `10.10.10.65` (LXC Container ID: 122)
**Directory:** `/opt/pulse-server`

The central orchestration hub that:

- Hosts the retro terminal web interface
- Manages workflow definitions and execution state
- Coordinates task distribution to worker nodes via WebSocket
- Handles user interactions through Authelia SSO
- Provides real-time status updates and logging
- Stores all data in MariaDB database

**Technology Stack:**

- Node.js 20.x
- Express.js (web framework)
- WebSocket (ws package) for real-time bidirectional communication
- MySQL2 (MariaDB driver)
- Authelia SSO integration

### PULSE Worker

**Example:** `10.10.10.151` (LXC Container ID: 153, hostname: pulse-worker-01)
**Directory:** `/opt/pulse-worker`

Lightweight execution agents that:

- Connect to PULSE server via WebSocket with heartbeat monitoring
- Execute shell commands and report results in real-time
- Provide system metrics (CPU, memory, load, uptime)
- Support concurrent task execution with configurable limits
- Automatically reconnect on connection loss

**Technology Stack:**

- Node.js 20.x
- WebSocket client
- Child process execution
- System metrics collection
```
┌─────────────────────────────────┐
│   PULSE Server (10.10.10.65)    │
│  Terminal Web Interface + API   │
│  ┌───────────┐  ┌──────────┐    │
│  │  MariaDB  │  │ Authelia │    │
│  │ Database  │  │   SSO    │    │
│  └───────────┘  └──────────┘    │
└────────────────┬────────────────┘
                 │ WebSocket
     ┌───────────┼───────────┐
     │           │           │
┌────▼───────┐ ┌─▼──────┐ ┌──▼─────┐
│  Worker 1  │ │Worker 2│ │Worker N│
│10.10.10.151│ │  ...   │ │  ...   │
└────────────┘ └────────┘ └────────┘
  LXC Containers in Proxmox with Ceph
```
## Installation

### Prerequisites

- **Node.js 20.x** or higher
- **MariaDB 10.x** or higher
- **Authelia** configured for SSO (optional but recommended)
- **Network Connectivity** between server and workers

### PULSE Server Setup

```bash
# Clone repository
cd /opt
git clone <your-repo-url> pulse-server
cd pulse-server

# Install dependencies
npm install

# Create .env file with configuration
cat > .env << EOF
# Server Configuration
PORT=8080
SECRET_KEY=your-secret-key-here

# MariaDB Configuration
DB_HOST=10.10.10.50
DB_PORT=3306
DB_NAME=pulse
DB_USER=pulse_user
DB_PASSWORD=your-db-password

# Worker API Key (for worker authentication)
WORKER_API_KEY=your-worker-api-key

# Auto-cleanup configuration (optional)
EXECUTION_RETENTION_DAYS=30
EOF

# Create systemd service
cat > /etc/systemd/system/pulse.service << EOF
[Unit]
Description=PULSE Workflow Orchestration Server
After=network.target

[Service]
Type=simple
User=root
WorkingDirectory=/opt/pulse-server
ExecStart=/usr/bin/node server.js
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

# Start service
systemctl daemon-reload
systemctl enable pulse.service
systemctl start pulse.service
```
### PULSE Worker Setup

```bash
# On each worker node
cd /opt
git clone <your-repo-url> pulse-worker
cd pulse-worker

# Install dependencies
npm install

# Create .env file
cat > .env << EOF
# Worker Configuration
WORKER_NAME=pulse-worker-01
PULSE_SERVER=http://10.10.10.65:8080
PULSE_WS=ws://10.10.10.65:8080
WORKER_API_KEY=your-worker-api-key

# Performance Settings
HEARTBEAT_INTERVAL=30
MAX_CONCURRENT_TASKS=5
EOF

# Create systemd service
cat > /etc/systemd/system/pulse-worker.service << EOF
[Unit]
Description=PULSE Worker Node
After=network.target

[Service]
Type=simple
User=root
WorkingDirectory=/opt/pulse-worker
ExecStart=/usr/bin/node worker.js
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

# Start service
systemctl daemon-reload
systemctl enable pulse-worker.service
systemctl start pulse-worker.service
```
## Usage

### Quick Command Execution

1. Access PULSE at `http://your-server:8080`
2. Navigate to **⚡ Quick Command** tab
3. Select a worker from the dropdown
4. Use **Templates** for pre-built commands or **History** for recent commands
5. Enter your command and click **Execute**
6. View results in the **Executions** tab

**Built-in Command Templates:**

- System Info: `uname -a`
- Disk Usage: `df -h`
- Memory Usage: `free -h`
- CPU Info: `lscpu`
- Running Processes: `ps aux --sort=-%mem | head -20`
- Network Interfaces: `ip addr show`
- Docker Containers: `docker ps -a`
- System Logs: `tail -n 50 /var/log/syslog`

### Worker Monitoring

The **Workers** tab displays real-time metrics for each worker:

- System information (OS, architecture, CPU cores)
- Memory usage (used/total with percentage)
- Load averages (1m, 5m, 15m)
- System uptime
- Active tasks vs. maximum concurrent capacity

### Execution Management

- **View Details**: Click any execution to see formatted logs with timestamps, status, and output
- **Re-run Command**: Click "Re-run" button in execution details to repeat a command
- **Download Logs**: Export execution data as JSON for auditing
- **Clear Completed**: Bulk delete finished executions
- **Auto-Cleanup**: Executions older than 30 days are automatically removed

### Workflow Creation (Future Feature)

1. Navigate to **Workflows** → **Create New**
2. Define workflow steps using JSON syntax
3. Specify target workers
4. Add interactive prompts where needed
5. Save and execute
## Features in Detail

### Terminal Aesthetic

- Phosphor green (#00ff41) on black (#0a0a0a) color scheme
- CRT scanline animation effect
- Text glow and shadow effects
- ASCII box-drawing characters for borders
- Boot sequence animation on first load
- Hover effects with smooth transitions

### Real-Time Communication

- WebSocket-based bidirectional communication
- Instant command result notifications
- Live worker status updates
- Terminal beep sounds for events
- Toast notifications with visual feedback

### Execution Tracking

- Formatted log display (not raw JSON)
- Color-coded success/failure indicators
- Timestamp and duration for each step
- Scrollable output with syntax highlighting
- Persistent history with pagination
- Load More button for large execution lists

### Security

- Authelia SSO integration for user authentication
- API key authentication for workers
- User session management
- Admin-only operations (worker deletion, workflow management)
- Audit logging for all executions

### Performance

- Automatic cleanup of old executions (configurable retention)
- Pagination for large execution lists (50 at a time)
- Efficient WebSocket connection pooling
- Worker heartbeat monitoring
- Database connection pooling

## Configuration

### Environment Variables

**Server (.env):**

```bash
PORT=8080                    # Server port
SECRET_KEY=<random-string>   # Session secret
DB_HOST=10.10.10.50          # MariaDB host
DB_PORT=3306                 # MariaDB port
DB_NAME=pulse                # Database name
DB_USER=pulse_user           # Database user
DB_PASSWORD=<password>       # Database password
WORKER_API_KEY=<api-key>     # Worker authentication key
EXECUTION_RETENTION_DAYS=30  # Auto-cleanup retention (default: 30)
```

**Worker (.env):**

```bash
WORKER_NAME=pulse-worker-01           # Unique worker name
PULSE_SERVER=http://10.10.10.65:8080  # Server HTTP URL
PULSE_WS=ws://10.10.10.65:8080        # Server WebSocket URL
WORKER_API_KEY=<api-key>              # Must match server key
HEARTBEAT_INTERVAL=30                 # Heartbeat seconds (default: 30)
MAX_CONCURRENT_TASKS=5                # Max parallel tasks (default: 5)
```

## Database Schema

PULSE uses MariaDB with the following tables:

- **users**: User accounts from Authelia SSO
- **workers**: Worker node registry with metadata
- **workflows**: Workflow definitions (JSON)
- **executions**: Execution history with logs

See [Claude.md](Claude.md) for complete schema details.

## Troubleshooting

### Worker Not Connecting

```bash
# Check worker service status
systemctl status pulse-worker

# Check worker logs
journalctl -u pulse-worker -n 50 -f

# Verify API key matches server
grep WORKER_API_KEY /opt/pulse-worker/.env
```

### Commands Stuck in "Running"

This was fixed in recent updates; restart the server:

```bash
systemctl restart pulse.service
```

### Clear All Executions

Use the database directly if needed:

```bash
mysql -h 10.10.10.50 -u pulse_user -p pulse
> DELETE FROM executions WHERE status IN ('completed', 'failed');
```

## Development

### Recent Updates

**Phase 1-6 Improvements:**

- Formatted log display with color-coding
- Worker system metrics monitoring
- Command templates and history
- Re-run and download execution features
- Auto-cleanup and pagination
- Terminal aesthetic refinements
- Audio notifications and visual toasts

See git history for detailed changelog.

### Future Enhancements

- Full workflow system implementation
- Multi-worker command execution
- Scheduled/cron job support
- Execution search and filtering
- Dark/light theme toggle
- Mobile-responsive design
- REST API documentation
- Webhook integrations

## License

MIT License - See LICENSE file for details

---

**PULSE** - Orchestrating your infrastructure, one heartbeat at a time.

Built with retro terminal aesthetics 🖥️ | Powered by WebSockets 🔌 | Secured by Authelia 🔐

File diff suppressed because it is too large

server.js (1025 changes)

File diff suppressed because it is too large