Compare commits


12 Commits

Author SHA1 Message Date
1f5c84f327 Add execution cleanup functionality
Changes:
- Added DELETE /api/executions/:id endpoint
- Added "Clear Completed" button to Executions tab
- Deletes all completed and failed executions
- Broadcasts execution_deleted event to update all clients
- Shows count of deleted executions
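The cleanup described above amounts to filtering out terminal-state rows and reporting the count. A minimal in-memory sketch (the real endpoint is a DELETE route backed by MariaDB; `clearCompleted` is an illustrative name, not the actual server.js function):

```javascript
// Sketch: remove every execution in a terminal state ("completed" or
// "failed") and report how many were deleted, mirroring the count shown
// in the UI toast.
function clearCompleted(executions) {
  const keep = executions.filter(
    (e) => e.status !== 'completed' && e.status !== 'failed'
  );
  return { executions: keep, deletedCount: executions.length - keep.length };
}
```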

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 22:36:51 -05:00
e03f8d6287 Fix worker ID mapping - use database ID for command routing
Problem:
- Workers generate random UUID on startup (runtime ID)
- Database stores workers with persistent IDs (database ID)
- UI sends commands using database ID
- Server couldn't find worker connection (stored by runtime ID)
- Result: 400 Bad Request "Worker not connected"

Solution:
- When worker connects, look up database ID by worker name
- Store WebSocket connection in Map using BOTH IDs:
  * Runtime ID (from worker_connect message)
  * Database ID (from database lookup by name)
- Commands from UI use database ID → finds correct WebSocket
- Cleanup both IDs when worker disconnects
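The dual-key bookkeeping described in the solution can be sketched as follows (a minimal sketch; the function names and the shape of the socket object are illustrative, not the actual server.js code):

```javascript
// Worker connections keyed under BOTH identifiers, so a lookup by either
// the runtime UUID or the persistent database ID finds the same socket.
const workers = new Map();

// On worker_connect: register under the runtime ID sent by the worker
// and the database ID resolved by looking the worker up by name.
function registerWorker(runtimeId, databaseId, socket) {
  workers.set(runtimeId, socket);
  workers.set(databaseId, socket);
}

// Commands from the UI carry the database ID; this now finds the socket.
function findWorker(id) {
  return workers.get(id) || null;
}

// On disconnect: clean up both keys.
function unregisterWorker(runtimeId, databaseId) {
  workers.delete(runtimeId);
  workers.delete(databaseId);
}
```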

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 22:29:23 -05:00
2097b73404 Fix worker command execution and execution status updates
Changes:
- Removed duplicate /api/executions/:id endpoint that didn't parse logs
- Added workers Map to track worker_id -> WebSocket connection
- Store worker connections when they send worker_connect message
- Send commands to specific worker instead of broadcasting to all clients
- Clean up workers Map when worker disconnects
- Update execution status to completed/failed when command results arrive
- Add proper error handling when worker is not connected

Fixes:
- execution.logs.forEach is not a function (logs now properly parsed)
- Commands stuck in "running" status (now update to completed/failed)
- Commands not reaching workers (now sent to specific worker WebSocket)
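The status-update path in these fixes can be sketched as follows (illustrative names and an in-memory store, not the actual server.js implementation):

```javascript
// Executions start in "running"; logs are kept as a real array so
// execution.logs.forEach works when the UI renders them.
const executions = new Map();

function startExecution(id, command) {
  executions.set(id, { status: 'running', logs: [{ message: command }] });
}

// When the worker's command result arrives, append the output and flip
// the execution to a terminal status instead of leaving it "running".
function handleCommandResult(id, result) {
  const execution = executions.get(id);
  if (!execution) return;
  execution.logs.push({ message: result.output });
  execution.status = result.exitCode === 0 ? 'completed' : 'failed';
}
```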

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 22:23:02 -05:00
6d15e4d240 Remove database migrations after direct schema fixes
Changes:
- Removed all migration code from server.js
- Database schema fixed directly via MySQL:
  * Dropped users.role column (SSO only)
  * Dropped users.password column (SSO only)
  * Added executions.started_by column
  * Added workflows.created_by column
  * All tables now match expected schema
- Server startup will be faster without migrations

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 22:11:07 -05:00
7896b40d91 Remove password column from users table
Changes:
- Drop password column from users table (SSO authentication only)
- PULSE uses Authelia SSO, not password-based authentication
- Fixes 500 error: Field 'password' doesn't have a default value

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 22:01:58 -05:00
e2dc371bfe Add last_login column to users table migration
Changes:
- Add last_login TIMESTAMP column to existing users table
- Complete the users table migration with all required columns
- Fixes 500 error: Unknown column 'last_login' in 'INSERT INTO'

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 20:33:04 -05:00
df0184facf Add migration to update users table schema
Changes:
- Add display_name, email, and groups columns to existing users table
- Handle MariaDB lack of IF NOT EXISTS in ALTER TABLE
- Gracefully skip columns that already exist
- Fixes 500 error when authenticating users
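The "gracefully skip" pattern can be sketched as: attempt the ADD COLUMN and swallow only the duplicate-column error (errno 1060, ER_DUP_FIELDNAME in MySQL/MariaDB). The `query` parameter stands in for a mysql2 connection's query method; the helper name is illustrative:

```javascript
// Add a column, treating "column already exists" as success-with-skip.
async function addColumnIfMissing(query, table, columnDef) {
  try {
    await query(`ALTER TABLE \`${table}\` ADD COLUMN ${columnDef}`);
    return true; // column was added
  } catch (err) {
    if (err.errno === 1060) return false; // ER_DUP_FIELDNAME: already there
    throw err; // any other error is a real failure
  }
}
```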

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 20:28:47 -05:00
a8be111e04 Allow NULL workflow_id in executions table for quick commands
Changes:
- Modified executions table schema to allow NULL workflow_id
- Removed foreign key constraint that prevented NULL values
- Added migration to update existing table structure
- Quick commands can now be stored without a workflow reference

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 20:27:02 -05:00
b3806545bd Fix quick command executions not appearing in execution tab
Changes:
- Create execution record in database when quick command is sent
- Store initial log entry with command details
- Broadcast execution_started event to update UI
- Display quick commands as "[Quick Command]" in execution list
- Fix worker communication to properly track all executions
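How a quick command ends up in the executions list can be sketched as follows (illustrative helper names; the real record is an INSERT into the executions table):

```javascript
// Build an execution record for a quick command: there is no workflow
// reference, so workflow_id stays null (permitted by the relaxed schema).
function makeQuickCommandExecution(workerId, command) {
  return {
    workflow_id: null,
    worker_id: workerId,
    status: 'running',
    logs: [{ message: `Executing: ${command}` }],
  };
}

// The executions list labels workflow-less records as "[Quick Command]".
function executionLabel(execution) {
  return execution.workflow_id === null
    ? '[Quick Command]'
    : `Workflow #${execution.workflow_id}`;
}
```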

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-07 20:24:11 -05:00
2767087e27 Updated websocket handler 2026-01-07 20:20:18 -05:00
a1cf8ac90b updates aesthetic 2026-01-07 20:12:16 -05:00
9e842624e1 Claude md file 2026-01-07 19:57:16 -05:00
8 changed files with 2358 additions and 4110 deletions

1372
Claude.md Normal file

File diff suppressed because it is too large.

509
README.md

@@ -1,415 +1,240 @@
# PULSE - Pipelined Unified Logic & Server Engine
A distributed workflow orchestration platform for managing and executing complex multi-step operations across server clusters through a retro terminal-themed web interface.
> **Security Notice:** This repository is hosted on Gitea and is version-controlled. **Never commit secrets, credentials, passwords, API keys, or any sensitive information to this repo.** All sensitive configuration belongs exclusively in `.env` files which are listed in `.gitignore` and must never be committed. This includes database passwords, worker API keys, webhook secrets, and internal IP details.
**Design System**: [web_template](https://code.lotusguild.org/LotusGuild/web_template) — shared CSS, JS, and layout patterns for all LotusGuild apps
## Styling & Layout
PULSE uses the **LotusGuild Terminal Design System**. For all styling, component, and layout documentation see:
- [`web_template/README.md`](https://code.lotusguild.org/LotusGuild/web_template/src/branch/main/README.md) — full component reference, CSS variables, JS API
- [`web_template/base.css`](https://code.lotusguild.org/LotusGuild/web_template/src/branch/main/base.css) — unified CSS (`.lt-*` classes)
- [`web_template/base.js`](https://code.lotusguild.org/LotusGuild/web_template/src/branch/main/base.js) — `window.lt` utilities (toast, modal, WebSocket helpers, fetch)
- [`web_template/aesthetic_diff.md`](https://code.lotusguild.org/LotusGuild/web_template/src/branch/main/aesthetic_diff.md) — cross-app divergence analysis and convergence guide
- [`web_template/node/middleware.js`](https://code.lotusguild.org/LotusGuild/web_template/src/branch/main/node/middleware.js) — Express auth, CSRF, CSP nonce middleware
**Pending convergence items (see aesthetic_diff.md):**
- Extract inline `<style>` from `public/index.html` into `public/style.css` and extend `base.css`
- Use `lt.autoRefresh.start(refreshData, 30000)` instead of raw `setInterval`
A distributed workflow orchestration platform for managing and executing complex multi-step operations across server clusters through an intuitive web interface.
## Overview
PULSE is a centralized workflow execution system designed to orchestrate operations across distributed infrastructure. It provides a powerful web-based interface with a vintage CRT terminal aesthetic for defining, managing, and executing workflows that can span multiple servers, require human interaction, and perform complex automation tasks at scale.
PULSE is a centralized workflow execution system designed to orchestrate operations across distributed infrastructure. It provides a powerful web-based interface for defining, managing, and executing workflows that can span multiple servers, require human interaction, and perform complex automation tasks at scale.
### Key Features
- **🎨 Retro Terminal Interface**: Phosphor green CRT-style interface with scanlines, glow effects, and ASCII art
- **⚡ Quick Command Execution**: Instantly execute commands on any worker with built-in templates and command history
- **📊 Real-Time Worker Monitoring**: Live system metrics including CPU, memory, load average, and active tasks
- **🔄 Interactive Workflow Management**: Define and execute multi-step workflows with conditional logic and user prompts
- **🌐 Distributed Execution**: Run commands across multiple worker nodes simultaneously via WebSocket
- **📈 Execution Tracking**: Comprehensive logging with formatted output, re-run capabilities, and JSON export
- **🔐 SSO Authentication**: Seamless integration with Authelia for enterprise authentication
- **🧹 Auto-Cleanup**: Automatic removal of old executions with configurable retention policies
- **🔔 Terminal Notifications**: Audio beeps and visual toasts for command completion events
- **Interactive Workflow Management**: Define and execute multi-step workflows with conditional logic, user prompts, and decision points
- **Distributed Execution**: Run commands and scripts across multiple worker nodes simultaneously
- **High Availability Architecture**: Deploy redundant worker nodes in LXC containers with Ceph storage for fault tolerance
- **Web-Based Control Center**: Intuitive interface for workflow selection, monitoring, and interactive input
- **Flexible Worker Pool**: Scale horizontally by adding worker nodes as needed
- **Real-Time Monitoring**: Track workflow progress, view logs, and receive notifications
## Architecture
PULSE consists of two core components:
### PULSE Server
**Location:** `10.10.10.65` (LXC Container ID: 122)
**Directory:** `/opt/pulse-server`
The central orchestration hub that:
- Hosts the retro terminal web interface
- Hosts the web interface for workflow management
- Manages workflow definitions and execution state
- Coordinates task distribution to worker nodes via WebSocket
- Handles user interactions through Authelia SSO
- Coordinates task distribution to worker nodes
- Handles user interactions and input collection
- Provides real-time status updates and logging
- Stores all data in MariaDB database
**Technology Stack:**
- Node.js 20.x
- Express.js (web framework)
- WebSocket (ws package) for real-time bidirectional communication
- MySQL2 (MariaDB driver)
- Authelia SSO integration
### PULSE Worker
**Example:** `10.10.10.151` (LXC Container ID: 153, hostname: pulse-worker-01)
**Directory:** `/opt/pulse-worker`
Lightweight execution agents that:
- Connect to PULSE server via WebSocket with heartbeat monitoring
- Execute shell commands and report results in real-time
- Provide system metrics (CPU, memory, load, uptime)
- Support concurrent task execution with configurable limits
- Automatically reconnect on connection loss
**Technology Stack:**
- Node.js 20.x
- WebSocket client
- Child process execution
- System metrics collection
- Connect to the PULSE server and await task assignments
- Execute commands, scripts, and code on target infrastructure
- Report execution status and results back to the server
- Support multiple concurrent workflow executions
- Automatically reconnect and resume on failure
```
  ┌───────────────────────────────┐
  │  PULSE Server (10.10.10.65)   │
  │  Terminal Web Interface + API │
  │  ┌───────────┐  ┌──────────┐  │
  │  │  MariaDB  │  │ Authelia │  │
  │  │  Database │  │   SSO    │  │
  │  └───────────┘  └──────────┘  │
  └────────────────┬──────────────┘
                   │ WebSocket
     ┌─────────────┼──────────┐
     │             │          │
┌────▼───────┐ ┌───▼────┐ ┌───▼────┐
│  Worker 1  │ │Worker 2│ │Worker N│
│10.10.10.151│ │  ...   │ │  ...   │
└────────────┘ └────────┘ └────────┘
  LXC Containers in Proxmox with Ceph

         ┌─────────────────────┐
         │    PULSE Server     │
         │   (Web Interface)   │
         └──────────┬──────────┘
     ┌──────────┬───┴──────┬──────────┐
     │          │          │          │
┌────▼───┐ ┌────▼───┐ ┌────▼───┐ ┌────▼───┐
│ Worker │ │ Worker │ │ Worker │ │ Worker │
│ Node 1 │ │ Node 2 │ │ Node 3 │ │ Node N │
└────────┘ └────────┘ └────────┘ └────────┘
     LXC Containers in Proxmox with Ceph
```
## Installation
## Deployment
### Prerequisites
- **Node.js 20.x** or higher
- **MariaDB 10.x** or higher
- **Authelia** configured for SSO (optional but recommended)
- **Network Connectivity** between server and workers
- **Proxmox VE Cluster**: Hypervisor environment for container deployment
- **Ceph Storage**: Distributed storage backend for high availability
- **LXC Support**: Container runtime for worker node deployment
- **Network Connectivity**: Communication between server and workers
### Installation

#### PULSE Server Setup

```bash
# Clone repository
cd /opt
git clone <your-repo-url> pulse-server   # or: git clone https://github.com/yourusername/pulse.git
cd pulse-server

# Install dependencies
npm install   # or pip install -r requirements.txt

# Create .env file with configuration
cat > .env << EOF
# Server Configuration
PORT=8080
SECRET_KEY=your-secret-key-here

# MariaDB Configuration
DB_HOST=10.10.10.50
DB_PORT=3306
DB_NAME=pulse
DB_USER=pulse_user
DB_PASSWORD=your-db-password

# Worker API Key (for worker authentication)
WORKER_API_KEY=your-worker-api-key

# Auto-cleanup configuration (optional)
EXECUTION_RETENTION_DAYS=30
EOF

# Older config-file layout (if used):
#   cp config.example.yml config.yml && nano config.yml

# Create systemd service
cat > /etc/systemd/system/pulse.service << EOF
[Unit]
Description=PULSE Workflow Orchestration Server
After=network.target

[Service]
Type=simple
User=root
WorkingDirectory=/opt/pulse-server
ExecStart=/usr/bin/node server.js
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

# Start service
systemctl daemon-reload
systemctl enable pulse.service
systemctl start pulse.service   # or: npm start / python server.py
```
#### PULSE Worker Setup

```bash
# On each worker node (LXC container)
cd /opt
git clone <your-repo-url> pulse-worker
cd pulse-worker

# Install dependencies
npm install   # or pip install -r requirements.txt

# Create .env file
cat > .env << EOF
# Worker Configuration
WORKER_NAME=pulse-worker-01
PULSE_SERVER=http://10.10.10.65:8080
PULSE_WS=ws://10.10.10.65:8080
WORKER_API_KEY=your-worker-api-key

# Performance Settings
HEARTBEAT_INTERVAL=30
MAX_CONCURRENT_TASKS=5
EOF

# Older config-file layout (if used):
#   cp worker-config.example.yml worker-config.yml && nano worker-config.yml

# Create systemd service
cat > /etc/systemd/system/pulse-worker.service << EOF
[Unit]
Description=PULSE Worker Node
After=network.target

[Service]
Type=simple
User=root
WorkingDirectory=/opt/pulse-worker
ExecStart=/usr/bin/node worker.js
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
EOF

# Start service
systemctl daemon-reload
systemctl enable pulse-worker.service
systemctl start pulse-worker.service   # or: npm start / python worker.py
```

### High Availability Setup

Deploy multiple worker nodes across Proxmox hosts:

```bash
# Create LXC template
pct create 1000 local:vztmpl/ubuntu-22.04-standard_amd64.tar.zst \
  --rootfs ceph-pool:8 \
  --memory 2048 \
  --cores 2 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp

# Clone for additional workers
pct clone 1000 1001 --full --storage ceph-pool
pct clone 1000 1002 --full --storage ceph-pool
pct clone 1000 1003 --full --storage ceph-pool

# Start all workers
for i in {1000..1003}; do pct start $i; done
```
## Usage
### Quick Command Execution

1. Access PULSE at `http://your-server:8080`
2. Navigate to the **⚡ Quick Command** tab
3. Select a worker from the dropdown
4. Use **Templates** for pre-built commands or **History** for recent commands
5. Enter your command and click **Execute**
6. View results in the **Executions** tab

### Creating a Workflow

1. Access the PULSE web interface at `http://your-server:8080`
2. Navigate to **Workflows** → **Create New**
3. Define workflow steps using the visual editor or YAML syntax
4. Specify execution targets (specific nodes, groups, or all workers)
5. Add interactive prompts where user input is required
6. Save and activate the workflow
**Built-in Command Templates:**
- System Info: `uname -a`
- Disk Usage: `df -h`
- Memory Usage: `free -h`
- CPU Info: `lscpu`
- Running Processes: `ps aux --sort=-%mem | head -20`
- Network Interfaces: `ip addr show`
- Docker Containers: `docker ps -a`
- System Logs: `tail -n 50 /var/log/syslog`
### Example Workflow

```yaml
name: "System Update and Reboot"
description: "Update all servers in the cluster with user confirmation"
steps:
  - name: "Check Current Versions"
    type: "execute"
    targets: ["all"]
    command: "apt list --upgradable"

  - name: "User Approval"
    type: "prompt"
    message: "Review available updates. Proceed with installation?"
    options: ["Yes", "No", "Cancel"]

  - name: "Install Updates"
    type: "execute"
    targets: ["all"]
    command: "apt-get update && apt-get upgrade -y"
    condition: "prompt_response == 'Yes'"

  - name: "Reboot Confirmation"
    type: "prompt"
    message: "Updates complete. Reboot all servers?"
    options: ["Yes", "No"]

  - name: "Rolling Reboot"
    type: "execute"
    targets: ["all"]
    command: "reboot"
    strategy: "rolling"
    condition: "prompt_response == 'Yes'"
```

### Worker Monitoring

The **Workers** tab displays real-time metrics for each worker:

- System information (OS, architecture, CPU cores)
- Memory usage (used/total with percentage)
- Load averages (1m, 5m, 15m)
- System uptime
- Active tasks vs. maximum concurrent capacity

### Execution Management

- **View Details**: Click any execution to see formatted logs with timestamps, status, and output
- **Re-run Command**: Click the "Re-run" button in execution details to repeat a command
- **Download Logs**: Export execution data as JSON for auditing
- **Clear Completed**: Bulk delete finished executions
- **Auto-Cleanup**: Executions older than 30 days are automatically removed

### Workflow Creation (Future Feature)

1. Navigate to **Workflows** → **Create New**
2. Define workflow steps using JSON syntax
3. Specify target workers
4. Add interactive prompts where needed
5. Save and execute

### Running a Workflow
## Features in Detail
### Terminal Aesthetic
- Phosphor green (#00ff41) on black (#0a0a0a) color scheme
- CRT scanline animation effect
- Text glow and shadow effects
- ASCII box-drawing characters for borders
- Boot sequence animation on first load
- Hover effects with smooth transitions
### Real-Time Communication
- WebSocket-based bidirectional communication
- Instant command result notifications
- Live worker status updates
- Terminal beep sounds for events
- Toast notifications with visual feedback
### Execution Tracking
- Formatted log display (not raw JSON)
- Color-coded success/failure indicators
- Timestamp and duration for each step
- Scrollable output with syntax highlighting
- Persistent history with pagination
- Load More button for large execution lists
### Security
- Authelia SSO integration for user authentication
- API key authentication for workers
- User session management
- Admin-only operations (worker deletion, workflow management)
- Audit logging for all executions
### Performance
- Automatic cleanup of old executions (configurable retention)
- Pagination for large execution lists (50 at a time)
- Efficient WebSocket connection pooling
- Worker heartbeat monitoring
- Database connection pooling
1. Select a workflow from the dashboard
2. Click **Execute**
3. Monitor progress in real-time
4. Respond to interactive prompts as they appear
5. View detailed logs for each execution step
## Configuration

### Environment Variables

**Server (.env):**
```bash
PORT=8080                    # Server port
SECRET_KEY=<random-string>   # Session secret
DB_HOST=10.10.10.50          # MariaDB host
DB_PORT=3306                 # MariaDB port
DB_NAME=pulse                # Database name
DB_USER=pulse_user           # Database user
DB_PASSWORD=<password>       # Database password
WORKER_API_KEY=<api-key>     # Worker authentication key
EXECUTION_RETENTION_DAYS=30  # Auto-cleanup retention (default: 30)
```

**Worker (.env):**
```bash
WORKER_NAME=pulse-worker-01           # Unique worker name
PULSE_SERVER=http://10.10.10.65:8080  # Server HTTP URL
PULSE_WS=ws://10.10.10.65:8080        # Server WebSocket URL
WORKER_API_KEY=<api-key>              # Must match server key
HEARTBEAT_INTERVAL=30                 # Heartbeat seconds (default: 30)
MAX_CONCURRENT_TASKS=5                # Max parallel tasks (default: 5)
```

### Server Configuration (`config.yml`)

```yaml
server:
  host: "0.0.0.0"
  port: 8080
  secret_key: "your-secret-key"

database:
  type: "postgresql"
  host: "localhost"
  port: 5432
  name: "pulse"

workers:
  heartbeat_interval: 30
  timeout: 300
  max_concurrent_tasks: 10

security:
  enable_authentication: true
  require_approval: true
```

### Worker Configuration (`worker-config.yml`)

```yaml
worker:
  name: "worker-01"
  server_url: "http://pulse-server:8080"
  api_key: "worker-api-key"

resources:
  max_cpu_percent: 80
  max_memory_mb: 1024

executor:
  shell: "/bin/bash"
  working_directory: "/tmp/pulse"
  timeout: 3600
```
## Database Schema

PULSE uses MariaDB with the following tables:

| Table | Purpose |
|-------|---------|
| `users` | User accounts synced from Authelia SSO |
| `workers` | Worker node registry with connection metadata |
| `workflows` | Workflow definitions stored as JSON |
| `executions` | Execution history with logs, status, and timestamps |

### `executions` Table Key Columns

| Column | Description |
|--------|-------------|
| `id` | Auto-increment primary key |
| `worker_id` | Foreign key to workers |
| `command` | The command that was executed |
| `status` | `running`, `completed`, `failed` |
| `output` | Command output / log (JSON or text) |
| `created_at` | Execution start timestamp |
| `completed_at` | Execution end timestamp |

### `workers` Table Key Columns

| Column | Description |
|--------|-------------|
| `id` | Auto-increment primary key |
| `name` | Worker name (from `WORKER_NAME` env) |
| `last_seen` | Last heartbeat timestamp |
| `status` | `online`, `offline` |
| `metadata` | JSON blob of system info |

## Features in Detail

### Interactive Workflows

- Pause execution to collect user input via web forms
- Display intermediate results for review
- Conditional branching based on user decisions
- Multi-choice prompts with validation

### Mass Execution

- Run commands across all workers simultaneously
- Target specific node groups or individual servers
- Rolling execution for zero-downtime updates
- Parallel and sequential execution strategies

### Monitoring & Logging

- Real-time workflow execution dashboard
- Detailed per-step logging and output capture
- Historical execution records and analytics
- Alert notifications for failures or completion

### Security

- Role-based access control (RBAC)
- API key authentication for workers
- Workflow approval requirements
- Audit logging for all actions
## Troubleshooting
### Worker Not Connecting
```bash
# Check worker service status
systemctl status pulse-worker
# Check worker logs
journalctl -u pulse-worker -n 50 -f
# Verify API key matches server
grep WORKER_API_KEY /opt/pulse-worker/.env
```
### Commands Stuck in "Running"
This was fixed in recent updates; restart the server:
```bash
systemctl restart pulse.service
```
### Clear All Executions
Use the database directly if needed:
```bash
mysql -h 10.10.10.50 -u pulse_user -p pulse
> DELETE FROM executions WHERE status IN ('completed', 'failed');
```
## Development
### Recent Updates
**Phase 1-6 Improvements:**
- Formatted log display with color-coding
- Worker system metrics monitoring
- Command templates and history
- Re-run and download execution features
- Auto-cleanup and pagination
- Terminal aesthetic refinements
- Audio notifications and visual toasts
See git history for detailed changelog.
### Future Enhancements
- Full workflow system implementation
- Multi-worker command execution
- Scheduled/cron job support
- Execution search and filtering
- Dark/light theme toggle
- Mobile-responsive design
- REST API documentation
- Webhook integrations
## License
MIT License - See LICENSE file for details
---
**PULSE** - Orchestrating your infrastructure, one heartbeat at a time.
Built with retro terminal aesthetics 🖥️ | Powered by WebSockets 🔌 | Secured by Authelia 🔐
**PULSE** - Orchestrating your infrastructure, one heartbeat at a time.

222
package-lock.json generated

@@ -9,10 +9,13 @@
"version": "1.0.0",
"license": "ISC",
"dependencies": {
"cron-parser": "^5.5.0",
"bcryptjs": "^3.0.3",
"body-parser": "^2.2.1",
"cors": "^2.8.5",
"dotenv": "^17.2.3",
"express": "^5.1.0",
"express-rate-limit": "^8.3.1",
"js-yaml": "^4.1.1",
"jsonwebtoken": "^9.0.2",
"mysql2": "^3.15.3",
"ws": "^8.18.3"
}
@@ -30,6 +33,12 @@
"node": ">= 0.6"
}
},
"node_modules/argparse": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/argparse/-/argparse-2.0.1.tgz",
"integrity": "sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q==",
"license": "Python-2.0"
},
"node_modules/aws-ssl-profiles": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/aws-ssl-profiles/-/aws-ssl-profiles-1.1.2.tgz",
@@ -39,6 +48,15 @@
"node": ">= 6.0.0"
}
},
"node_modules/bcryptjs": {
"version": "3.0.3",
"resolved": "https://registry.npmjs.org/bcryptjs/-/bcryptjs-3.0.3.tgz",
"integrity": "sha512-GlF5wPWnSa/X5LKM1o0wz0suXIINz1iHRLvTS+sLyi7XPbe5ycmYI3DlZqVGZZtDgl4DmasFg7gOB3JYbphV5g==",
"license": "BSD-3-Clause",
"bin": {
"bcrypt": "bin/bcrypt"
}
},
"node_modules/body-parser": {
"version": "2.2.1",
"resolved": "https://registry.npmjs.org/body-parser/-/body-parser-2.2.1.tgz",
@@ -63,6 +81,12 @@
"url": "https://opencollective.com/express"
}
},
"node_modules/buffer-equal-constant-time": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/buffer-equal-constant-time/-/buffer-equal-constant-time-1.0.1.tgz",
"integrity": "sha512-zRpUiDwd/xk6ADqPMATG8vc9VPrkck7T07OIx0gnjmJAnHnTVXNQG3vfvWNuiZIkwu9KrKdA1iJKfsfTVxE6NA==",
"license": "BSD-3-Clause"
},
"node_modules/bytes": {
"version": "3.1.2",
"resolved": "https://registry.npmjs.org/bytes/-/bytes-3.1.2.tgz",
@@ -141,15 +165,17 @@
"node": ">=6.6.0"
}
},
"node_modules/cron-parser": {
"version": "5.5.0",
"resolved": "https://registry.npmjs.org/cron-parser/-/cron-parser-5.5.0.tgz",
"integrity": "sha512-oML4lKUXxizYswqmxuOCpgFS8BNUJpIu6k/2HVHyaL8Ynnf3wdf9tkns0yRdJLSIjkJ+b0DXHMZEHGpMwjnPww==",
"node_modules/cors": {
"version": "2.8.5",
"resolved": "https://registry.npmjs.org/cors/-/cors-2.8.5.tgz",
"integrity": "sha512-KIHbLJqu73RGr/hnbrO9uBeixNGuvSQjul/jdFvS/KFSIH1hWVd1ng7zOHx+YrEfInLG7q4n6GHQ9cDtxv/P6g==",
"license": "MIT",
"dependencies": {
"luxon": "^3.7.1"
"object-assign": "^4",
"vary": "^1"
},
"engines": {
"node": ">=18"
"node": ">= 0.10"
}
},
"node_modules/debug": {
@@ -213,6 +239,15 @@
"node": ">= 0.4"
}
},
"node_modules/ecdsa-sig-formatter": {
"version": "1.0.11",
"resolved": "https://registry.npmjs.org/ecdsa-sig-formatter/-/ecdsa-sig-formatter-1.0.11.tgz",
"integrity": "sha512-nagl3RYrbNv6kQkeJIpt6NJZy8twLB/2vtz6yN9Z4vRKHN4/QZJIEbqohALSgwKdnksuY3k5Addp5lg8sVoVcQ==",
"license": "Apache-2.0",
"dependencies": {
"safe-buffer": "^5.0.1"
}
},
"node_modules/ee-first": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/ee-first/-/ee-first-1.1.1.tgz",
@@ -315,23 +350,6 @@
"url": "https://opencollective.com/express"
}
},
"node_modules/express-rate-limit": {
"version": "8.3.1",
"resolved": "https://registry.npmjs.org/express-rate-limit/-/express-rate-limit-8.3.1.tgz",
"integrity": "sha512-D1dKN+cmyPWuvB+G2SREQDzPY1agpBIcTa9sJxOPMCNeH3gwzhqJRDWCXW3gg0y//+LQ/8j52JbMROWyrKdMdw==",
"dependencies": {
"ip-address": "10.1.0"
},
"engines": {
"node": ">= 16"
},
"funding": {
"url": "https://github.com/sponsors/express-rate-limit"
},
"peerDependencies": {
"express": ">= 4.11"
}
},
"node_modules/finalhandler": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/finalhandler/-/finalhandler-2.1.0.tgz",
@@ -500,14 +518,6 @@
"integrity": "sha512-k/vGaX4/Yla3WzyMCvTQOXYeIHvqOKtnqBduzTHpzpQZzAskKMhZ2K+EnBiSM9zGSoIFeMpXKxa4dYeZIQqewQ==",
"license": "ISC"
},
"node_modules/ip-address": {
"version": "10.1.0",
"resolved": "https://registry.npmjs.org/ip-address/-/ip-address-10.1.0.tgz",
"integrity": "sha512-XXADHxXmvT9+CRxhXg56LJovE+bmWnEWB78LB83VZTprKTmaC5QfruXocxzTZ2Kl0DNwKuBdlIhjL8LeY8Sf8Q==",
"engines": {
"node": ">= 12"
}
},
"node_modules/ipaddr.js": {
"version": "1.9.1",
"resolved": "https://registry.npmjs.org/ipaddr.js/-/ipaddr.js-1.9.1.tgz",
@@ -529,6 +539,103 @@
"integrity": "sha512-Ks/IoX00TtClbGQr4TWXemAnktAQvYB7HzcCxDGqEZU6oCmb2INHuOoKxbtR+HFkmYWBKv/dOZtGRiAjDhj92g==",
"license": "MIT"
},
"node_modules/js-yaml": {
"version": "4.1.1",
"resolved": "https://registry.npmjs.org/js-yaml/-/js-yaml-4.1.1.tgz",
"integrity": "sha512-qQKT4zQxXl8lLwBtHMWwaTcGfFOZviOJet3Oy/xmGk2gZH677CJM9EvtfdSkgWcATZhj/55JZ0rmy3myCT5lsA==",
"license": "MIT",
"dependencies": {
"argparse": "^2.0.1"
},
"bin": {
"js-yaml": "bin/js-yaml.js"
}
},
"node_modules/jsonwebtoken": {
"version": "9.0.2",
"resolved": "https://registry.npmjs.org/jsonwebtoken/-/jsonwebtoken-9.0.2.tgz",
"integrity": "sha512-PRp66vJ865SSqOlgqS8hujT5U4AOgMfhrwYIuIhfKaoSCZcirrmASQr8CX7cUg+RMih+hgznrjp99o+W4pJLHQ==",
"license": "MIT",
"dependencies": {
"jws": "^3.2.2",
"lodash.includes": "^4.3.0",
"lodash.isboolean": "^3.0.3",
"lodash.isinteger": "^4.0.4",
"lodash.isnumber": "^3.0.3",
"lodash.isplainobject": "^4.0.6",
"lodash.isstring": "^4.0.1",
"lodash.once": "^4.0.0",
"ms": "^2.1.1",
"semver": "^7.5.4"
},
"engines": {
"node": ">=12",
"npm": ">=6"
}
},
"node_modules/jwa": {
"version": "1.4.2",
"resolved": "https://registry.npmjs.org/jwa/-/jwa-1.4.2.tgz",
"integrity": "sha512-eeH5JO+21J78qMvTIDdBXidBd6nG2kZjg5Ohz/1fpa28Z4CcsWUzJ1ZZyFq/3z3N17aZy+ZuBoHljASbL1WfOw==",
"license": "MIT",
"dependencies": {
"buffer-equal-constant-time": "^1.0.1",
"ecdsa-sig-formatter": "1.0.11",
"safe-buffer": "^5.0.1"
}
},
"node_modules/jws": {
"version": "3.2.2",
"resolved": "https://registry.npmjs.org/jws/-/jws-3.2.2.tgz",
"integrity": "sha512-YHlZCB6lMTllWDtSPHz/ZXTsi8S00usEV6v1tjq8tOUZzw7DpSDWVXjXDre6ed1w/pd495ODpHZYSdkRTsa0HA==",
"license": "MIT",
"dependencies": {
"jwa": "^1.4.1",
"safe-buffer": "^5.0.1"
}
},
"node_modules/lodash.includes": {
"version": "4.3.0",
"resolved": "https://registry.npmjs.org/lodash.includes/-/lodash.includes-4.3.0.tgz",
"integrity": "sha512-W3Bx6mdkRTGtlJISOvVD/lbqjTlPPUDTMnlXZFnVwi9NKJ6tiAk6LVdlhZMm17VZisqhKcgzpO5Wz91PCt5b0w==",
"license": "MIT"
},
"node_modules/lodash.isboolean": {
"version": "3.0.3",
"resolved": "https://registry.npmjs.org/lodash.isboolean/-/lodash.isboolean-3.0.3.tgz",
"integrity": "sha512-Bz5mupy2SVbPHURB98VAcw+aHh4vRV5IPNhILUCsOzRmsTmSQ17jIuqopAentWoehktxGd9e/hbIXq980/1QJg==",
"license": "MIT"
},
"node_modules/lodash.isinteger": {
"version": "4.0.4",
"resolved": "https://registry.npmjs.org/lodash.isinteger/-/lodash.isinteger-4.0.4.tgz",
"integrity": "sha512-DBwtEWN2caHQ9/imiNeEA5ys1JoRtRfY3d7V9wkqtbycnAmTvRRmbHKDV4a0EYc678/dia0jrte4tjYwVBaZUA==",
"license": "MIT"
},
"node_modules/lodash.isnumber": {
"version": "3.0.3",
"resolved": "https://registry.npmjs.org/lodash.isnumber/-/lodash.isnumber-3.0.3.tgz",
"integrity": "sha512-QYqzpfwO3/CWf3XP+Z+tkQsfaLL/EnUlXWVkIk5FUPc4sBdTehEqZONuyRt2P67PXAk+NXmTBcc97zw9t1FQrw==",
"license": "MIT"
},
"node_modules/lodash.isplainobject": {
"version": "4.0.6",
"resolved": "https://registry.npmjs.org/lodash.isplainobject/-/lodash.isplainobject-4.0.6.tgz",
"integrity": "sha512-oSXzaWypCMHkPC3NvBEaPHf0KsA5mvPrOPgQWDsbg8n7orZ290M0BmC/jgRZ4vcJ6DTAhjrsSYgdsW/F+MFOBA==",
"license": "MIT"
},
"node_modules/lodash.isstring": {
"version": "4.0.1",
"resolved": "https://registry.npmjs.org/lodash.isstring/-/lodash.isstring-4.0.1.tgz",
"integrity": "sha512-0wJxfxH1wgO3GrbuP+dTTk7op+6L41QCXbGINEmD+ny/G/eCqGzxyCsh7159S+mgDDcoarnBw6PC1PS5+wUGgw==",
"license": "MIT"
},
"node_modules/lodash.once": {
"version": "4.1.1",
"resolved": "https://registry.npmjs.org/lodash.once/-/lodash.once-4.1.1.tgz",
"integrity": "sha512-Sb487aTOCr9drQVL8pIxOzVhafOjZN9UU54hiN8PU3uAiSV7lx1yYNpbNmex2PK6dSJoNTSJUUswT651yww3Mg==",
"license": "MIT"
},
"node_modules/long": {
"version": "5.3.2",
"resolved": "https://registry.npmjs.org/long/-/long-5.3.2.tgz",
@@ -559,14 +666,6 @@
"url": "https://github.com/sponsors/wellwelwel"
}
},
"node_modules/luxon": {
"version": "3.7.2",
"resolved": "https://registry.npmjs.org/luxon/-/luxon-3.7.2.tgz",
"integrity": "sha512-vtEhXh/gNjI9Yg1u4jX/0YVPMvxzHuGgCm6tC5kZyb08yjGWGnqAjGJvcXbqQR2P3MyMEFnRbpcdFS6PBcLqew==",
"engines": {
"node": ">=12"
}
},
"node_modules/math-intrinsics": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/math-intrinsics/-/math-intrinsics-1.1.0.tgz",
@@ -669,6 +768,15 @@
"node": ">= 0.6"
}
},
"node_modules/object-assign": {
"version": "4.1.1",
"resolved": "https://registry.npmjs.org/object-assign/-/object-assign-4.1.1.tgz",
"integrity": "sha512-rJgTQnkUnH1sFw8yT6VSU3zD3sWmu6sZhIseY8VX+GRu3P6F7Fu+JNDoXfklElbLJSnc3FUQHVe4cU5hj+BcUg==",
"license": "MIT",
"engines": {
"node": ">=0.10.0"
}
},
"node_modules/object-inspect": {
"version": "1.13.4",
"resolved": "https://registry.npmjs.org/object-inspect/-/object-inspect-1.13.4.tgz",
@@ -789,12 +897,44 @@
"node": ">= 18"
}
},
"node_modules/safe-buffer": {
"version": "5.2.1",
"resolved": "https://registry.npmjs.org/safe-buffer/-/safe-buffer-5.2.1.tgz",
"integrity": "sha512-rp3So07KcdmmKbGvgaNxQSJr7bGVSVk5S9Eq1F+ppbRo70+YeaDxkw5Dd8NPN+GD6bjnYm2VuPuCXmpuYvmCXQ==",
"funding": [
{
"type": "github",
"url": "https://github.com/sponsors/feross"
},
{
"type": "patreon",
"url": "https://www.patreon.com/feross"
},
{
"type": "consulting",
"url": "https://feross.org/support"
}
],
"license": "MIT"
},
"node_modules/safer-buffer": {
"version": "2.1.2",
"resolved": "https://registry.npmjs.org/safer-buffer/-/safer-buffer-2.1.2.tgz",
"integrity": "sha512-YZo3K82SD7Riyi0E1EQPojLz7kpepnSQI9IyPbHHg1XXXevb5dJI7tpyN2ADxGcQbHG7vcyRHk0cbwqcQriUtg==",
"license": "MIT"
},
"node_modules/semver": {
"version": "7.7.3",
"resolved": "https://registry.npmjs.org/semver/-/semver-7.7.3.tgz",
"integrity": "sha512-SdsKMrI9TdgjdweUSR9MweHA4EJ8YxHn8DFaDisvhVlUOe4BF1tLD7GAj0lIqWVl+dPb/rExr0Btby5loQm20Q==",
"license": "ISC",
"bin": {
"semver": "bin/semver.js"
},
"engines": {
"node": ">=10"
}
},
"node_modules/send": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/send/-/send-1.2.0.tgz",


@@ -10,10 +10,13 @@
"license": "ISC",
"description": "",
"dependencies": {
"cron-parser": "^5.5.0",
"bcryptjs": "^3.0.3",
"body-parser": "^2.2.1",
"cors": "^2.8.5",
"dotenv": "^17.2.3",
"express": "^5.1.0",
"express-rate-limit": "^8.3.1",
"js-yaml": "^4.1.1",
"jsonwebtoken": "^9.0.2",
"mysql2": "^3.15.3",
"ws": "^8.18.3"
}


@@ -1 +0,0 @@
/root/code/web_template/base.js

File diff suppressed because it is too large

server.js (1782 lines)

File diff suppressed because it is too large


@@ -1,257 +0,0 @@
const axios = require('axios');
const WebSocket = require('ws');
const { exec } = require('child_process');
const { promisify } = require('util');
const os = require('os');
const crypto = require('crypto');
require('dotenv').config();
const execAsync = promisify(exec);
class PulseWorker {
constructor() {
this.workerId = crypto.randomUUID();
this.workerName = process.env.WORKER_NAME || os.hostname();
this.serverUrl = process.env.PULSE_SERVER || 'http://localhost:8080';
this.wsUrl = process.env.PULSE_WS || 'ws://localhost:8080';
this.apiKey = process.env.WORKER_API_KEY;
this.heartbeatInterval = parseInt(process.env.HEARTBEAT_INTERVAL || '30') * 1000;
this.maxConcurrentTasks = parseInt(process.env.MAX_CONCURRENT_TASKS || '5');
this.activeTasks = 0;
this.ws = null;
this.heartbeatTimer = null;
}
async start() {
console.log(`[PULSE Worker] Starting worker: ${this.workerName}`);
console.log(`[PULSE Worker] Worker ID: ${this.workerId}`);
console.log(`[PULSE Worker] Server: ${this.serverUrl}`);
// Send initial heartbeat
await this.sendHeartbeat();
// Start heartbeat timer
this.startHeartbeat();
// Connect to WebSocket for real-time commands
this.connectWebSocket();
console.log(`[PULSE Worker] Worker started successfully`);
}
startHeartbeat() {
this.heartbeatTimer = setInterval(async () => {
try {
await this.sendHeartbeat();
} catch (error) {
console.error('[PULSE Worker] Heartbeat failed:', error.message);
}
}, this.heartbeatInterval);
}
async sendHeartbeat() {
const metadata = {
hostname: os.hostname(),
platform: os.platform(),
arch: os.arch(),
cpus: os.cpus().length,
totalMem: os.totalmem(),
freeMem: os.freemem(),
uptime: os.uptime(),
loadavg: os.loadavg(),
activeTasks: this.activeTasks,
maxConcurrentTasks: this.maxConcurrentTasks
};
try {
const response = await axios.post(
`${this.serverUrl}/api/workers/heartbeat`,
{
worker_id: this.workerId,
name: this.workerName,
metadata: metadata
},
{
headers: {
'X-API-Key': this.apiKey,
'Content-Type': 'application/json'
}
}
);
console.log(`[PULSE Worker] Heartbeat sent - Status: online`);
return response.data;
} catch (error) {
console.error('[PULSE Worker] Heartbeat error:', error.message);
throw error;
}
}
connectWebSocket() {
console.log(`[PULSE Worker] Connecting to WebSocket...`);
this.ws = new WebSocket(this.wsUrl);
this.ws.on('open', () => {
console.log('[PULSE Worker] WebSocket connected');
// Identify this worker
this.ws.send(JSON.stringify({
type: 'worker_connect',
worker_id: this.workerId,
worker_name: this.workerName
}));
});
this.ws.on('message', async (data) => {
try {
const message = JSON.parse(data.toString());
await this.handleMessage(message);
} catch (error) {
console.error('[PULSE Worker] Message handling error:', error);
}
});
this.ws.on('close', () => {
console.log('[PULSE Worker] WebSocket disconnected, reconnecting...');
setTimeout(() => this.connectWebSocket(), 5000);
});
this.ws.on('error', (error) => {
console.error('[PULSE Worker] WebSocket error:', error.message);
});
}
async handleMessage(message) {
console.log(`[PULSE Worker] Received message:`, message.type);
switch (message.type) {
case 'execute_command':
await this.executeCommand(message);
break;
case 'execute_workflow':
await this.executeWorkflow(message);
break;
case 'ping':
this.sendPong();
break;
default:
console.log(`[PULSE Worker] Unknown message type: ${message.type}`);
}
}
async executeCommand(message) {
const { command, execution_id, command_id, timeout = 300000 } = message;
if (this.activeTasks >= this.maxConcurrentTasks) {
console.log(`[PULSE Worker] Max concurrent tasks reached, rejecting command`);
return;
}
this.activeTasks++;
console.log(`[PULSE Worker] Executing command (active tasks: ${this.activeTasks})`);
try {
const startTime = Date.now();
const { stdout, stderr } = await execAsync(command, {
timeout: timeout,
maxBuffer: 10 * 1024 * 1024 // 10MB buffer
});
const duration = Date.now() - startTime;
const result = {
type: 'command_result',
execution_id,
worker_id: this.workerId,
command_id,
success: true,
stdout: stdout,
stderr: stderr,
duration: duration,
timestamp: new Date().toISOString()
};
this.sendResult(result);
console.log(`[PULSE Worker] Command completed in ${duration}ms`);
} catch (error) {
const result = {
type: 'command_result',
execution_id,
worker_id: this.workerId,
command_id,
success: false,
error: error.message,
stdout: error.stdout || '',
stderr: error.stderr || '',
timestamp: new Date().toISOString()
};
this.sendResult(result);
console.error(`[PULSE Worker] Command failed:`, error.message);
} finally {
this.activeTasks--;
}
}
async executeWorkflow(message) {
const { workflow, execution_id } = message;
console.log(`[PULSE Worker] Executing workflow: ${workflow.name}`);
// Workflow execution will be implemented in phase 2
// For now, just acknowledge receipt
this.sendResult({
type: 'workflow_result',
execution_id,
worker_id: this.workerId,
success: true,
message: 'Workflow execution not yet implemented',
timestamp: new Date().toISOString()
});
}
sendResult(result) {
if (this.ws && this.ws.readyState === WebSocket.OPEN) {
this.ws.send(JSON.stringify(result));
}
}
sendPong() {
if (this.ws && this.ws.readyState === WebSocket.OPEN) {
this.ws.send(JSON.stringify({ type: 'pong', worker_id: this.workerId }));
}
}
async stop() {
console.log('[PULSE Worker] Shutting down...');
if (this.heartbeatTimer) {
clearInterval(this.heartbeatTimer);
}
if (this.ws) {
this.ws.close();
}
console.log('[PULSE Worker] Shutdown complete');
}
}
// Start worker
const worker = new PulseWorker();
// Handle graceful shutdown
process.on('SIGTERM', async () => {
await worker.stop();
process.exit(0);
});
process.on('SIGINT', async () => {
await worker.stop();
process.exit(0);
});
// Start the worker
worker.start().catch((error) => {
console.error('[PULSE Worker] Fatal error:', error);
process.exit(1);
});