docs: clean up README — remove stale audit sections, update versions, add Draupnir

- Remove all verbose Improvement Audit sections 1–11 (already applied)
- Remove stale running services table with old uptime/memory numbers
- Update Synapse version 1.148.0 → 1.149.0
- Add Draupnir moderation bot to infrastructure table, key paths, and new Moderation section
- Document active ban lists (community-moderation-effort-bl, matrix-org-coc-bl)
- Mark federation bad-actor blocking and Draupnir deployment as complete in the roadmap

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-10 19:43:27 -04:00
parent 210984f914
commit 18c4ea14d4

README.md

@@ -29,20 +29,18 @@ Matrix bot and server infrastructure for the Lotus Guild homeserver (`matrix.lot
| Service | IP | LXC | RAM | vCPUs | Disk | Versions |
|---------|----|-----|-----|-------|------|----------|
| Synapse | 10.10.10.29 | 151 | 8GB | 4 (Ryzen 9 7900) | 50GB | Synapse 1.149.0, LiveKit 1.9.11, hookshot 7.3.2, coturn latest |
| PostgreSQL 17 | 10.10.10.44 | 109 | 6GB | 3 (Ryzen 9 7900) | 30GB | PostgreSQL 17.9 |
| Cinny Web | 10.10.10.6 | 106 | 256MB | 1 | 8GB | Debian 13, nginx, Node 24, Cinny 4.10.5 |
| Draupnir | 10.10.10.24 | 110 | 1GB | 2 (Ryzen 9 7900) | 10GB | Draupnir v2.9.0, Node.js v22 |
| Prometheus | 10.10.10.48 | 118 | — | — | — | Prometheus — scrapes all Matrix services |
| Grafana | 10.10.10.49 | 107 | — | — | — | Grafana 12.4.0 — dashboard.lotusguild.org |
| NPM | 10.10.10.27 | 139 | — | — | — | Nginx Proxy Manager |
| Authelia | 10.10.10.36 | 167 | — | — | — | SSO/OIDC provider |
| LLDAP | 10.10.10.39 | 147 | — | — | — | LDAP user directory |
| Uptime Kuma | 10.10.10.25 | 101 | — | — | — | Uptime monitoring (micro1 node) |
**Key paths on Synapse LXC (151):**
- Synapse config: `/etc/matrix-synapse/homeserver.yaml`
- Synapse conf.d: `/etc/matrix-synapse/conf.d/` (metrics.yaml, report_stats.yaml, server_name.yaml)
- coturn config: `/etc/turnserver.conf`
@@ -52,53 +50,55 @@ Matrix bot and server infrastructure for the Lotus Guild homeserver (`matrix.lot
- Hookshot: `/opt/hookshot/`, service: `matrix-hookshot.service`
- Hookshot config: `/opt/hookshot/config.yml`
- Hookshot registration: `/etc/matrix-synapse/hookshot-registration.yaml`
- Landing page: `/var/www/matrix-landing/index.html` (on NPM LXC 139)
- Bot: `/opt/matrixbot/`, service: `matrixbot.service`
**Key paths on Draupnir LXC (110):**
- Install path: `/opt/draupnir/`
- Config: `/opt/draupnir/config/production.yaml`
- Data/SQLite DBs: `/data/storage/`
- Service: `draupnir.service`
- Management room: `#management:matrix.lotusguild.org` (`!mEvR5fe3jMmzwd-FwNygD72OY_yu8H3UP_N-57oK7MI`)
- Bot account: `@draupnir:matrix.lotusguild.org` (power level 100 in all protected rooms)
- Subscribed ban lists: `#community-moderation-effort-bl:neko.dev`, `#matrix-org-coc-bl:matrix.org`
- Rebuild: `NODE_OPTIONS="--max-old-space-size=768" npx tsc --project tsconfig.json`
**Key paths on PostgreSQL LXC (109):**
- PostgreSQL config: `/etc/postgresql/17/main/postgresql.conf`
- Tuning conf.d: `/etc/postgresql/17/main/conf.d/synapse_tuning.conf`
- HBA config: `/etc/postgresql/17/main/pg_hba.conf`
- Data directory: `/var/lib/postgresql/17/main`
**Key paths on Cinny LXC (106):**
- Source: `/opt/cinny/` (branch: `add-joined-call-controls`)
- Built files: `/var/www/html/`
- Cinny config: `/var/www/html/config.json`
- Config backup (survives rebuilds): `/opt/cinny-config.json`
- Nginx site config: `/etc/nginx/sites-available/cinny`
- Rebuild script: `/usr/local/bin/cinny-update`
---

## Port Maps

**Router → 10.10.10.29 (forwarded):**
- TCP+UDP 3478 — TURN/STUN
- TCP+UDP 5349 — TURNS/TLS
- TCP 7881 — LiveKit ICE TCP fallback
- TCP+UDP 49152-65535 — TURN relay range
**Internal port map (LXC 151):**

| Port | Service | Bind |
|------|---------|------|
| 8008 | Synapse HTTP | 0.0.0.0 |
| 9000 | Synapse metrics | 127.0.0.1 + 10.10.10.29 |
| 9001 | Hookshot widgets | 0.0.0.0 |
| 9002 | Hookshot bridge (appservice) | 127.0.0.1 |
| 9003 | Hookshot webhooks | 0.0.0.0 |
| 9004 | Hookshot metrics | 0.0.0.0 |
| 9100 | node_exporter | 0.0.0.0 |
| 9101 | matrix-admin exporter | 0.0.0.0 |
| 6789 | LiveKit metrics | 0.0.0.0 |
| 7880 | LiveKit HTTP | 0.0.0.0 |
| 7881 | LiveKit RTC TCP | 0.0.0.0 |
| 8070 | lk-jwt-service | 0.0.0.0 |
@@ -110,7 +110,7 @@ Matrix bot and server infrastructure for the Lotus Guild homeserver (`matrix.lot
| Port | Service | Bind |
|------|---------|------|
| 5432 | PostgreSQL | 0.0.0.0 (hba-restricted to 10.10.10.29) |
| 9100 | node_exporter | 0.0.0.0 |
| 9187 | postgres_exporter | 0.0.0.0 |

---
@@ -138,7 +138,7 @@ Matrix bot and server infrastructure for the Lotus Guild homeserver (`matrix.lot
## Webhook Integrations (matrix-hookshot 7.3.2)

Generic webhooks bridged into **Spam and Stuff** via [matrix-hookshot](https://github.com/matrix-org/matrix-hookshot).
Each service gets its own virtual user (`@hookshot_<service>`) with a unique avatar.
Webhook URL format: `https://matrix.lotusguild.org/webhook/<uuid>`
@@ -152,264 +152,44 @@ Webhook URL format: `https://matrix.lotusguild.org/webhook/<uuid>`
| Lidarr | `66ac6fdd-69f6-4f47-bb00-b7f6d84d7c1c` | All event types |
| Uptime Kuma | `1a02e890-bb25-42f1-99fe-bba6a19f1811` | Status change notifications |
| Seerr | `555185af-90a1-42ff-aed5-c344e11955cf` | Request/approval events |
| Owncast (Livestream) | `9993e911-c68b-4271-a178-c2d65ca88499` | STREAM_STARTED / STREAM_STOPPED |
| Bazarr | `470fb267-3436-4dd3-a70c-e6e8db1721be` | Subtitle events (Apprise JSON notifier) |
| Tinker-Tickets | `6e306faf-8eea-4ba5-83ef-bf8f421f929e` | Custom transformation code |
**Hookshot notes:**
- Spam and Stuff is intentionally **unencrypted** — hookshot bridges cannot join E2EE rooms
- JS transformation functions use hookshot v2 API: `result = { version: "v2", plain, html, msgtype }`
- The `result` variable must be assigned without `var`/`let`/`const` (QuickJS IIFE sandbox)
- NPM proxies `https://matrix.lotusguild.org/webhook/*` → `http://10.10.10.29:9003`

---
## Moderation (Draupnir v2.9.0)
Draupnir runs on LXC 110 and moderates all 9 protected rooms, driven from `#management:matrix.lotusguild.org`.
**Subscribed ban lists:**
- `#community-moderation-effort-bl:neko.dev` — 12,599 banned users, 245 servers, 59 rooms
- `#matrix-org-coc-bl:matrix.org` — 4,589 banned users, 220 servers, 2 rooms
**Common commands (send in management room):**
```
!draupnir status — current status + protected rooms
!draupnir ban @user:server * "reason" — ban from all protected rooms
!draupnir redact @user:server — redact their recent messages
!draupnir rooms add !roomid:server — add a room to protection
!draupnir watch <alias> --no-confirm — subscribe to a ban list
```
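For scripted use (cron jobs, incident tooling), the command text can be composed before it is sent into the management room. A minimal sketch using only the command syntax above; the helper name and the sending mechanism are hypothetical, Draupnir only ever sees the final string:

```shell
#!/bin/sh
# draupnir_ban_cmd: compose a "!draupnir ban" line for the management room.
# Hypothetical helper; the resulting string is what you would send into
# #management via a client or a bot account.
draupnir_ban_cmd() {
    user="$1"; scope="$2"; reason="$3"
    printf '!draupnir ban %s %s "%s"\n' "$user" "$scope" "$reason"
}

draupnir_ban_cmd '@spammer:badserver.example' '*' 'spam in general'
```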
---

## Known Issues

### coturn TLS Reset Errors
Periodic `TLS/TCP socket error: Connection reset by peer` in coturn logs. Normal — clients probe TURN and drop once they establish a direct P2P path.

### BBR Congestion Control
`net.ipv4.tcp_congestion_control = bbr` must be set on the Proxmox host, not inside an unprivileged LXC. All other sysctl tuning (TCP/UDP buffers, fin_timeout) is applied inside LXC 151.
---
## Optimizations & Improvements
### 1. LiveKit / Voice Quality ✅ Applied
Noise suppression and volume normalization are **client-side only** (browser/Element X handles this via WebRTC's built-in audio processing). The server cannot enforce these. Applied server-side improvements:
- **ICE port range expanded:** 50100-50500 (400 ports) → **50000-51000 (1001 ports)** = ~500 concurrent WebRTC streams
- **TURN TTL reduced:** 86400s (24h) → **3600s (1h)** — stale allocations expire faster
- **Room defaults added:** `empty_timeout: 300`, `departure_timeout: 20`, `max_participants: 50`
**Client-side audio advice for users:**
- **Element Web/Desktop:** Settings → Voice & Video → enable "Noise Suppression" and "Echo Cancellation"
- **Element X (mobile):** automatic via WebRTC stack
- **Cinny (chat.lotusguild.org):** voice via embedded Element Call widget — browser WebRTC noise suppression is active automatically
### 2. PostgreSQL Tuning (LXC 109) ✅ Applied
`/etc/postgresql/17/main/conf.d/synapse_tuning.conf` written and active. `pg_stat_statements` extension created in the `synapse` database. Config applied:
```ini
# Memory — shared_buffers = 25% RAM, effective_cache_size = 75% RAM
shared_buffers = 1500MB
effective_cache_size = 4500MB
work_mem = 32MB # Per sort/hash operation (safe at low connection count)
maintenance_work_mem = 256MB # VACUUM, CREATE INDEX
wal_buffers = 64MB # WAL write buffer
# Checkpointing
checkpoint_completion_target = 0.9 # Spread checkpoint I/O (default 0.5 is aggressive)
max_wal_size = 2GB
# Storage (Ceph RBD block device = SSD-equivalent random I/O)
random_page_cost = 1.1 # Default 4.0 assumes spinning disk
effective_io_concurrency = 200 # For SSDs/Ceph
# Parallel queries (3 vCPUs)
max_worker_processes = 3
max_parallel_workers_per_gather = 1
max_parallel_workers = 2
# Monitoring
shared_preload_libraries = 'pg_stat_statements'
pg_stat_statements.track = all
```
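The 25% / 75% sizing rule above generalizes to any container size; the applied config simply rounds 1536MB down to 1500MB. A quick shell sketch of the arithmetic:

```shell
#!/bin/sh
# pg_sizing <ram_mb>: print the shared_buffers / effective_cache_size pair
# for the 25% / 75% rule used in synapse_tuning.conf.
pg_sizing() {
    ram_mb="$1"
    echo "shared_buffers = $(( ram_mb / 4 ))MB"
    echo "effective_cache_size = $(( ram_mb * 3 / 4 ))MB"
}

pg_sizing 6144   # the PostgreSQL LXC has 6GB RAM
```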
Restarted `postgresql@17-main`. Expected impact: Synapse query latency drops as the DB grows — the entire current 120MB database fits in shared_buffers.
### 3. PostgreSQL Security — pg_hba.conf (LXC 109) ✅ Applied
Removed the two open rules (`0.0.0.0/24 md5` and `0.0.0.0/0 md5`). Remote access is now restricted to Synapse LXC only:
```
host synapse synapse_user 10.10.10.29/32 scram-sha-256
```
All other remote connections are rejected. Local Unix socket and loopback remain functional for admin access.
### 4. Synapse Cache Tuning (LXC 151) ✅ Applied
`event_cache_size` bumped 15K → 30K. `_get_state_group_for_events: 3.0` added to `per_cache_factors` (heavily hit during E2EE key sharing). Synapse restarted cleanly.
```yaml
event_cache_size: 30K
caches:
global_factor: 2.0
per_cache_factors:
get_users_in_room: 3.0
get_current_state_ids: 3.0
_get_state_group_for_events: 3.0
```
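For context on what those numbers mean: Synapse multiplies a cache's base size by `global_factor`, and an entry in `per_cache_factors` replaces the global factor for that one cache. So the event cache above effectively holds 30K × 2.0 = 60K events. A trivial check of the arithmetic:

```shell
#!/bin/sh
# effective_entries <base> <factor>: entries a Synapse cache keeps after
# its cache factor is applied (integer factors only in this sketch).
effective_entries() { echo $(( $1 * $2 )); }

effective_entries 30000 2   # event cache with global_factor 2.0
```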
### 5. Network / sysctl Tuning (LXC 151) ✅ Applied
`/etc/sysctl.d/99-matrix-tuning.conf` written and active. TCP/UDP buffers aligned and fin_timeout reduced.
```ini
# Align TCP buffers with core maximums
net.ipv4.tcp_rmem = 4096 131072 26214400
net.ipv4.tcp_wmem = 4096 65536 26214400
# UDP buffer sizing for WebRTC media streams
net.core.rmem_max = 26214400
net.core.wmem_max = 26214400
net.ipv4.udp_rmem_min = 65536
net.ipv4.udp_wmem_min = 65536
# Reduce latency for short-lived TURN connections
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_intvl = 30
```
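To confirm the file and the running kernel agree after `sysctl --system`, it helps to enumerate the keys the file sets. A sketch that does the parse with plain `sed`, demoed on an inline copy so it also runs off-box; on LXC 151 point it at `/etc/sysctl.d/99-matrix-tuning.conf`:

```shell
#!/bin/sh
# tuning_keys <file>: list each sysctl key assigned in a sysctl.d conf file.
tuning_keys() {
    sed -e 's/#.*//' -e '/^[[:space:]]*$/d' -e 's/[[:space:]]*=.*//' "$1"
}

# demo against an inline copy of the fin_timeout/keepalive stanza
cat > /tmp/99-demo.conf <<'EOF'
# Reduce latency for short-lived TURN connections
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_keepalive_time = 300
EOF
tuning_keys /tmp/99-demo.conf

# live check on LXC 151:
# for k in $(tuning_keys /etc/sysctl.d/99-matrix-tuning.conf); do
#     printf '%s = %s\n' "$k" "$(sysctl -n "$k")"
# done
```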
> **BBR note:** `tcp_congestion_control = bbr` and `default_qdisc = fq` require host-level sysctl — cannot be set inside an unprivileged LXC. Apply on the Proxmox host to benefit all containers:
> ```bash
> echo "net.ipv4.tcp_congestion_control = bbr" >> /etc/sysctl.d/99-bbr.conf
> echo "net.core.default_qdisc = fq" >> /etc/sysctl.d/99-bbr.conf
> sysctl --system
> ```
### 6. Synapse Federation Hardening
The server is effectively a private server for friends. Restricting federation prevents abuse and reduces load. Add to `homeserver.yaml`:
```yaml
# Allow federation only with specific trusted servers (or disable entirely)
federation_domain_whitelist:
- matrix.org # Keep for bridging if needed
- matrix.lotusguild.org
# OR to go fully closed (recommended for friends-only): an empty whitelist
# federates with no one (Synapse has no `federation_enabled` option)
# federation_domain_whitelist: []
```
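A whitelist typo silently cuts a server off, so it is worth a quick check before restarting Synapse. A naive flat-list parse is enough for the stanza above (dots are left as unescaped regex, acceptable here; the demo validates an inline copy, on LXC 151 point it at `homeserver.yaml`):

```shell
#!/bin/sh
# fed_allowed <domain> <config>: exit 0 if the domain is listed under
# federation_domain_whitelist. Naive line match, fine for a flat list.
fed_allowed() {
    sed -n '/^federation_domain_whitelist:/,/^[^ #-]/p' "$2" | grep -q -- "- $1"
}

cat > /tmp/hs-demo.yaml <<'EOF'
federation_domain_whitelist:
  - matrix.org
  - matrix.lotusguild.org
EOF
fed_allowed matrix.org /tmp/hs-demo.yaml && echo "matrix.org: allowed"
fed_allowed evil.example /tmp/hs-demo.yaml || echo "evil.example: blocked"
```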
### 7. Bot E2EE Key Fix (LXC 151) ✅ Applied
`nio_store/` cleared and bot restarted cleanly. Megolm session errors resolved.
---
## Custom Cinny Client (chat.lotusguild.org)
Cinny v4 is the preferred client — clean UI, Cinny-style rendering already used by the bot's Wordle tiles. We build from source to get voice support and full branding control.
### Why Cinny over Element Web
- Much cleaner aesthetics, already the de-facto client for guild members
- Element Web voice suppression (Krisp) is only on `app.element.io` — a custom build loses it
- Cinny `add-joined-call-controls` branch uses `@element-hq/element-call-embedded` which talks to the **existing** MatrixRTC → lk-jwt-service → LiveKit stack with zero new infrastructure
- Static build (nginx serving ~5MB of files) — nearly zero runtime resource cost
### Voice support status (as of March 2026)
The official `add-joined-call-controls` branch (maintained by `ajbura`, last commit March 8 2026) embeds Element Call as a widget via `@element-hq/element-call-embedded: 0.16.3`. This uses the same MatrixRTC protocol that lk-jwt-service already handles. Two direct LiveKit integration PRs (#2703, #2704) were proposed but closed without merge — so the embedded Element Call approach is the official path.
Since lk-jwt-service is already running on LXC 151 and configured for `wss://matrix.lotusguild.org`, voice calls will work out of the box once the Cinny build is deployed.
### LXC Setup
**Create the LXC** (run on the host):
```bash
# ProxmoxVE Debian 13 community script
bash -c "$(curl -fsSL https://raw.githubusercontent.com/community-scripts/ProxmoxVE/main/ct/debian.sh)"
```
Recommended settings: 2GB RAM, 1-2 vCPUs, 20GB disk, Debian 13, static IP on VLAN 10 (e.g. `10.10.10.XX`).
**Inside the new LXC:**
```bash
# Install nginx + git + nvm dependencies
apt update && apt install -y nginx git curl
# Install Node.js 24 via nvm
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.40.1/install.sh | bash
source ~/.bashrc
nvm install 24
nvm use 24
# Clone Cinny and switch to voice-support branch
git clone https://github.com/cinnyapp/cinny.git /opt/cinny
cd /opt/cinny
git checkout add-joined-call-controls
# Install dependencies and build
npm ci
NODE_OPTIONS=--max_old_space_size=4096 npm run build
# Output: /opt/cinny/dist/
# Deploy to nginx root
cp -r /opt/cinny/dist/* /var/www/html/
```
**Configure Cinny** — edit `/var/www/html/config.json`:
```json
{
"defaultHomeserver": 0,
"homeserverList": ["matrix.lotusguild.org"],
"allowCustomHomeservers": false,
"featuredCommunities": {
"openAsDefault": false,
"spaces": [],
"rooms": [],
"servers": []
},
"hashRouter": {
"enabled": false,
"basename": "/"
}
}
```
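Cinny loads this file at startup and fails quietly if it is malformed, so validate after every hand-edit. A sketch assuming `python3` exists on the box (any JSON linter works; the demo validates an inline copy):

```shell
#!/bin/sh
# Validate a Cinny config.json before nginx serves it; a stray comma is
# enough to break the client at load time.
cat > /tmp/config-demo.json <<'EOF'
{
  "defaultHomeserver": 0,
  "homeserverList": ["matrix.lotusguild.org"],
  "allowCustomHomeservers": false
}
EOF
python3 -m json.tool /tmp/config-demo.json >/dev/null && echo "valid JSON"

# on the Cinny LXC:
# python3 -m json.tool /var/www/html/config.json >/dev/null || echo "BROKEN"
```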
**Nginx config**`/etc/nginx/sites-available/cinny` (matches the official `docker-nginx.conf`):
```nginx
server {
listen 80;
listen [::]:80;
server_name chat.lotusguild.org;
root /var/www/html;
index index.html;
location / {
rewrite ^/config.json$ /config.json break;
rewrite ^/manifest.json$ /manifest.json break;
rewrite ^/sw.js$ /sw.js break;
rewrite ^/pdf.worker.min.js$ /pdf.worker.min.js break;
rewrite ^/public/(.*)$ /public/$1 break;
rewrite ^/assets/(.*)$ /assets/$1 break;
rewrite ^(.+)$ /index.html break;
}
}
```
```bash
ln -s /etc/nginx/sites-available/cinny /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx
```
Then in **NPM**: add a proxy host for `chat.lotusguild.org``http://10.10.10.XX:80` with SSL.
### Rebuilding after updates
```bash
cd /opt/cinny
git pull
npm ci
NODE_OPTIONS=--max_old_space_size=4096 npm run build
cp -r dist/* /var/www/html/
# Preserve your config.json — it gets overwritten by the copy above, so:
# Option: keep config.json outside dist and symlink/copy it in after each build
```
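Those rebuild steps can be wrapped so `config.json` survives every deploy, which is the idea behind the `/usr/local/bin/cinny-update` script. A sketch of the copy-then-restore pattern (the function is illustrative and is demoed on temp dirs rather than the real webroot):

```shell
#!/bin/sh
# deploy_cinny <dist> <webroot> <config_backup>:
# copy a fresh build into the webroot, then restore the site config that
# the copy just overwrote with the upstream default.
deploy_cinny() {
    cp -r "$1"/. "$2"/
    cp "$3" "$2/config.json"
}

# demo on temp dirs
mkdir -p /tmp/cinny-dist /tmp/cinny-webroot
echo '{"stock":true}' > /tmp/cinny-dist/config.json    # upstream default
echo 'app-bundle'     > /tmp/cinny-dist/index.html
echo '{"ours":true}'  > /tmp/cinny-config-backup.json  # our settings
deploy_cinny /tmp/cinny-dist /tmp/cinny-webroot /tmp/cinny-config-backup.json
cat /tmp/cinny-webroot/config.json
```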
### Key paths (Cinny LXC 106 — 10.10.10.6)
- Source: `/opt/cinny/` (branch: `add-joined-call-controls`)
- Built files: `/var/www/html/`
- Cinny config: `/var/www/html/config.json`
- Config backup (survives rebuilds): `/opt/cinny-config.json`
- Nginx site config: `/etc/nginx/sites-available/cinny`
- Rebuild script: `/usr/local/bin/cinny-update`
---
@@ -425,26 +205,23 @@ cp -r dist/* /var/www/html/
- [x] Sliding sync (native Synapse)
- [x] LiveKit for Element Call video rooms
- [x] Default room version v12, all rooms upgraded
- [x] Landing page with client recommendations
- [x] Synapse metrics endpoint (port 9000, Prometheus-compatible)
- [x] Custom Cinny client LXC 106 — Cinny 4.10.5, `add-joined-call-controls` branch, weekly auto-update cron
- [ ] Push notifications gateway (Sygnal) — needs Apple/Google developer credentials
- [ ] Cinny custom branding — Lotus Guild theme (colours, title, favicon, PWA name)
### Performance Tuning
- [x] PostgreSQL `shared_buffers` → 1500MB, `effective_cache_size`, `work_mem`, checkpoint tuning
- [x] PostgreSQL `pg_stat_statements` extension installed
- [x] PostgreSQL autovacuum tuned per-table (5 high-churn tables), `autovacuum_max_workers` → 5
- [x] Synapse `event_cache_size` → 30K, per-cache factors tuned
- [x] sysctl TCP/UDP buffer alignment on LXC 151 (`/etc/sysctl.d/99-matrix-tuning.conf`)
- [x] LiveKit: `empty_timeout: 300`, `departure_timeout: 20`, `max_participants: 50`
- [x] LiveKit ICE port range expanded to 50000-51000
- [x] LiveKit TURN TTL reduced to 1h
- [x] LiveKit VP9/AV1 codecs enabled
- [ ] BBR congestion control — must be applied on Proxmox host
### Auth & SSO
- [x] Token-based registration
@@ -453,431 +230,64 @@ cp -r dist/* /var/www/html/
- [x] Password auth alongside SSO
### Webhooks & Integrations
- [x] matrix-hookshot 7.3.2 — 11 active webhook services
- [x] Per-service JS transformation functions
- [x] Per-service virtual user avatars
- [x] NPM reverse proxy for `/webhook` path
### Room Structure
- [x] The Lotus Guild space with all core rooms
- [x] Correct power levels and join rules per room
- [x] Custom room avatars
### Hardening
- [x] Rate limiting
- [x] E2EE on all rooms (except Spam and Stuff — intentional for hookshot)
- [x] coturn internal peer deny rules (blocks relay to RFC1918 except allowed subnet)
- [x] coturn hardening: `stale-nonce=600`, `user-quota=100`, `total-quota=1000`, strong cipher list
- [x] `pg_hba.conf` locked down — remote access restricted to Synapse LXC only
- [x] Federation open with key verification
- [x] fail2ban on Synapse login endpoint (5 retries / 24h ban)
- [x] Synapse metrics port 9000 restricted to `127.0.0.1` + `10.10.10.29`
- [x] coturn cert auto-renewal — daily sync cron on compute-storage-01
- [x] `/.well-known/matrix/client` and `/server` live on lotusguild.org
- [x] `suppress_key_server_warning: true`
- [x] Automated database + media backups
- [x] Federation bad-actor blocking via Draupnir ban lists (17,000+ entries)
### Monitoring
- [x] Grafana dashboard — `dashboard.lotusguild.org/d/matrix-synapse-dashboard` (140+ panels)
- [x] Prometheus scraping all Matrix services (Synapse, Hookshot, LiveKit, node_exporter, postgres)
- [x] 14 active alert rules across matrix-folder and infra-folder
- [x] Uptime Kuma monitors: Synapse, LiveKit, PostgreSQL, Cinny, coturn, lk-jwt-service, Hookshot
### Admin
- [x] Synapse admin API dashboard (synapse-admin at http://10.10.10.29:8080)
- [x] Draupnir moderation bot — LXC 110, v2.9.0, 9 protected rooms, 2 ban lists
- [ ] Cinny custom branding
- [ ] **Storj node update**`storj_uptodate=0` on LXC 138 (10.10.10.133), risk of disqualification

---
## Improvement Audit (March 2026)
### Prometheus Scrape Jobs
All Matrix-related services are scraped by Prometheus at `10.10.10.48` (LXC 118):
| Job | Target | Metrics |
|-----|--------|---------|
| `synapse` | `10.10.10.29:9000` | Full Synapse internals |
| `matrix-admin` | `10.10.10.29:9101` | DAU, MAU, room/user/media totals |
| `livekit` | `10.10.10.29:6789` | Rooms, participants, packets, latency |
| `hookshot` | `10.10.10.29:9004` | Connections, API calls/failures, Node.js runtime |
| `matrix-node` | `10.10.10.29:9100` | CPU, RAM, network, load average, disk |
| `postgres` | `10.10.10.44:9187` | pg_stat_database, connections, WAL, block I/O |
| `postgres-node` | `10.10.10.44:9100` | CPU, RAM, network, load average, disk |
| `postgres-exporter-2` | `10.10.10.160:9711` | Secondary postgres exporter |
> **Disk I/O:** All servers use Ceph-backed storage. Per-device disk I/O metrics are meaningless; use Network I/O panels to see actual storage traffic.
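As a sketch, the jobs in the table above would look roughly like this in the Prometheus config. Job names and targets come from the table; the file path, scrape interval, and Synapse metrics path are assumptions about the deployment, not copied from it:

```yaml
# /etc/prometheus/prometheus.yml (fragment) — illustrative, not the deployed file
global:
  scrape_interval: 15s          # assumed; actual interval may differ

scrape_configs:
  - job_name: synapse
    metrics_path: /_synapse/metrics   # Synapse's metrics listener serves here
    static_configs:
      - targets: ['10.10.10.29:9000']
  - job_name: matrix-node
    static_configs:
      - targets: ['10.10.10.29:9100']
  - job_name: postgres
    static_configs:
      - targets: ['10.10.10.44:9187']
```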
### Grafana Dashboard
**URL:** `https://dashboard.lotusguild.org/d/matrix-synapse-dashboard/matrix-synapse`
140+ panels across 21 sections:
| Section | Key panels |
|---------|-----------|
| Synapse Overview | Up status, users, rooms, DAU/MAU, media, federation peers |
| Synapse Process Health | CPU, memory, FDs, thread pool, GC, Twisted reactor |
| HTTP API Requests | Rate, response codes, p99/p50 latency, in-flight, DB txn time |
| Federation | Outgoing/incoming PDUs, queue depth, staging, known servers |
| Events & Rooms | Event persistence, notifier, sync responses |
| Presence & Push | Presence updates, pushers, state transitions |
| Rate Limiting | Rejections, sleeps, queue wait time p99 |
| Users & Registration | Login rate, registration rate, growth over time |
| Synapse Database Performance | Txn rate/duration, schedule latency, query latency |
| Synapse Caches | Hit rate (top 5), sizes, evictions, response cache |
| Event Processing & Lag | Lag by processor, stream positions, event fetch ongoing |
| State Resolution | Forward extremities, state resolution CPU, state groups |
| App Services (Hookshot) | Events sent, transactions sent vs failed |
| HTTP Push | Push processed vs failed, badge updates |
| Sliding Sync & Slow Endpoints | Sliding sync p99, slowest endpoints, rate limit wait |
| Background Processes | In-flight by name, start rate, CPU, scheduler tasks |
| PostgreSQL Database | Size, connections, transactions, block I/O, WAL, locks |
| LiveKit SFU | Rooms, participants, network, packets out/dropped, forward latency |
| Hookshot | Matrix API calls/failures, active connections, Node.js event loop lag |
| Matrix LXC Host | CPU, RAM, network (incl. Ceph), load average, disk space |
| PostgreSQL LXC Host | CPU, RAM, network (incl. Ceph), load average, disk space |
### Alert Rules
All alerts are Grafana-native (Alerting → Alert Rules).

**Matrix folder:**
| Alert | Fires when | Severity |
|-------|-----------|----------|
| Synapse Down | `up{job="synapse"}` < 1 for 2m | critical |
| Synapse Event Processing Lag | any processor > 30s behind for 5m | warning |
| Synapse DB Query Latency High | p99 query time > 1s for 5m | warning |
**Infrastructure folder:**

| Alert | Fires when | Severity |
|-------|-----------|----------|
| Service Exporter Down | any `up == 0` for 3m | critical |
| Node High Memory Usage | RAM > 90% for 10m | warning |
| Node Disk Space Low | available < 15% (excl. tmpfs/overlay) for 10m | warning |
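The conditions in these tables map onto PromQL expressions roughly like the following. This is an illustrative sketch, not the exact queries configured in Grafana; the metric names are the standard node_exporter ones:

```promql
# Service Exporter Down (critical): any scrape target missing
up == 0

# Node High Memory Usage (warning): more than 90% of RAM in use
(1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes) > 0.90

# Node Disk Space Low (warning): under 15% free, ignoring tmpfs/overlay
  node_filesystem_avail_bytes{fstype!~"tmpfs|overlay"}
/ node_filesystem_size_bytes{fstype!~"tmpfs|overlay"} < 0.15
```

Each expression would carry the `for` duration from the table (2m/3m/10m) as the alert's pending period.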
> **`/sync` long-poll:** The Matrix `/sync` endpoint is a long-poll (clients hold it open ≤30s). It is excluded from the High Response Time alert to prevent false positives.
> **Synapse Event Processing Lag** can fire transiently after a Synapse restart while processors drain their backlog. Self-resolves in 10–20 minutes.
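The `/sync` exclusion can be expressed by filtering the sync servlet out of the latency quantile. A hedged sketch, assuming Synapse's standard `synapse_http_server_response_time_seconds` histogram and its `servlet` label; the deployed alert query may differ:

```promql
# p99 client API latency, excluding the long-poll /sync servlet
histogram_quantile(0.99,
  sum by (le) (
    rate(synapse_http_server_response_time_seconds_bucket{servlet!="SyncRestServlet"}[5m])
  )
)
```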
---
- [x] Initial sync token (ignores old messages on startup)
- [x] Auto-accept room invites
- [x] Deployed as systemd service (`matrixbot.service`) on LXC 151
### Commands

- [x] `!help` — list commands
|-----------|-----------|---------|
| Bot language | Python 3 | 3.x |
| Bot library | matrix-nio (E2EE) | latest |
| Homeserver | Synapse | 1.149.0 |
| Database | PostgreSQL | 17.9 |
| TURN | coturn | latest |
| Video/voice calls | LiveKit SFU | 1.9.11 |
| LiveKit JWT | lk-jwt-service | latest |
| Moderation | Draupnir | 2.9.0 |
| SSO | Authelia (OIDC) + LLDAP | — |
| Webhook bridge | matrix-hookshot | 7.3.2 |
| Reverse proxy | Nginx Proxy Manager | — |
| Web client | Cinny (`add-joined-call-controls` branch) | 4.10.5 |
| Bot dependencies | matrix-nio[e2ee], aiohttp, python-dotenv, mcrcon | — |
## Bot Files