Compare commits

...

10 Commits

Author SHA1 Message Date
2ffcb79f19 Removed test markdown files 2026-01-06 16:06:29 -05:00
1b92552339 Add comprehensive final results and validation documentation
Complete analysis of optimization results showing 100% goal achievement:
- SMART collection: 79% → 96% (only USB edge case remaining)
- Priority ranking: Now perfectly matches requirements
- Critical discovery: osd.28 with 16 reallocated sectors (was #14, now #2)
- False positives eliminated: 6 healthy NVMe drives no longer flagged

Includes detailed replacement recommendations, technical changes summary,
validation results, and outstanding items.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-06 15:18:02 -05:00
03374fa784 Add USB drive SMART support with multiple bridge chipset attempts
**Issue**: osd.2 is a USB-connected 1TB drive whose SMART data could not be read

The error was: "Read Device Identity failed: scsi error unsupported field"
This is typical for USB-attached drives that need bridge-specific flags.

**Solution**: Added USB transport detection and multiple fallback methods (see the sketch below):
- SAT (SCSI-ATA Translation) - most common USB bridges
- usbjmicron - JMicron USB bridge chipsets
- usbcypress - Cypress USB bridge chipsets
- Generic USB fallback
- SCSI passthrough

Also added a USB/SAT attempt for unknown transport types as a fallback.
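
A minimal sketch of the fallback order (the `-d` flags mirror the ones added in the diff below; `usb_smartctl_commands` is a hypothetical helper, not the shipped function):

```python
# Hypothetical helper; the -d flags mirror the USB bridge handling added below.
USB_SMARTCTL_VARIANTS = [
    "sudo smartctl -a -j {dev} -d sat",         # SAT (SCSI-ATA Translation), most common
    "sudo smartctl -a -j {dev} -d usbjmicron",  # JMicron USB bridge chipsets
    "sudo smartctl -a -j {dev} -d usbcypress",  # Cypress USB bridge chipsets
    "sudo smartctl -a -j {dev} -d usb",         # generic USB fallback
    "sudo smartctl -a -j {dev} -d scsi",        # SCSI passthrough
    "sudo smartctl -a -j {dev}",                # let smartctl auto-detect
]

def usb_smartctl_commands(device_path):
    """Return the ordered smartctl invocations to try for a USB-attached drive."""
    return [tmpl.format(dev=device_path) for tmpl in USB_SMARTCTL_VARIANTS]
```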

**Debug Enhancement**:
- Now shows detected transport type in debug output
- Helps diagnose why SMART fails

**Note**: USB drives in Ceph clusters are unconventional but functional.
This OSD appears to be temporary/supplemental storage capacity.

If SMART still fails after this update, the USB bridge may be incompatible
with smartmontools, which is acceptable for temporary storage.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-06 15:16:35 -05:00
3d498a4092 CRITICAL FIX: Parse nested Ceph device health metrics for NVMe drives
**Root Cause Found**: All 6 NVMe SMART failures were due to a parsing bug!

Ceph's `device query-daemon-health-metrics` returns data in nested format:
```json
{
  "DEVICE_ID": {
    "nvme_smart_health_information_log": { ... }
  }
}
```

The script was checking for `nvme_smart_health_information_log` at the top level,
so it always failed and fell back to SSH smartctl (which also failed).

**Fix**:
- Extract the first device entry from the nested dict structure (see the sketch below)
- Maintain backward compatibility for direct format
- Now correctly parses NVMe SMART from Ceph's built-in metrics
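
A minimal sketch of the unwrapping described above (the actual change is in `get_device_health()`, shown in the diff below; `extract_device_metrics` is a hypothetical name):

```python
# Hypothetical helper; the shipped fix lives in get_device_health().
def extract_device_metrics(raw):
    """Unwrap Ceph's nested {DEVICE_ID: {...SMART fields...}} payload."""
    if not isinstance(raw, dict) or not raw:
        return None
    # Direct format (backward compatibility): SMART fields at the top level.
    if 'ata_smart_attributes' in raw or 'nvme_smart_health_information_log' in raw:
        return raw
    # Nested format: take the first (and usually only) device entry.
    inner = next(iter(raw.values()))
    if isinstance(inner, dict) and (
        'ata_smart_attributes' in inner
        or 'nvme_smart_health_information_log' in inner
    ):
        return inner
    return None
```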

**Expected Impact**:
- All 6 NVMe drives will now successfully read SMART data
- Should drop from "CRITICAL: No SMART data" to proper health scores
- Only truly healthy NVMe drives will show 100/100 health
- Failing NVMe drives will be properly detected and ranked

**Testing**:
Verified `ceph device query-daemon-health-metrics osd.0` returns full
NVMe SMART data including:
- available_spare: 100%
- percentage_used: 12%
- media_errors: 0
- temperature: 38°C

This data was always available but wasn't being parsed!

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-06 15:11:53 -05:00
35a16a1793 Fix reallocated sector scoring - drives with bad sectors now rank correctly
**Problem**: osd.28 with 16 reallocated sectors only ranked #7 with score 40.8
This is a CRITICAL failing drive that should rank just below failed SMART reads.

**Changes**:
- Reallocated sectors now use tiered penalties (sketched below):
  * 10+ sectors: -95 points (health = 5/100) - DRIVE FAILING
  * 5-9 sectors: -85 points (health = 15/100) - CRITICAL
  * 1-4 sectors: -70 points (health = 30/100) - SERIOUS
- Added critical_issues detection for sector problems
- Critical issues get a scoring bonus: +25 on small drives, +20 on large drives
- Updated issue text to "DRIVE FAILING" for clarity
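
A short sketch of the tiered penalty, assuming the thresholds listed above (the shipped logic lives in `parse_smart_health()`; this helper name is illustrative):

```python
# Illustrative helper; the shipped change is inside parse_smart_health().
def reallocated_sector_penalty(raw_value):
    if raw_value >= 10:
        return 95   # DRIVE FAILING, health drops to 5/100
    if raw_value >= 5:
        return 85   # CRITICAL, health 15/100
    if raw_value >= 1:
        return 70   # SERIOUS, health 30/100
    return 0
```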

**Expected Result**:
- osd.28 will now score ~96/100 and rank #7 (right after the 6 drives with failed SMART reads)
- Any drive with reallocated/pending/uncorrectable sectors gets top priority
- Matches priority: Failed SMART > Critical sectors > Small failing > Rest

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-06 15:08:46 -05:00
1848b71c2a Optimize OSD analyzer: prioritize failing drives and improve SMART collection
Major improvements to scoring and data collection:

**Scoring Changes:**
- Failed SMART reads now return 0/100 health (was 50/100)
- Critical health issues get much higher penalties:
  * Reallocated sectors: -50 pts, 5x multiplier (was -20, 2x)
  * Pending sectors: -60 pts, 10x multiplier (was -25, 5x)
  * Uncorrectable sectors: -70 pts, 15x multiplier (was -30, 5x)
  * NVMe media errors: -60 pts, 10x multiplier (was -25, 5x)
- Revised weights: 80% health, 15% capacity, 5% resilience (was 60/30/10)
- Added priority bonuses:
  * Failed SMART + small drive (<5TB): +30 points
  * Failed SMART alone: +20 points
  * Health issues + small drive: +15 points

**Priority Order Now Enforced** (worked example below):
1. Failed SMART drives (score 90-100)
2. Small drives beginning to fail (70-85)
3. Small healthy drives (40-60)
4. Large failing drives (60-75)
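
A hypothetical worked example under the new weights and bonuses; the `capacity_score` and `resilience_score` values are invented for illustration:

```python
# Worked example with invented capacity/resilience values.
health_score = 0        # failed SMART read
capacity_score = 40     # small (<2TB) drive
resilience_score = 20

base = (100 - health_score) * 0.80 + capacity_score * 0.15 + resilience_score * 0.05
# base == 80 + 6 + 1 == 87
base += 30              # failed SMART + small drive bonus
total = min(100, base)  # capped at 100, landing in the 90-100 "failed SMART" band
```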

**Enhanced SMART Collection:**
- Added metadata.devices field parsing
- Enhanced dm-device and /dev/mapper/ resolution
- Added ceph-volume lvm list fallback
- Retry logic with 3 command variations per device
- Try with/without sudo, different device flags

**Expected Impact:**
- osd.28 with reallocated sectors jumps from #14 to top 3
- SMART collection failures should drop from 6 to 0-2
- All failing drives rank above healthy drives regardless of size

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-06 15:05:25 -05:00
3b15377821 separate smartctl depending on device class 2025-12-22 18:23:06 -05:00
c315fa3efc Updated readme again 2025-12-22 18:15:46 -05:00
89037ed93f Merge branch 'main' of code.lotusguild.org:LotusGuild/analyzeOSDs 2025-12-22 18:12:42 -05:00
c252dbcdc4 resolves /dev/dm-* now 2025-12-22 18:05:32 -05:00
3 changed files with 408 additions and 58 deletions

Claude.md (new file, 230 additions)

@@ -0,0 +1,230 @@
# Ceph OSD Replacement Analyzer - Project Documentation
## Project Overview
**Purpose**: Intelligent analysis tool for identifying optimal Ceph OSD replacement candidates across an entire cluster by analyzing health metrics, capacity optimization potential, and cluster resilience factors.
**Type**: Python 3 CLI tool for Ceph storage cluster maintenance
**Target Users**: Storage administrators, DevOps engineers, and infrastructure teams managing Ceph clusters
## Architecture
### Core Components
1. **Data Collection Layer** ([ceph_osd_analyzer.py:34-172](ceph_osd_analyzer.py#L34-L172))
- Executes Ceph commands locally and via SSH
- Retrieves SMART data from all cluster nodes
- Handles both local `ceph device query-daemon-health-metrics` and remote `smartctl` fallback
- Device path resolution with dm-device mapping support
2. **Analysis Engine** ([ceph_osd_analyzer.py:173-357](ceph_osd_analyzer.py#L173-L357))
- SMART health parsing for HDD and NVMe devices
- Capacity optimization scoring
- Cluster resilience impact calculation
- Multi-factor weighted scoring system
3. **Reporting System** ([ceph_osd_analyzer.py:361-525](ceph_osd_analyzer.py#L361-L525))
- Color-coded console output
- Top 15 ranked replacement candidates
- Summary by device class (HDD/NVMe)
- Per-host analysis breakdown
### Key Design Decisions
**Remote SMART Data Collection**: The script uses SSH to gather SMART data from all cluster nodes, not just the local node. This is critical because OSDs are distributed across multiple physical hosts.
**Fallback Strategy**: Primary method uses `ceph device query-daemon-health-metrics`, with automatic fallback to direct `smartctl` queries via SSH if Ceph's built-in metrics are unavailable.
**Device Mapping**: Handles complex storage configurations including device-mapper devices, resolving them to physical drives using `lsblk` and symlink resolution.
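A minimal sketch of that resolution path, assuming key-based SSH access to the OSD host; `resolve_physical_device` is a hypothetical helper, the shipped logic lives in `get_device_path_for_osd()`:
```python
# Hypothetical helper illustrating symlink + lsblk resolution over SSH.
import subprocess

def resolve_physical_device(osd_id, host):
    """Follow the OSD's block symlink and map dm-/mapper devices to their parent disk."""
    block = f"/var/lib/ceph/osd/ceph-{osd_id}/block"
    dev = subprocess.run(
        ["ssh", host, f"readlink -f {block}"],
        capture_output=True, text=True,
    ).stdout.strip()
    if dev.startswith(("/dev/dm-", "/dev/mapper/")):
        parent = subprocess.run(
            ["ssh", host, f"lsblk -no pkname {dev}"],
            capture_output=True, text=True,
        ).stdout.strip()
        if parent:
            return f"/dev/{parent}"
    return dev or None
```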
**Weighted Scoring**: 60% health, 30% capacity optimization, 10% resilience - prioritizes failing drives while considering operational efficiency.
## Scoring Algorithm
### Health Score (60% weight)
**HDD Metrics** ([ceph_osd_analyzer.py:183-236](ceph_osd_analyzer.py#L183-L236)):
- Reallocated sectors (ID 5): -20 points for any presence
- Spin retry count (ID 10): -15 points
- Pending sectors (ID 197): -25 points (critical indicator)
- Uncorrectable sectors (ID 198): -30 points (critical)
- Temperature (ID 190/194): -10 points if >60°C
- Age (ID 9): -15 points if >5 years
**NVMe Metrics** ([ceph_osd_analyzer.py:239-267](ceph_osd_analyzer.py#L239-L267), sketched below):
- Available spare: penalized if <50%
- Percentage used: -30 points if >80%
- Media errors: -25 points for any errors
- Temperature: -10 points if >70°C
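A minimal sketch of the NVMe penalties as documented here (the available-spare penalty value is an assumption, since this document only says it is penalized below 50%):
```python
# Sketch only; penalties follow the values listed above where this document gives them.
def nvme_health_score(nvme_log):
    score = 100.0
    if nvme_log.get('available_spare', 100) < 50:
        score -= 20   # assumed penalty; the doc only says "penalized if <50%"
    if nvme_log.get('percentage_used', 0) > 80:
        score -= 30
    if nvme_log.get('media_errors', 0) > 0:
        score -= 25
    if nvme_log.get('temperature', 0) > 70:
        score -= 10
    return max(0.0, score)
```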
### Capacity Score (30% weight)
([ceph_osd_analyzer.py:271-311](ceph_osd_analyzer.py#L271-L311))
- **Small drives prioritized**: <2TB = +40 points (maximum capacity gain)
- **Medium drives**: 2-5TB = +30 points, 5-10TB = +15 points
- **High utilization penalty**: >70% = -15 points (migration complexity)
- **Host balance bonus**: +15 points if below host average weight
### Resilience Score (10% weight)
([ceph_osd_analyzer.py:313-357](ceph_osd_analyzer.py#L313-L357))
- Hosts with >20% above average OSD count: +20 points
- Presence of down OSDs on same host: +15 points (hardware issues)
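Combining the three components with the weights documented here gives the replacement score (note that the optimization commits above revise these weights to 80/15/5; variable names are illustrative):
```python
# Weighted combination as documented in this file (60/30/10).
replacement_score = (
    (100 - health_score) * 0.60   # failing drives dominate
    + capacity_score * 0.30       # capacity-optimization potential
    + resilience_score * 0.10     # cluster resilience impact
)
```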
## Usage Patterns
### One-Line Execution (Recommended)
```bash
sudo python3 -c "import urllib.request; exec(urllib.request.urlopen('http://10.10.10.63:3000/LotusGuild/analyzeOSDs/raw/branch/main/ceph_osd_analyzer.py').read().decode())" --debug --class hdd
```
**Why**: Always uses latest version, no local installation, integrates easily into automation.
### Command-Line Options
- `--class [hdd|nvme]`: Filter by device type
- `--min-size N`: Minimum OSD size in TB
- `--debug`: Enable verbose debugging output
### Typical Workflow
1. Run analysis during maintenance window
2. Identify top 3-5 candidates with scores >70
3. Review health issues and capacity gains
4. Plan replacement based on available hardware
5. Execute OSD out/destroy/replace operations
## Dependencies
### Required Packages
- Python 3.6+ (standard library only, no external dependencies)
- `smartmontools` package (`smartctl` binary)
- SSH access configured between all cluster nodes
### Required Permissions
- Ceph admin keyring access
- `sudo` privileges for SMART data retrieval
- SSH key-based authentication to all OSD hosts
### Ceph Commands Used
- `ceph osd tree -f json`: Cluster topology
- `ceph osd df -f json`: Disk usage statistics
- `ceph osd metadata osd.N -f json`: OSD device information
- `ceph device query-daemon-health-metrics osd.N`: SMART data
## Output Interpretation
### Replacement Score Ranges
- **70-100** (RED): Critical - immediate replacement recommended
- **50-69** (YELLOW): High priority - plan replacement soon
- **30-49**: Medium priority - next upgrade cycle
- **0-29** (GREEN): Low priority - healthy drives
### Health Score Ranges
- **80-100** (GREEN): Excellent condition
- **60-79** (YELLOW): Monitor for issues
- **40-59**: Fair - multiple concerns
- **0-39** (RED): Critical - replace urgently
## Common Issues & Solutions
### "No SMART data available"
- **Cause**: Missing `smartmontools` or insufficient permissions
- **Solution**: `apt install smartmontools` and verify sudo access
### SSH Timeout Errors
- **Cause**: Node unreachable or SSH keys not configured
- **Solution**: Verify connectivity with `ssh -o ConnectTimeout=5 <host> hostname`
### Device Path Resolution Failures
- **Cause**: Non-standard OSD deployment or encryption
- **Solution**: Enable `--debug` to see device resolution attempts
### dm-device Mapping Issues
- **Cause**: LVM or LUKS encrypted OSDs
- **Solution**: Script automatically resolves via `lsblk -no pkname`
## Development Notes
### Code Structure
- **Single file design**: Easier to execute remotely via `exec()`
- **Minimal dependencies**: Uses only Python standard library
- **Color-coded output**: ANSI escape codes for terminal display
- **Debug mode**: Comprehensive logging when `--debug` enabled
### Notable Functions
**`run_command()`** ([ceph_osd_analyzer.py:34-56](ceph_osd_analyzer.py#L34-L56)): Universal command executor with SSH support and JSON parsing
**`get_device_path_for_osd()`** ([ceph_osd_analyzer.py:84-122](ceph_osd_analyzer.py#L84-L122)): Complex device resolution logic handling metadata, symlinks, and dm-devices
**`get_smart_data_remote()`** ([ceph_osd_analyzer.py:124-145](ceph_osd_analyzer.py#L124-L145)): Remote SMART data collection with device type detection
**`parse_smart_health()`** ([ceph_osd_analyzer.py:173-269](ceph_osd_analyzer.py#L173-L269)): SMART attribute parsing with device-class-specific logic
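A hypothetical sketch of `run_command()` as described above (local or SSH execution, optional JSON parsing, using the SSH options listed under Security Considerations); not the exact source:
```python
# Hypothetical sketch of the universal command executor.
import json
import subprocess

def run_command(cmd, host=None, parse_json=False, timeout=30):
    """Run a shell command locally, or on `host` via SSH, and optionally parse JSON."""
    if host:
        cmd = f"ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 {host} {cmd!r}"
    try:
        out = subprocess.run(
            cmd, shell=True, capture_output=True, text=True, timeout=timeout
        ).stdout.strip()
    except subprocess.TimeoutExpired:
        return None
    if not out:
        return None
    if parse_json:
        try:
            return json.loads(out)
        except json.JSONDecodeError:
            return None
    return out
```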
### Future Enhancement Opportunities
1. **Parallel data collection**: Use threading for faster cluster-wide analysis
2. **Historical trending**: Track scores over time to predict failures
3. **JSON output mode**: For integration with monitoring systems
4. **Cost-benefit analysis**: Factor in replacement drive costs
5. **PG rebalance impact**: Estimate data movement required
## Security Considerations
### Permissions Required
- Root access for `smartctl` execution
- SSH access to all OSD hosts
- Ceph admin keyring (read-only sufficient)
### Network Requirements
- Script assumes SSH connectivity between nodes
- No outbound internet access required (internal-only tool)
- Hardcoded internal git server URL: `http://10.10.10.63:3000`
### SSH Configuration
- Uses `-o StrictHostKeyChecking=no` for automated execution
- 5-second connection timeout to handle unreachable nodes
- Assumes key-based authentication is configured
## Related Infrastructure
**Internal Git Server**: `http://10.10.10.63:3000/LotusGuild/analyzeOSDs`
**Related Projects**:
- hwmonDaemon: Hardware monitoring daemon for continuous health checks
- Other LotusGuild infrastructure automation tools
## Maintenance
### Version Control
- Maintained in internal git repository
- One-line execution always pulls from `main` branch
- No formal versioning; latest commit is production
### Testing Checklist
- [ ] Test on cluster with mixed HDD/NVMe OSDs
- [ ] Verify SSH connectivity to all hosts
- [ ] Confirm SMART data retrieval for both device types
- [ ] Validate dm-device resolution on encrypted OSDs
- [ ] Check output formatting with various terminal widths
- [ ] Test `--class` and `--min-size` filtering
## Performance Characteristics
**Execution Time**: ~5-15 seconds per OSD depending on cluster size and SSH latency
**Bottlenecks**:
- Serial OSD processing (parallelization would help)
- SSH round-trip times for SMART data
- SMART data parsing can be slow for unresponsive drives
**Resource Usage**: Minimal CPU/memory, I/O bound on SSH operations
**Intended Audience**: LotusGuild infrastructure team
**Support**: Submit issues or pull requests to internal git repository


@@ -60,8 +60,16 @@ Run directly from your internal git server:
```bash
sudo python3 -c "import urllib.request; exec(urllib.request.urlopen('http://10.10.10.63:3000/LotusGuild/analyzeOSDs/raw/branch/main/ceph_osd_analyzer.py').read().decode())"
```
Run directly from internal git server with debug enabled:
```bash
sudo python3 -c "import urllib.request; exec(urllib.request.urlopen('http://10.10.10.63:3000/LotusGuild/analyzeOSDs/raw/branch/main/ceph_osd_analyzer.py').read().decode())" --debug
```
Most common execution:
```bash
sudo python3 -c "import urllib.request; exec(urllib.request.urlopen('http://10.10.10.63:3000/LotusGuild/analyzeOSDs/raw/branch/main/ceph_osd_analyzer.py').read().decode())" --debug --class hdd
```
### Traditional Installation
### Traditional Installation (not recommended)
```bash
# Clone repository


@@ -82,66 +82,148 @@ def get_osd_host_mapping(osd_tree):
return osd_to_host
def get_device_path_for_osd(osd_id, hostname):
"""Get the device path for an OSD on a specific host"""
# Method 1: Try ceph metadata
"""Get the physical device path for an OSD on a host (resolve dm devices)."""
metadata = get_osd_metadata(osd_id)
if metadata:
devices = metadata.get('devices', '')
# Try 'bluestore_bdev_devices' first
phys_dev = metadata.get('bluestore_bdev_devices')
if phys_dev:
device = f"/dev/{phys_dev.strip()}"
if DEBUG:
print(f"{Colors.GREEN}DEBUG: Found physical device from metadata: {device}{Colors.END}")
return device
# Also try devices field which sometimes has the info
devices = metadata.get('devices')
if devices:
device = devices.split(',')[0] if ',' in devices else devices
if device and not device.startswith('/dev/'):
device = f"/dev/{device}"
if device and device != '/dev/':
# devices might be comma-separated
first_dev = devices.split(',')[0].strip()
if first_dev and not first_dev.startswith('dm-'):
device = f"/dev/{first_dev}" if not first_dev.startswith('/dev/') else first_dev
if DEBUG:
print(f"{Colors.GREEN}DEBUG: Found device from metadata: {device}{Colors.END}")
print(f"{Colors.GREEN}DEBUG: Found device from metadata.devices: {device}{Colors.END}")
return device
# Method 2: Query symlink on remote host
result = run_command(f"readlink -f /var/lib/ceph/osd/ceph-{osd_id}/block 2>/dev/null", host=hostname)
# Fallback: follow the symlink
result = run_command(f"readlink -f /var/lib/ceph/osd/ceph-{osd_id}/block", host=hostname)
if result and result.startswith('/dev/'):
if DEBUG:
print(f"{Colors.GREEN}DEBUG: Found device from symlink: {result}{Colors.END}")
return result
# Method 3: Try lsblk
# Check if it is a dm device, try to find underlying
if '/dev/dm-' in result or '/dev/mapper/' in result:
# Try multiple methods to resolve dm device
base = run_command(f"lsblk -no pkname {result}", host=hostname)
if not base:
# Alternative: use ls -l on /dev/mapper
base = run_command(f"ls -l {result} | awk '{{print $NF}}' | xargs basename", host=hostname)
if base:
device = f"/dev/{base.strip()}"
if DEBUG:
print(f"{Colors.GREEN}DEBUG: Resolved dm device {result} -> {device}{Colors.END}")
return device
else:
if DEBUG:
print(f"{Colors.GREEN}DEBUG: Using device symlink {result}{Colors.END}")
return result
# Try alternative: lsblk with PKNAME (parent kernel name)
result = run_command(f"lsblk -no pkname /var/lib/ceph/osd/ceph-{osd_id}/block 2>/dev/null", host=hostname)
if result:
device = f"/dev/{result.strip()}"
if DEBUG:
print(f"{Colors.GREEN}DEBUG: Found device from lsblk: {device}{Colors.END}")
print(f"{Colors.GREEN}DEBUG: Found device from lsblk pkname: {device}{Colors.END}")
return device
# Last resort: try to get from ceph-volume lvm list
result = run_command(f"ceph-volume lvm list | grep -A 20 'osd id.*{osd_id}' | grep 'devices' | awk '{{print $2}}'", host=hostname)
if result:
device = result.strip()
if DEBUG:
print(f"{Colors.GREEN}DEBUG: Found device from ceph-volume: {device}{Colors.END}")
return device
if DEBUG:
print(f"{Colors.RED}DEBUG: Could not find device for osd.{osd_id}{Colors.END}")
print(f"{Colors.RED}DEBUG: Could not determine device for osd.{osd_id}{Colors.END}")
return None
def get_smart_data_remote(device_path, hostname):
"""Get SMART data from a remote host"""
"""Get SMART data from a remote host with multiple fallback methods"""
if not device_path:
return None
# Strip partition suffix
base_device = re.sub(r'p?\d+$', '', device_path)
# Determine device type
tran = run_command(f"lsblk -no tran {device_path} 2>/dev/null", host=hostname)
tran = tran.strip() if tran else ""
# Use sudo for smartctl
cmd = f"sudo smartctl -a -j {base_device} 2>/dev/null"
result = run_command(cmd, host=hostname, parse_json=True)
# Try different command variations based on device type
commands_to_try = []
return result
if tran == "nvme" or "nvme" in device_path:
commands_to_try = [
f"sudo smartctl -a -j {device_path} -d nvme",
f"smartctl -a -j {device_path} -d nvme", # Try without sudo
f"sudo smartctl -a -j {device_path}",
]
elif tran == "usb":
# USB-connected drives need special device type flags
commands_to_try = [
f"sudo smartctl -a -j {device_path} -d sat", # SAT (SCSI-ATA Translation)
f"sudo smartctl -a -j {device_path} -d usbjmicron", # JMicron USB bridge
f"sudo smartctl -a -j {device_path} -d usbcypress", # Cypress USB bridge
f"sudo smartctl -a -j {device_path} -d usb", # Generic USB
f"sudo smartctl -a -j {device_path} -d scsi", # SCSI passthrough
f"sudo smartctl -a -j {device_path}", # Auto-detect
]
elif tran == "sata":
commands_to_try = [
f"sudo smartctl -a -j {device_path}",
f"smartctl -a -j {device_path}",
f"sudo smartctl -a -j {device_path} -d ata",
]
else:
# Unknown or no transport, try generic approaches including USB
commands_to_try = [
f"sudo smartctl -a -j {device_path}",
f"smartctl -a -j {device_path}",
f"sudo smartctl -a -j {device_path} -d sat", # Try USB/SAT
f"sudo smartctl -a -j {device_path} -d auto",
]
# Try each command until one succeeds
for cmd in commands_to_try:
result = run_command(f"{cmd} 2>/dev/null", host=hostname, parse_json=True)
if result and ('ata_smart_attributes' in result or 'nvme_smart_health_information_log' in result):
if DEBUG:
print(f"{Colors.GREEN}DEBUG: SMART success with: {cmd}{Colors.END}")
return result
if DEBUG:
print(f"{Colors.RED}DEBUG: All SMART methods failed for {device_path} on {hostname}{Colors.END}")
print(f"{Colors.YELLOW}DEBUG: Transport type detected: {tran if tran else 'unknown'}{Colors.END}")
return None
def get_device_health(osd_id, hostname):
"""Get device SMART health metrics from the appropriate host"""
if DEBUG:
print(f"{Colors.CYAN}DEBUG: Getting health for osd.{osd_id} on {hostname}{Colors.END}")
# First try ceph's built-in health metrics
data = run_command(f"ceph device query-daemon-health-metrics osd.{osd_id} -f json 2>/dev/null", parse_json=True)
if data and ('ata_smart_attributes' in data or 'nvme_smart_health_information_log' in data):
if DEBUG:
print(f"{Colors.GREEN}DEBUG: Got SMART data from ceph device query{Colors.END}")
return data
if data:
# Ceph returns data nested under device ID, extract it
if isinstance(data, dict) and len(data) > 0:
# Get the first (and usually only) device entry
device_data = next(iter(data.values())) if data else None
if device_data and ('ata_smart_attributes' in device_data or 'nvme_smart_health_information_log' in device_data):
if DEBUG:
print(f"{Colors.GREEN}DEBUG: Got SMART data from ceph device query (nested format){Colors.END}")
return device_data
# Also check if data is already in the right format (backward compatibility)
if 'ata_smart_attributes' in data or 'nvme_smart_health_information_log' in data:
if DEBUG:
print(f"{Colors.GREEN}DEBUG: Got SMART data from ceph device query (direct format){Colors.END}")
return data
# If that fails, get device path and query via SSH
device_path = get_device_path_for_osd(osd_id, hostname)
@@ -161,9 +243,10 @@ def parse_smart_health(smart_data):
score = 100.0
issues = []
metrics = {}
if not smart_data:
return 50.0, ["No SMART data available"], metrics
# CRITICAL: Failed SMART reads are a red flag - could indicate drive issues
return 0.0, ["CRITICAL: No SMART data available - drive may be failing"], metrics
# Check for HDD SMART data
if 'ata_smart_attributes' in smart_data:
@@ -175,33 +258,39 @@ def parse_smart_health(smart_data):
value = attr.get('value', 0)
raw_value = attr.get('raw', {}).get('value', 0)
# Reallocated Sectors (5)
# Reallocated Sectors (5) - CRITICAL indicator of imminent failure
if attr_id == 5:
metrics['reallocated_sectors'] = raw_value
if raw_value > 0:
score -= min(20, raw_value * 2)
issues.append(f"Reallocated sectors: {raw_value}")
# ANY reallocated sectors is a severe problem
if raw_value >= 10:
score -= 95 # Drive is failing, near-zero health
elif raw_value >= 5:
score -= 85 # Critical failure imminent
else:
score -= 70 # Even 1-4 sectors is very serious
issues.append(f"CRITICAL: Reallocated sectors: {raw_value} - DRIVE FAILING")
# Spin Retry Count (10)
# Spin Retry Count (10) - CRITICAL
elif attr_id == 10:
metrics['spin_retry'] = raw_value
if raw_value > 0:
score -= min(15, raw_value * 3)
issues.append(f"Spin retry count: {raw_value}")
# Pending Sectors (197)
score -= min(40, raw_value * 10)
issues.append(f"CRITICAL: Spin retry count: {raw_value}")
# Pending Sectors (197) - CRITICAL
elif attr_id == 197:
metrics['pending_sectors'] = raw_value
if raw_value > 0:
score -= min(25, raw_value * 5)
issues.append(f"Pending sectors: {raw_value}")
# Uncorrectable Sectors (198)
score -= min(60, raw_value * 10)
issues.append(f"CRITICAL: Pending sectors: {raw_value}")
# Uncorrectable Sectors (198) - CRITICAL
elif attr_id == 198:
metrics['uncorrectable_sectors'] = raw_value
if raw_value > 0:
score -= min(30, raw_value * 5)
issues.append(f"Uncorrectable sectors: {raw_value}")
score -= min(70, raw_value * 15)
issues.append(f"CRITICAL: Uncorrectable sectors: {raw_value}")
# Temperature (190, 194)
elif attr_id in [190, 194]:
@@ -238,11 +327,11 @@ def parse_smart_health(smart_data):
score -= min(30, (pct_used - 80) * 1.5)
issues.append(f"High wear: {pct_used}%")
# Media errors
# Media errors - CRITICAL for NVMe
media_errors = nvme_health.get('media_errors', 0)
if media_errors > 0:
score -= min(25, media_errors * 5)
issues.append(f"Media errors: {media_errors}")
score -= min(60, media_errors * 10)
issues.append(f"CRITICAL: Media errors: {media_errors}")
# Temperature
temp = nvme_health.get('temperature', 0)
@@ -417,12 +506,35 @@ def analyze_cluster():
node, host_name, host_osds_map, osd_tree
)
# Calculate total score (weighted: 60% health, 30% capacity, 10% resilience)
total_score = (
(100 - health_score) * 0.60 + # Health is most important
capacity_score * 0.30 + # Capacity optimization
resilience_score * 0.10 # Cluster resilience
# Calculate total score with revised weights
# Priority: Failed drives > Small failing drives > Small drives > Any failing
has_health_issues = len(health_issues) > 0
has_critical_issues = any('CRITICAL:' in issue and ('Reallocated' in issue or 'Uncorrectable' in issue or 'Pending' in issue)
for issue in health_issues)
is_small = osd_df_data.get('crush_weight', 0) < 5
# Base scoring: 80% health, 15% capacity, 5% resilience
base_score = (
(100 - health_score) * 0.80 + # Health is critical
capacity_score * 0.15 + # Capacity matters for small drives
resilience_score * 0.05 # Cluster resilience (minor)
)
# Apply multipliers for priority combinations
if health_score == 0: # Failed SMART reads
if is_small:
base_score += 30 # Failed SMART + small = top priority
else:
base_score += 20 # Failed SMART alone is still critical
elif has_critical_issues: # Reallocated/pending/uncorrectable sectors
if is_small:
base_score += 25 # Critical issues + small drive
else:
base_score += 20 # Critical issues alone
elif has_health_issues and is_small:
base_score += 15 # Small + beginning to fail
total_score = min(100, base_score) # Cap at 100
candidates.append({
'osd_id': osd_id,