HALO — Interactive Demo
Status: ANOMALY DETECTED

| Service | Utilization | Latency | Error Rate |
| --- | --- | --- | --- |
| api-gateway | 92% | 156 ms | 2.8% |
| auth-service | 43% | 9 ms | 0.0% |
| payment-api | 81% | 88 ms | 3.8% |
| user-db | 96% | 45 ms | 3.5% |
| cache-redis | 48% | 8 ms | 0.4% |
| msg-queue | 30% | 6 ms | 0.0% |
[Chart: Request Latency — api-gateway, p50 and p99 series with the anomaly zone highlighted]
Incident timeline:

| Time | Event |
| --- | --- |
| T+0:00 | All 6 services healthy. Baseline metrics normal. |
| T+1:12 | user-db connection pool climbing — 78% utilization. |
| T+2:04 | api-gateway p99 latency 52 ms (baseline: 30 ms). |
| T+2:48 | user-db pool exhausted at 98%. Connections refused. |
| T+3:06 | api-gateway latency spikes 4.2σ above baseline to 156 ms (detection sketched after this timeline). |
| T+3:24 | payment-api error rate 3.8% — timeouts to user-db. |
| T+4:30 | HALO correlates 4 alerts → 1 incident. Root cause: user-db pool exhaustion. |
| T+5:15 | ⚙ Auto-remediation: scaling connection pool 50 → 200. ✓ Executing playbook. |
| T+6:42 | Incident resolved in 4m 12s. All services healthy. MTTR: 4m 12s vs 47 min average. |
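The 4.2σ call-out at T+3:06 corresponds to a rolling-baseline anomaly check. A minimal sketch, assuming a z-score detector over a sliding window of p99 samples; the class name, window size, and threshold here are illustrative, not HALO's actual configuration:

```python
from collections import deque
from statistics import mean, stdev

class LatencyAnomalyDetector:
    """Flag samples deviating more than `threshold_sigma` standard
    deviations from a rolling baseline of recent p99 latencies."""

    def __init__(self, window: int = 120, threshold_sigma: float = 4.0):
        self.samples = deque(maxlen=window)  # recent p99 latencies (ms)
        self.threshold_sigma = threshold_sigma

    def observe(self, p99_ms: float):
        """Return the deviation in sigmas if anomalous, else None."""
        if len(self.samples) >= 30:  # need enough history for a stable baseline
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0:
                z = (p99_ms - mu) / sigma
                if z >= self.threshold_sigma:
                    return z  # anomalous sample; keep it out of the baseline
        self.samples.append(p99_ms)
        return None
```

Fed the demo's stream, a detector like this stays quiet through the ~30 ms baseline and fires on the 156 ms sample at T+3:06 with a deviation in the 4σ range.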
Post-incident summary:

- Root Cause: user-db (connection pool exhaustion)
- Blast Radius: 4 services impacted
  - CRITICAL: api-gateway
  - CRITICAL: payment-api
  - WARNING: cache-redis
  - SOURCE: user-db
- Resolution: 4m 12s (vs 47 min industry average)
- Remediation: Automatic; pool scaled 50 → 200
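The remediation step behaves like a guarded playbook: re-verify the diagnosis, apply the fix, confirm recovery or escalate. A minimal sketch under those assumptions; the `db_client` and `metrics` interfaces and all thresholds are illustrative placeholders, not HALO's API:

```python
import time

def scale_pool_playbook(db_client, metrics, service="user-db",
                        new_size=200, recovery_threshold=0.80,
                        timeout_s=300):
    """Guarded remediation: scale the connection pool, then watch
    utilization until it drops below the recovery threshold."""
    # Precondition: only act if the diagnosis still holds.
    if metrics.pool_utilization(service) < 0.95:
        return "skipped: pool no longer saturated"

    db_client.set_pool_size(service, new_size)  # 50 -> 200 in the demo

    # Postcondition: confirm recovery within the timeout, else escalate.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if metrics.pool_utilization(service) < recovery_threshold:
            return "resolved"
        time.sleep(5)
    return "escalate: pool still saturated after scaling"
```

The precondition/postcondition structure is what makes auto-remediation safe to run unattended: if the situation has already changed, the playbook does nothing, and if the fix doesn't take, a human is paged instead of the loop retrying blindly.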
Correlated 4 alerts → 1 incident. Root cause: user-db connection pool exhaustion cascading to api-gateway, payment-api, and cache-redis.
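Collapsing the four alerts into one incident amounts to grouping alerts that arrive close together in time and walking the service dependency graph to the node every other alerting service depends on. A minimal sketch under those assumptions; the dependency edges, alert shape, and time window are illustrative, not HALO's data model:

```python
from dataclasses import dataclass

# Illustrative caller -> callee edges (assumed, not from HALO).
DEPENDS_ON = {
    "api-gateway": ["auth-service", "payment-api", "cache-redis"],
    "payment-api": ["user-db"],
    "cache-redis": ["user-db"],
}

@dataclass
class Alert:
    service: str
    at_s: float  # seconds since incident start

def correlate(alerts, window_s=180):
    """Group alerts within one time window; root cause is the alerting
    service that every other alerting service transitively depends on."""
    alerts = sorted(alerts, key=lambda a: a.at_s)
    group = [a for a in alerts if a.at_s - alerts[0].at_s <= window_s]
    services = {a.service for a in group}

    def reachable(src):
        seen, stack = set(), [src]
        while stack:
            for dep in DEPENDS_ON.get(stack.pop(), []):
                if dep not in seen:
                    seen.add(dep)
                    stack.append(dep)
        return seen

    for cand in services:
        if all(cand == s or cand in reachable(s) for s in services):
            return {"incident": sorted(services), "root_cause": cand}
    return {"incident": sorted(services), "root_cause": None}
```

Run on the demo's four alerts (user-db, api-gateway, payment-api, cache-redis), this collapses them into a single incident with user-db as the root cause, since it is the common transitive dependency.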
Impact summary:

| Metric | Value |
| --- | --- |
| Alerts correlated | 4 |
| Incidents raised | 1 |
| MTTR reduction | 87% |
| Noise eliminated | 94% |
| Avg resolution | 6 min |
| Annual savings | $340K |
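The headline MTTR figure follows from the averages shown, assuming the 87% is computed from the 6-minute average resolution against the 47-minute industry baseline (this incident's 4m 12s alone would be an even larger reduction):

```python
baseline_mttr_min = 47  # industry average cited above
halo_avg_min = 6        # HALO average resolution cited above

reduction = (baseline_mttr_min - halo_avg_min) / baseline_mttr_min
print(f"MTTR reduction: {reduction:.0%}")  # -> 87%
```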