Groundcover is now integrated with Cased. If you run Kubernetes, you can connect your Groundcover account and let the agent investigate issues using real cluster data.
What Groundcover gives you
Groundcover is an eBPF-based observability platform built specifically for Kubernetes. Once connected, Cased can:
- List clusters and namespaces - See what’s running where
- Query workloads with metrics - RPS, latency, error rates per deployment
- Check node health - CPU, memory, capacity across your cluster
- Get resource details - Drill into specific deployments, nodes, or namespaces
This is useful during incidents when you need to quickly understand cluster state without switching to another dashboard.
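As a sketch of the kind of per-deployment summary this enables (the data shape here is illustrative, not Groundcover's actual schema):

```python
from dataclasses import dataclass

@dataclass
class WorkloadMetrics:
    """Illustrative per-deployment metrics snapshot (not Groundcover's schema)."""
    deployment: str
    requests: int         # total requests in the window
    errors: int           # error responses in the window
    p99_latency_ms: float

    @property
    def error_rate(self) -> float:
        # Guard against division by zero for idle workloads.
        return self.errors / self.requests if self.requests else 0.0

checkout = WorkloadMetrics("checkout", requests=120_000, errors=360, p99_latency_ms=840.0)
print(f"{checkout.deployment}: error rate {checkout.error_rate:.2%}, p99 {checkout.p99_latency_ms} ms")
```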
Automated K8s error analysis
The real power is in workflow triggers. When a Groundcover alert fires, Cased can automatically start investigating:
When `groundcover.alert.fired`:
1. Classify the error (config issue, resource exhaustion, runtime problem)
2. Check cluster state - pods, events, logs
3. Find recent deployments or config changes
4. Identify root cause
5. Report to Slack with diagnosis and next steps
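The classification step (step 1) can be sketched as a mapping from well-known Kubernetes status reasons to the three broad categories. The reason strings below are standard kubelet values; the grouping itself is an illustrative assumption, not Cased's actual logic:

```python
# Group standard Kubernetes container/pod status reasons into broad
# error classes. The reason strings are real; the mapping is illustrative.
CONFIG_ERRORS = {"CreateContainerConfigError", "InvalidImageName", "ErrImagePull", "ImagePullBackOff"}
RESOURCE_ERRORS = {"OOMKilled", "Evicted", "FailedScheduling"}
RUNTIME_ERRORS = {"CrashLoopBackOff", "RunContainerError", "Error"}

def classify(reason: str) -> str:
    """Classify a Kubernetes status reason into a broad error category."""
    if reason in CONFIG_ERRORS:
        return "config issue"
    if reason in RESOURCE_ERRORS:
        return "resource exhaustion"
    if reason in RUNTIME_ERRORS:
        return "runtime problem"
    return "unknown"

print(classify("OOMKilled"))         # resource exhaustion
print(classify("ImagePullBackOff"))  # config issue
```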
This catches things like:
- Secret name mismatches (the typo that brings down production)
- OOMKilled pods
- CrashLoopBackOff containers
- Resource quota violations
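The secret-mismatch case is checkable mechanically: compare the secret names a pod spec references against the secrets that actually exist in the namespace. A minimal sketch (the pod spec shape follows the Kubernetes API; the helper function is hypothetical):

```python
def missing_secrets(pod_spec: dict, existing: set[str]) -> set[str]:
    """Return secret names referenced by a pod spec that don't exist.

    Covers env-var secretKeyRef references; volumes and envFrom could be
    walked the same way.
    """
    referenced = set()
    for container in pod_spec.get("containers", []):
        for env in container.get("env", []):
            ref = env.get("valueFrom", {}).get("secretKeyRef")
            if ref:
                referenced.add(ref["name"])
    return referenced - existing

spec = {"containers": [{"name": "api", "env": [
    {"name": "DB_PASSWORD",
     "valueFrom": {"secretKeyRef": {"name": "db-credentils", "key": "password"}}}]}]}

# The typo surfaces immediately when checked against the real secrets.
print(missing_secrets(spec, {"db-credentials"}))
```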
The agent can spawn a fix session for configuration errors, or suggest resource adjustments for capacity issues.
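For capacity issues, a suggested adjustment can be as simple as a headroom multiplier over the observed peak. The 1.5x factor and 64 MiB rounding below are an illustrative policy, not something Cased prescribes:

```python
import math

def suggest_memory_limit(peak_mib: float, headroom: float = 1.5, round_to_mib: int = 64) -> int:
    """Suggest a new memory limit: observed peak plus headroom, rounded up
    to a clean increment."""
    target = peak_mib * headroom
    return math.ceil(target / round_to_mib) * round_to_mib

# A pod OOMKilled while peaking at 900 MiB gets a 1408 MiB suggestion.
print(suggest_memory_limit(900))
```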
Part of a bigger picture
Groundcover joins Cased’s observability integrations, which now include:
- Datadog - APM, infrastructure metrics, logs
- Honeycomb - Traces, queries, deployment markers
- Sentry - Error tracking with automatic triage
- Prometheus - Native metrics queries
- CloudWatch - AWS metrics and alarm creation
The idea is the same across all of them: when something breaks, the agent should be able to pull the relevant data and figure out what happened. No dashboard hunting.