# Migration from Karpenter
Zero-downtime migration from upstream Karpenter to the DevZero Node Operator with managed node policies.
If your cluster already runs upstream Karpenter, you can migrate to the DevZero Node Operator with zero downtime. The DevZero Node Operator is fully compatible with existing Karpenter CRDs (NodePool, NodeClaim, EC2NodeClass), so your workloads continue running on existing nodes throughout the migration.
This guide covers migration from upstream Karpenter. If you're migrating from Cluster Autoscaler or static node groups, follow the setup guide for your cloud provider instead.
## Why Migrate
| Capability | Upstream Karpenter | DevZero Node Operator |
|---|---|---|
| Node provisioning | Yes | Yes |
| Consolidation | Yes | Yes, with cost-aware optimization |
| Disruption | Yes | Yes, with workload classification-awareness |
| Checkpoint-restore | No | Yes, with dakr-op |
| Dashboard visibility | No | Full visibility in DevZero dashboard |
| Managed node policies | No | Yes -- configure from dashboard |
| Coordinated rightsizing | No | Yes -- works with dakr-op recommendations |
| Spot-to-spot consolidation | Manual flag | Enabled by default |
| Implicit PDB protection | No | Yes -- configurable minimum availability |
## Prerequisites
- An EKS/AKS/GKE/OKE cluster running upstream Karpenter
- `kubectl` access to the cluster
- The DevZero Write Operator (dakr-op) installed
- Existing workloads have PodDisruptionBudgets configured
If your workloads do not have PodDisruptionBudgets configured, the migration can still proceed safely, but we strongly recommend adding PDBs before starting. The DevZero Node Operator includes implicit PDB protection as a safety net, but explicit PDBs give you more control.
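As an illustration, a minimal PDB for a hypothetical Deployment labeled `app: web` might look like the following; the name, selector, and `minAvailable` value are placeholders to adapt to your workload:

```shell
# Minimal PodDisruptionBudget for a hypothetical "web" Deployment.
# Adjust the selector and minAvailable to match your own workload.
kubectl apply -f - <<'EOF'
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: web
EOF
```

With this in place, any controller draining a node will keep at least two `web` pods running at all times.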
## Migration Steps
### Record Current State
Capture your existing Karpenter configuration so you can replicate it in DevZero node policies.
```shell
# Save existing NodePools
kubectl get nodepools -o yaml > nodepools-backup.yaml

# Save existing EC2NodeClasses (AWS) or equivalent
kubectl get ec2nodeclasses -o yaml > ec2nodeclasses-backup.yaml

# Record current node count
kubectl get nodes -l karpenter.sh/nodepool -o wide
```

Note the instance types, zones, labels, taints, and resource limits in your NodePools -- you'll configure matching settings in the DevZero dashboard.
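To pull the fields you'll need later out of the live cluster in one pass, a jsonpath query along these lines can help; the field paths assume the standard Karpenter NodePool schema (`spec.template.spec.requirements`, `spec.limits`):

```shell
# Print each NodePool's name, scheduling requirements (instance types,
# zones, capacity types), and CPU limit, tab-separated, one per line.
kubectl get nodepools -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.requirements}{"\t"}{.spec.limits.cpu}{"\n"}{end}'
```

Keep this output alongside the YAML backups; it is the checklist for the node policy you create in the dashboard later.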
### Install the DevZero Node Operator
Because the DevZero Node Operator uses the same CRD group (karpenter.sh) as upstream Karpenter, it is installed as a replacement rather than side-by-side.
The DevZero Node Operator replaces the upstream Karpenter controller. Your existing NodePool and NodeClaim resources remain intact -- only the controller managing them changes.
Scale down upstream Karpenter first:
```shell
# Find the Karpenter deployment
kubectl get deploy -A -l app.kubernetes.io/name=karpenter

# Scale it to zero (adjust namespace if needed)
kubectl scale deploy karpenter -n kube-system --replicas=0
```

Install the DevZero Node Operator:
Follow the setup guide for your cloud provider.
The Helm install will replace the upstream controller while preserving all existing CRDs and resources.
### Verify the Controller Swap
Confirm the DevZero Node Operator is running and has adopted your existing resources.
```shell
# Check controller pods are running
kubectl get pods -n kube-system -l app.kubernetes.io/name=karpenter

# Verify it recognizes existing NodePools
kubectl get nodepools

# Verify existing nodes are still managed
kubectl get nodeclaims
```

All existing nodes should remain Ready. No pods should be disrupted at this point.
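To check readiness mechanically rather than by eye, a small script like this can gate the next step; it is a sketch that assumes your Karpenter-managed nodes carry the standard `karpenter.sh/nodepool` label:

```shell
# Exit non-zero if any Karpenter-managed node is not Ready.
NOT_READY=$(kubectl get nodes -l karpenter.sh/nodepool \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}' \
  | awk '$2 != "True" {print $1}')

if [ -n "$NOT_READY" ]; then
  echo "Nodes not Ready after controller swap:" >&2
  echo "$NOT_READY" >&2
  exit 1
fi
echo "All Karpenter-managed nodes are Ready."
```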
### Create a DevZero Node Policy
In the DevZero dashboard, create a node policy that matches your existing NodePool configuration:
- Go to Optimization > Policies > Nodes
- Click Create Node Policy
- Configure instance types, zones, and resource limits to match your backup
- Set the target cluster
- Configure the IAM role / service account for your cloud provider
The node policy will sync with your cluster and manage the NodePool and NodeClass resources going forward.
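To see exactly what the policy sync changed, one option is to diff the live resources against the backups taken in the first step (assuming the filenames used there):

```shell
# Compare live resources against the pre-migration backups.
# Expect noise in managed fields (resourceVersion, generation, status);
# focus on changes under spec.
diff <(kubectl get nodepools -o yaml) nodepools-backup.yaml || true
diff <(kubectl get ec2nodeclasses -o yaml) ec2nodeclasses-backup.yaml || true  # AWS
```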
```shell
# After policy creation, verify the resources are updated
kubectl describe nodepools
kubectl describe ec2nodeclasses  # AWS
```

### Remove Upstream Karpenter
Once the DevZero Node Operator is stable and managing your nodes, remove the upstream Karpenter deployment.
```shell
# Delete the scaled-down upstream Karpenter release
helm uninstall karpenter -n kube-system
```

Only uninstall the old release after verifying the DevZero Node Operator is healthy and managing nodes. Use `kubectl get nodeclaims` and `kubectl get nodes` to confirm.
### Validate
Run through these checks to confirm the migration is complete:
```shell
# All nodes healthy
kubectl get nodes

# NodeClaims are being managed
kubectl get nodeclaims

# Controller logs show no errors
kubectl logs -n kube-system -l app.kubernetes.io/name=karpenter -c controller --tail=50

# Test scaling by creating a pending pod with resource requests.
# (kubectl run no longer supports --requests, so set them via --overrides.)
kubectl run migration-test --image=nginx \
  --overrides='{"apiVersion":"v1","spec":{"containers":[{"name":"migration-test","image":"nginx","resources":{"requests":{"cpu":"100m","memory":"128Mi"}}}]}}'
kubectl get pod migration-test -w  # should schedule within seconds
kubectl delete pod migration-test
```

The DevZero dashboard should now show your nodes, utilization, and cost data under the cluster view.
## Rollback
If you need to revert to upstream Karpenter:
- Scale down the DevZero Node Operator:

  ```shell
  kubectl scale deploy karpenter -n kube-system --replicas=0
  ```

- Reinstall upstream Karpenter via its Helm chart
- Restore your backed-up NodePool and EC2NodeClass resources if they were modified
Existing nodes and workloads are unaffected during a controller swap -- only the reconciliation loop changes.
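If the resources were modified after the policy sync, the backups from the first step can be re-applied directly (assuming the filenames used there):

```shell
# Re-apply the pre-migration resource definitions.
kubectl apply -f nodepools-backup.yaml
kubectl apply -f ec2nodeclasses-backup.yaml  # AWS only
```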