
Quickstart

A step-by-step guide to installing the zxporter operator in your cluster.

Connect your Kubernetes Cluster

You can connect your Kubernetes cluster to the DevZero platform by deploying the zxporter operator. This lightweight, read-only component powers real-time cost insights and optimization recommendations — without modifying your workloads.

Log into the DevZero Console

After logging into the DevZero Console, click the "Connect new cluster" button in the "Clusters" section to begin the setup process.

Your K8s Provider

Choose the environment where your Kubernetes cluster is running. DevZero supports:

  • Amazon EKS
  • Google GKE
  • Microsoft AKS
  • Other (self-managed or on-prem clusters)

After selecting your provider, copy the install command.

Install the operator

The console provides a one-line script that deploys zxporter. Copy and run it in a terminal that has access to your Kubernetes cluster and a configured kubectl.

📘 Note: zxporter is fully read-only. It does not access secrets or modify cluster resources. You can inspect the manifest before applying it for full transparency.
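
If you'd like to review it first, one way to do so is sketched below. The URL is a hypothetical placeholder; use the manifest location from the install command the console gave you.

curl -fsSL https://example.devzero.io/zxporter.yaml -o zxporter.yaml   # placeholder URL
less zxporter.yaml     # review the resources the operator will create
kubectl apply -f zxporter.yaml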

Validating the connection

Once installed, DevZero will automatically detect and connect your cluster. Within a few minutes, you’ll start receiving real-time cost insights and workload optimization suggestions.
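
To double-check from the cluster side, you can look for the operator's pods. This is a quick sanity check; matching on the name "zxporter" is an assumption about how the pods are named in your install.

kubectl get pods -A | grep zxporter

All listed pods should be in the Running state.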

View dashboard

You’re now ready to explore the DevZero platform and improve your cluster’s efficiency.

NVIDIA GPUs on K8s

GPU devices available on nodes

kubectl describe nodes | grep "nvidia.com/gpu"

You should see at least two nvidia.com/gpu lines per GPU node (one under Allocatable and one under Capacity), each with an integer value like 1, 2, 3, ....
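
For example, a node with a single GPU produces output roughly like this (one line from the Capacity section and one from Allocatable; exact spacing varies by Kubernetes version):

  nvidia.com/gpu:  1
  nvidia.com/gpu:  1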

If no such lines appear, neither NVIDIA/gpu-operator nor NVIDIA/k8s-device-plugin is installed in your cluster. Install NVIDIA/gpu-operator (covered in the next step) before continuing.

gpu-operator installed in cluster

Only run this step if the previous check returned no nvidia.com/gpu lines!

The instructions for this step are taken from NVIDIA's GPU Operator Docs.

Namespace and label for the gpu-operator

kubectl create ns gpu-operator
kubectl label --overwrite ns gpu-operator pod-security.kubernetes.io/enforce=privileged
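
You can confirm the label was applied with:

kubectl get ns gpu-operator --show-labels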

Check whether Node Feature Discovery is already running (the operator installs NFD itself)

kubectl get nodes -o json | jq '.items[].metadata.labels | keys | any(startswith("feature.node.kubernetes.io"))'
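
If this prints true for any node, NFD is already present in the cluster. In that case, append the following flag to the helm install command in the next step so the operator skips deploying a second NFD instance (this uses NVIDIA's documented nfd.enabled Helm value):

--set nfd.enabled=false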

Install NVIDIA's gpu-operator

helm repo add nvidia https://helm.ngc.nvidia.com/nvidia && \
    helm repo update

helm install --wait --generate-name \
    -n gpu-operator --create-namespace \
    nvidia/gpu-operator \
    --version=v25.3.0
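
The install can take several minutes while the operator brings up its components. You can watch progress with:

kubectl get pods -n gpu-operator

Wait until the pods report Running or Completed before moving on.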

Validate existence of Datacenter GPU Manager (DCGM)

Run this to check the status of the DCGM DaemonSet.

kubectl get daemonset -A | grep dcgm
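
In a working install you should see a DCGM exporter DaemonSet managed by the operator. The output will look roughly like the line below; the DESIRED/READY counts match your number of GPU nodes, and the node selector may differ in your cluster.

gpu-operator   nvidia-dcgm-exporter   1   1   1   1   1   nvidia.com/gpu.deployed=true   5m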


GPU workloads on the dashboard

Go back to the dashboard and check out your GPU workloads!