Network Operator (zxporter-netmon)

Overview

The Network Operator (also known as "zxporter-netmon") runs on every node in your Kubernetes cluster as a DaemonSet, tracking pod-level network flows to give you visibility into traffic patterns and costs — without inspecting any application data or payloads.
How It Works
The operator collects delta-based metrics — only the traffic since the last flush interval is reported, keeping payloads small and efficient.
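A minimal sketch of the delta computation, assuming cumulative per-flow counters keyed by (src, dst, protocol, port); the function and field names here are illustrative, not the operator's actual API:

```python
# Illustrative sketch: kernel counters are cumulative, so each flush
# reports only the growth since the previous snapshot.

def compute_deltas(current: dict, previous: dict) -> dict:
    """Return per-flow byte deltas since the last flush."""
    deltas = {}
    for flow_key, counters in current.items():
        prev = previous.get(flow_key, {"tx_bytes": 0, "rx_bytes": 0})
        delta_tx = counters["tx_bytes"] - prev["tx_bytes"]
        delta_rx = counters["rx_bytes"] - prev["rx_bytes"]
        if delta_tx > 0 or delta_rx > 0:  # skip idle flows entirely
            deltas[flow_key] = {"tx_bytes": delta_tx, "rx_bytes": delta_rx}
    return deltas

prev = {("10.0.1.5", "10.0.2.9", 6, 443): {"tx_bytes": 1000, "rx_bytes": 5000}}
curr = {("10.0.1.5", "10.0.2.9", 6, 443): {"tx_bytes": 1500, "rx_bytes": 9000}}
# compute_deltas(curr, prev) reports only 500 TX / 4000 RX new bytes.
```

Flows with no new traffic since the last flush are dropped from the payload, which is what keeps reports small on mostly-idle nodes.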
What Data Is Collected
| Field | Description |
|---|---|
| Source IP | IP address of the sending pod |
| Destination IP | IP address of the receiving pod, service, node, or external host |
| Protocol | TCP (6), UDP (17), etc. |
| Destination Port | Port number on the destination |
| TX Bytes / Packets | Bytes and packets transmitted by source |
| RX Bytes / Packets | Bytes and packets received in response |
| Source Pod Name & Namespace | Kubernetes metadata for the sending pod |
| Destination Pod Name & Namespace | Kubernetes metadata for the receiving pod (if internal) |
| DNS Lookups | Domain names resolved by pods, with resolved IPs |
| IP-to-Domain Mapping | Cached mapping of destination IPs to domain names |
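For illustration, a single flow record combining the fields above might look like the following; the key names and values are hypothetical, not the operator's actual wire format:

```python
# Hypothetical shape of one reported flow (illustrative field names).
flow_record = {
    "src_ip": "10.0.1.12",
    "dst_ip": "52.54.61.135",
    "protocol": 6,             # TCP
    "dst_port": 443,
    "tx_bytes": 18432, "tx_packets": 24,
    "rx_bytes": 96541, "rx_packets": 71,
    "src_pod": {"name": "checkout-7d9f", "namespace": "shop"},
    "dst_pod": None,           # external destination, no pod metadata
    "dst_domain": "api.stripe.com",  # from the IP-to-domain cache
}
```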
What Is NOT Collected
- No payload inspection — only connection metadata (IPs, ports, byte counts)
- No application data — no HTTP headers, request bodies, or response content
- No secrets or environment variables — the operator reads only pod metadata (name, namespace, IP), never secret values or container environments
- Read-only — the operator makes no changes to your cluster
The operator source code is publicly available at github.com/devzero-inc/zxporter.
Technical Deep Dive
Collection Modes
The operator supports three collection backends, configured via the `collector-mode` flag:
- Netfilter (conntrack) — The default mode. Reads the Linux kernel's connection tracking table via Netlink. This table tracks every TCP/UDP connection flowing through the node, including byte and packet counters. The operator enables conntrack accounting (`nf_conntrack_acct`) automatically.
- eBPF — Attaches eBPF programs to cgroup ingress/egress hooks for network flow capture. Uses a double-buffered map for efficient collection. Provides the same data as netfilter, but with lower overhead on high-traffic nodes. Requires kernel BTF (BPF Type Format) support.
- Cilium — For clusters running Cilium CNI. Reads flow data directly from Cilium's eBPF maps (`/sys/fs/bpf/cilium/...`), avoiding duplicate instrumentation. Handles kernel time conversion (ktime, jiffies) automatically.
All three modes produce the same output: a list of network flows with source/destination IPs, ports, protocols, and byte/packet counters. See Configuration for how to select a mode.
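As a rough illustration of the data the netfilter mode works with, here is a sketch that parses one line of the human-readable `/proc/net/nf_conntrack` format; the operator itself reads the same table via Netlink rather than this text file, and the parsing below is simplified:

```python
import re

# A sample conntrack entry. With nf_conntrack_acct enabled, packets=
# and bytes= appear twice: once for the original direction (TX) and
# once for the reply direction (RX).
LINE = ("ipv4 2 tcp 6 431999 ESTABLISHED "
        "src=10.0.1.5 dst=10.0.2.9 sport=52100 dport=443 "
        "packets=24 bytes=18432 "
        "src=10.0.2.9 dst=10.0.1.5 sport=443 dport=52100 "
        "packets=71 bytes=96541 [ASSURED]")

def parse_conntrack_line(line: str) -> dict:
    # First src=/dst= pair describes the original direction of the flow.
    src, dst = re.findall(r"src=(\S+) dst=(\S+)", line)[0]
    dport = int(re.search(r"dport=(\d+)", line).group(1))
    byte_counts = [int(b) for b in re.findall(r"bytes=(\d+)", line)]
    return {"src": src, "dst": dst, "dport": dport,
            "tx_bytes": byte_counts[0], "rx_bytes": byte_counts[1]}
```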
DNS Tracing
Understanding where traffic is going requires more than just IP addresses. The operator uses an eBPF program attached to cgroup ingress to intercept DNS responses as they arrive at pods. This lets us build an IP-to-domain mapping without:
- Modifying CoreDNS or any cluster DNS configuration
- Intercepting or proxying DNS queries
- Inspecting application traffic
When a pod resolves api.stripe.com to 52.54.61.135, the operator records that mapping and includes it when flushing metrics. The DevZero platform uses this to label traffic with human-readable domain names.
DNS mappings are cached with a 2-minute TTL and refreshed as new lookups occur.
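The caching behavior described above can be sketched as a small TTL map; the class and method names are illustrative, not the operator's actual implementation:

```python
import time

TTL_SECONDS = 120  # 2-minute TTL, per the docs above

class DnsCache:
    """Maps destination IPs to the domain a pod resolved them from."""

    def __init__(self):
        self._entries = {}  # ip -> (domain, expiry timestamp)

    def record(self, ip: str, domain: str, now: float = None) -> None:
        now = time.time() if now is None else now
        # A fresh lookup refreshes the entry's expiry.
        self._entries[ip] = (domain, now + TTL_SECONDS)

    def lookup(self, ip: str, now: float = None):
        now = time.time() if now is None else now
        entry = self._entries.get(ip)
        if entry is None or entry[1] < now:
            return None  # unknown, or expired
        return entry[0]

cache = DnsCache()
cache.record("52.54.61.135", "api.stripe.com", now=0)
print(cache.lookup("52.54.61.135", now=60))   # api.stripe.com
print(cache.lookup("52.54.61.135", now=300))  # None (expired)
```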
Pod Metadata Enrichment
The operator runs a Kubernetes informer filtered to pods on its node. When a network flow is observed, the source and destination IPs are correlated with pod metadata (name, namespace) before the metrics are sent to the platform.
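A minimal sketch of the enrichment step, assuming the informer maintains an IP-to-pod index for the local node; the names and index shape are illustrative:

```python
# Index kept current by the pod informer (illustrative contents).
pod_index = {
    "10.0.1.12": {"name": "checkout-7d9f", "namespace": "shop"},
    "10.0.2.9":  {"name": "payments-5c4b", "namespace": "billing"},
}

def enrich(flow: dict) -> dict:
    """Attach pod metadata to a flow; None means no matching pod."""
    flow["src_pod"] = pod_index.get(flow["src_ip"])
    flow["dst_pod"] = pod_index.get(flow["dst_ip"])  # external IPs stay None
    return flow

flow = enrich({"src_ip": "10.0.1.12", "dst_ip": "52.54.61.135"})
```

Destinations that do not match a pod are resolved later by the platform (see Destination Resolution below).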
DaemonSet Architecture
The operator runs as a DaemonSet with the following configuration:
- `hostNetwork: true` — Required to see node-level conntrack entries
- Privileged container — Required for conntrack and eBPF access
- Capabilities: `NET_ADMIN`, `SYS_RESOURCE` — For conntrack table access and eBPF memory limits
- Tolerations: all taints — Ensures the operator runs on every node, including control plane nodes
Resource footprint:
- Requests: 50m CPU, 64Mi memory
- Limits: 100m CPU, 128Mi memory
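An abbreviated DaemonSet manifest reflecting the settings above might look like this; it is a sketch, not the full chart-rendered manifest:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: zxporter-netmon
spec:
  template:
    spec:
      hostNetwork: true          # see node-level conntrack entries
      tolerations:
        - operator: Exists       # tolerate all taints, run on every node
      containers:
        - name: netmon
          securityContext:
            privileged: true     # conntrack and eBPF access
            capabilities:
              add: ["NET_ADMIN", "SYS_RESOURCE"]
          resources:
            requests: {cpu: 50m, memory: 64Mi}
            limits: {cpu: 100m, memory: 128Mi}
```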
IPv6 support is not yet available. The operator currently tracks IPv4 connections only. If IPv6 monitoring is important to your environment, please let us know at support@devzero.io.
Destination Resolution
When metrics arrive at the DevZero platform, each destination IP is resolved through a cascading lookup:
1. Pod — Is this IP assigned to a pod in the cluster?
2. Service — Is this a Kubernetes ClusterIP service?
3. Node — Is this a node IP?
4. Endpoint — Is this an endpoint backing a service?
5. External — If none of the above, classify by IP type (public vs. private)
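The cascade can be sketched as a first-match-wins lookup; the set contents below are illustrative, and the platform consults live cluster state rather than static sets:

```python
import ipaddress

# Illustrative cluster state (in practice, looked up live).
pod_ips      = {"10.0.1.12"}
service_ips  = {"10.96.0.10"}
node_ips     = {"10.0.0.4"}
endpoint_ips = {"10.0.3.7"}

def classify(ip: str) -> str:
    # Order matters: the first matching category wins.
    if ip in pod_ips:
        return "pod"
    if ip in service_ips:
        return "service"
    if ip in node_ips:
        return "node"
    if ip in endpoint_ips:
        return "endpoint"
    # Fall through: external, split by public vs. private address space.
    if ipaddress.ip_address(ip).is_private:
        return "external_private"
    return "external_public"

print(classify("10.96.0.10"))    # service
print(classify("52.54.61.135"))  # external_public
```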
Public IPs (Internet Egress)
If the destination IP is public (globally routable), the traffic is classified as internet egress. If the operator also captured a DNS lookup for that IP, the domain name is attached (e.g., api.stripe.com, s3.amazonaws.com).
Peering & Private Connectivity
Private IPs that don't match any cluster resource are checked against known cloud provider domain patterns:
| Pattern | Classification | Examples |
|---|---|---|
| VPC Endpoints / PrivateLink | private_link | vpce-*.vpce-svc-*.*.vpce.amazonaws.com, *.privatelink.*.azure.com |
| Kubernetes Control Plane | k8s_control_plane | *.eks.amazonaws.com, *.azmk8s.io, *.gke.googleapis.com |
| Cloud Service APIs | cloud_api | *.amazonaws.com, *.googleapis.com, *.vault.azure.net |
| Internal Load Balancers | load_balancer | internal-*.elb.amazonaws.com |
| Instance Metadata (IMDS) | cloud_api | 169.254.169.254 |
This classification is used for both labeling traffic in the dashboard and applying the correct cost rates. See Network Cost Model for details on how costs are calculated.
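The pattern matching above can be sketched with glob-style matching; the pattern list mirrors the table, and the ordering and helper names are illustrative. Note that generic patterns such as `*.amazonaws.com` must come after more specific ones:

```python
import fnmatch

# (pattern, classification) pairs; checked in order, first match wins.
PATTERNS = [
    ("vpce-*.vpce-svc-*.*.vpce.amazonaws.com", "private_link"),
    ("*.privatelink.*.azure.com",              "private_link"),
    ("*.eks.amazonaws.com",                    "k8s_control_plane"),
    ("*.azmk8s.io",                            "k8s_control_plane"),
    ("internal-*.elb.amazonaws.com",           "load_balancer"),
    ("*.amazonaws.com",                        "cloud_api"),  # generic: keep last
    ("*.googleapis.com",                       "cloud_api"),
]

def classify_domain(domain: str) -> str:
    for pattern, label in PATTERNS:
        if fnmatch.fnmatch(domain, pattern):
            return label
    return "unknown"

print(classify_domain("abc123.eks.amazonaws.com"))        # k8s_control_plane
print(classify_domain("internal-foo.elb.amazonaws.com"))  # load_balancer
```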
Installation
The Network Operator is installed as part of the zxporter Helm chart:
```shell
helm upgrade --install zxporter-netmon \
  oci://registry-1.docker.io/devzeroinc/zxporter-netmon \
  --namespace devzero-zxporter \
  --create-namespace \
  --set config.dakrUrl=https://dakr.devzero.io \
  --set config.clusterToken=<your-cluster-token>
```

To verify the DaemonSet is running on all nodes:

```shell
kubectl get pods -n devzero-zxporter -l app=zxporter-netmon -o wide
```

Health Monitoring
The operator exposes HTTP endpoints on port 8081:
| Endpoint | Purpose | Healthy Response |
|---|---|---|
| `/healthz` | Liveness probe — monitor loop and control plane connection | `200 OK` |
| `/readyz` | Readiness probe — ready to collect and flush metrics | `200 OK` |
| `/metrics` | Current aggregated network flows and DNS lookups (JSON) | `200 OK` |
Next Steps
- Configuration — collection mode, intervals, and environment variables
- Network Cost Model — traffic classification and cloud provider pricing