Network Monitoring
How DevZero monitors Kubernetes network traffic to provide visibility into traffic patterns and costs.
DevZero's network monitoring operator (zxporter-netmon) runs on every node in your Kubernetes cluster, tracking pod-level network flows to give you visibility into traffic patterns and costs — without inspecting any application data or payloads.
How It Works
```
┌─────────────────────────────────────────────────────────┐
│ Kubernetes Node                                         │
│  ┌───────────────────────────────────────────────────┐  │
│  │ zxporter-netmon (DaemonSet)                       │  │
│  │                                                   │  │
│  │ 1. Collect network flows (conntrack / eBPF)       │  │
│  │ 2. Trace DNS responses (eBPF)                     │  │
│  │ 3. Enrich with pod metadata (K8s informer)        │  │
│  │ 4. Flush to DevZero platform every 60s            │  │
│  └───────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────┘
                            │
                            ▼
┌─────────────────────────────────────────────────────────┐
│ DevZero Platform                                        │
│                                                         │
│ 5. Resolve source & destination (Pod/Service/Node/      │
│    Endpoint/External)                                   │
│ 6. Classify traffic type (same-AZ, cross-AZ, egress,    │
│    peering)                                             │
│ 7. Calculate cost using cloud provider rates            │
│ 8. Store in time-series storage                         │
└─────────────────────────────────────────────────────────┘
```

The operator collects delta-based metrics — only the traffic since the last flush interval is reported, keeping payloads small and efficient.
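The delta-based flush can be sketched roughly as follows. This is an illustrative Python sketch, not the operator's actual internals (zxporter-netmon is written in Go): cumulative per-flow counters are snapshotted each interval, and only the difference since the previous flush is reported.

```python
# Illustrative sketch of delta-based flow reporting. Flow keys and the
# dict-based storage are assumptions for this example; the real operator
# reads cumulative counters from conntrack or eBPF.

def delta_since(last: dict, current: dict) -> dict:
    """Return per-flow bytes accumulated since the previous flush.

    Keys are (src_ip, dst_ip, dst_port, proto) tuples; values are
    cumulative byte counters as read from the kernel.
    """
    deltas = {}
    for key, cur in current.items():
        d = cur - last.get(key, 0)  # a flow absent from `last` is new
        if d > 0:
            deltas[key] = d
    return deltas

flow = ("10.0.1.5", "10.0.2.9", 443, 6)
previous = {flow: 1_000}   # counters at the last flush
now = {flow: 1_800}        # counters 60 seconds later
print(delta_since(previous, now))  # only the 800 new bytes are reported
```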
What Data Is Collected
| Field | Description |
|---|---|
| Source IP | IP address of the sending pod |
| Destination IP | IP address of the receiving pod, service, node, or external host |
| Protocol | TCP (6), UDP (17), etc. |
| Destination Port | Port number on the destination |
| TX Bytes / Packets | Bytes and packets transmitted by source |
| RX Bytes / Packets | Bytes and packets received in response |
| Source Pod Name & Namespace | Kubernetes metadata for the sending pod |
| Destination Pod Name & Namespace | Kubernetes metadata for the receiving pod (if internal) |
| DNS Lookups | Domain names resolved by pods, with resolved IPs |
| IP-to-Domain Mapping | Cached mapping of destination IPs to domain names |
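Put together, a single flushed flow record might look like the following. The field names here are illustrative, mirroring the table above; they are not zxporter-netmon's actual wire format.

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

# Hypothetical record shape mirroring the fields above; names are
# assumptions for illustration, not the operator's actual schema.
@dataclass
class FlowRecord:
    src_ip: str
    dst_ip: str
    protocol: int                        # 6 = TCP, 17 = UDP
    dst_port: int
    tx_bytes: int
    tx_packets: int
    rx_bytes: int
    rx_packets: int
    src_pod: Optional[str] = None
    src_namespace: Optional[str] = None
    dst_pod: Optional[str] = None        # unset for external destinations
    dst_namespace: Optional[str] = None

record = FlowRecord(
    src_ip="10.0.1.5", dst_ip="52.54.61.135",
    protocol=6, dst_port=443,
    tx_bytes=8400, tx_packets=10, rx_bytes=52000, rx_packets=40,
    src_pod="checkout-7d9f", src_namespace="payments",
)
print(json.dumps(asdict(record)))
```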
What Is NOT Collected
- No payload inspection — only connection metadata (IPs, ports, byte counts)
- No application data — no HTTP headers, request bodies, or response content
- No secrets or environment variables — the operator has no access to pod specs
- Read-only — the operator makes no changes to your cluster
The operator source code is publicly available at github.com/devzero-inc/zxporter.
Technical Deep Dive
Collection Modes
The operator supports two collection backends:
Netfilter (conntrack) — The default mode. Reads the Linux kernel's connection tracking table via Netlink. This table tracks every TCP/UDP connection flowing through the node, including byte and packet counters. The operator enables conntrack accounting (nf_conntrack_acct) automatically.
eBPF — An alternative mode that attaches eBPF programs to kernel hooks for network flow capture. Provides the same data as netfilter but with lower overhead on high-traffic nodes. Requires kernel BTF (BPF Type Format) support.
Both modes produce the same output: a list of network flows with source/destination IPs, ports, protocols, and byte/packet counters.
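To make the conntrack output concrete: with accounting enabled, each entry in the kernel's table carries per-direction packet and byte counters. The sketch below parses one `conntrack -L`-style line (the sample line is illustrative); the first src/dst/counter group is the original direction, the second is the reply direction.

```python
import re

# One illustrative conntrack entry (accounting enabled). The original
# direction comes first, the reply direction second.
LINE = ("tcp      6 431999 ESTABLISHED src=10.0.1.5 dst=10.0.2.9 "
        "sport=51234 dport=443 packets=10 bytes=8400 "
        "src=10.0.2.9 dst=10.0.1.5 sport=443 dport=51234 "
        "packets=8 bytes=52000 [ASSURED] mark=0 use=1")

def parse_conntrack(line: str) -> dict:
    """Extract key=value fields; repeated keys belong to the reply direction."""
    out, seen = {}, set()
    for key, value in re.findall(r"(\w+)=(\S+)", line):
        name = key if key not in seen else "reply_" + key
        seen.add(key)
        out[name] = value
    return out

entry = parse_conntrack(LINE)
print(entry["bytes"], entry["reply_bytes"])  # 8400 52000
```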
DNS Tracing
Understanding where traffic is going requires more than just IP addresses. The operator uses an eBPF program attached to cgroup ingress to intercept DNS responses as they arrive at pods. This lets us build an IP-to-domain mapping without:
- Modifying CoreDNS or any cluster DNS configuration
- Intercepting or proxying DNS queries
- Inspecting application traffic
When a pod resolves api.stripe.com to 52.54.61.135, the operator records that mapping and includes it when flushing metrics. The DevZero platform uses this to label traffic with human-readable domain names.
DNS mappings are cached with a 2-minute TTL and refreshed as new lookups occur.
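The cache behavior can be sketched as a small TTL map (illustrative Python; class and method names are assumptions, not the operator's API):

```python
import time

TTL_SECONDS = 120  # mappings expire after 2 minutes unless refreshed

class DNSCache:
    """Minimal sketch of an IP -> domain cache with a 2-minute TTL."""

    def __init__(self, now=time.monotonic):
        self._now = now
        self._entries = {}  # ip -> (domain, expires_at)

    def record(self, ip: str, domain: str) -> None:
        # Called when a traced DNS response maps a domain to this IP;
        # re-recording refreshes the TTL.
        self._entries[ip] = (domain, self._now() + TTL_SECONDS)

    def lookup(self, ip: str):
        entry = self._entries.get(ip)
        if entry is None:
            return None
        domain, expires = entry
        if self._now() >= expires:
            del self._entries[ip]
            return None
        return domain

clock = [0.0]  # injected clock so the example is deterministic
cache = DNSCache(now=lambda: clock[0])
cache.record("52.54.61.135", "api.stripe.com")
print(cache.lookup("52.54.61.135"))  # api.stripe.com
clock[0] = 121.0
print(cache.lookup("52.54.61.135"))  # None (expired)
```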
Pod Metadata Enrichment
The operator runs a Kubernetes informer filtered to pods on its node. When a network flow is observed, the source and destination IPs are correlated with pod metadata (name, namespace) before the metrics are sent to the platform.
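Conceptually, the enrichment step is a lookup against an IP-to-pod index that the informer keeps current (a simplified sketch; the index contents and field names are hypothetical):

```python
# Hypothetical IP -> (pod name, namespace) index for pods on this node,
# maintained by the informer in the real operator.
pod_index = {
    "10.0.1.5": ("checkout-7d9f", "payments"),
    "10.0.1.8": ("worker-2kqx", "jobs"),
}

def enrich(flow: dict) -> dict:
    """Attach pod name/namespace for any flow IP that belongs to a local pod."""
    for side in ("src", "dst"):
        pod = pod_index.get(flow.get(f"{side}_ip", ""))
        if pod:
            flow[f"{side}_pod"], flow[f"{side}_namespace"] = pod
    return flow

flow = enrich({"src_ip": "10.0.1.5", "dst_ip": "52.54.61.135"})
print(flow["src_pod"], flow["src_namespace"])  # checkout-7d9f payments
```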
DaemonSet Architecture
The operator runs as a DaemonSet with the following configuration:
- `hostNetwork: true` — Required to see node-level conntrack entries
- Privileged container — Required for conntrack and eBPF access
- Capabilities: `NET_ADMIN`, `SYS_RESOURCE` — For conntrack table access and eBPF memory limits
- Tolerations: all taints — Ensures the operator runs on every node, including control plane nodes
Resource footprint:
- Requests: 50m CPU, 64Mi memory
- Limits: 100m CPU, 128Mi memory
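In manifest form, those settings might look roughly like this (an illustrative excerpt with placeholder names, not the actual manifest; see the zxporter repository for the real one):

```yaml
# Illustrative DaemonSet excerpt; image name and labels are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: zxporter-netmon
spec:
  selector:
    matchLabels: {app: zxporter-netmon}
  template:
    metadata:
      labels: {app: zxporter-netmon}
    spec:
      hostNetwork: true              # see node-level conntrack entries
      tolerations:
        - operator: Exists           # run on every node, incl. control plane
      containers:
        - name: netmon
          image: zxporter-netmon:latest  # placeholder image reference
          securityContext:
            privileged: true         # conntrack and eBPF access
            capabilities:
              add: ["NET_ADMIN", "SYS_RESOURCE"]
          resources:
            requests: {cpu: 50m, memory: 64Mi}
            limits: {cpu: 100m, memory: 128Mi}
```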
IPv6 support is not yet available. The operator currently tracks IPv4 connections only. If IPv6 monitoring is important to your environment, please let us know at support@devzero.io.
Destination Resolution
When metrics arrive at the DevZero platform, each destination IP is resolved through a cascading lookup:
1. Pod — Is this IP assigned to a pod in the cluster?
2. Service — Is this a Kubernetes ClusterIP service?
3. Node — Is this a node IP?
4. Endpoint — Is this an endpoint backing a service?
5. External — If none of the above, classify by IP type (public vs. private)
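The cascading lookup can be sketched as follows. The in-memory indexes here are hypothetical stand-ins; the real platform resolves against live cluster state. The public/private split uses standard IP address classification.

```python
import ipaddress

# Hypothetical cluster indexes, checked in cascade order.
POD_IPS      = {"10.0.1.5":   "pod/payments/checkout-7d9f"}
SERVICE_IPS  = {"172.20.14.3": "service/payments/checkout"}
NODE_IPS     = {"10.0.0.10":  "node/ip-10-0-0-10"}
ENDPOINT_IPS = {"10.0.2.9":   "endpoint/payments/checkout"}

def resolve(ip: str) -> str:
    """Cascading lookup: Pod -> Service -> Node -> Endpoint -> External."""
    for index in (POD_IPS, SERVICE_IPS, NODE_IPS, ENDPOINT_IPS):
        if ip in index:
            return index[ip]
    addr = ipaddress.ip_address(ip)
    return "external/private" if addr.is_private else "external/public"

print(resolve("10.0.1.5"))      # matches the pod index
print(resolve("52.54.61.135"))  # public IP -> internet egress
print(resolve("10.99.0.7"))     # private, unmatched -> peering candidate
```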
Public IPs (Internet Egress)
If the destination IP is public (globally routable), the traffic is classified as internet egress. If the operator also captured a DNS lookup for that IP, the domain name is attached (e.g., api.stripe.com, s3.amazonaws.com).
Peering & Private Connectivity
Private IPs that don't match any cluster resource are resolved to domain names via the operator's captured DNS mappings, then checked against known cloud provider domain patterns:
| Pattern | Classification | Examples |
|---|---|---|
| VPC Endpoints / PrivateLink | private_link | vpce-*.vpce-svc-*.*.vpce.amazonaws.com, *.privatelink.*.azure.com |
| Kubernetes Control Plane | k8s_control_plane | *.eks.amazonaws.com, *.azmk8s.io, *.gke.googleapis.com |
| Cloud Service APIs | cloud_api | *.amazonaws.com, *.googleapis.com, *.vault.azure.net |
| Internal Load Balancers | load_balancer | internal-*.elb.amazonaws.com |
| Instance Metadata (IMDS) | cloud_api | 169.254.169.254 |
This classification is used for both labeling traffic in the dashboard and applying the correct cost rates. See Network Cost Model for details on how costs are calculated.
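The wildcard matching above can be sketched like so (a subset of the patterns from the table; the first-match ordering, with broad patterns last, is an assumption for this example):

```python
from fnmatch import fnmatch

# Subset of the patterns above; first match wins, so narrow patterns
# precede broad ones like *.amazonaws.com.
PATTERNS = [
    ("vpce-*.vpce-svc-*.*.vpce.amazonaws.com", "private_link"),
    ("*.privatelink.*.azure.com", "private_link"),
    ("*.eks.amazonaws.com", "k8s_control_plane"),
    ("*.azmk8s.io", "k8s_control_plane"),
    ("internal-*.elb.amazonaws.com", "load_balancer"),
    ("*.amazonaws.com", "cloud_api"),
    ("*.googleapis.com", "cloud_api"),
]

def classify(domain: str) -> str:
    for pattern, label in PATTERNS:
        if fnmatch(domain, pattern):
            return label
    return "unknown"

print(classify("abc123.eks.amazonaws.com"))       # k8s_control_plane
print(classify("internal-x.elb.amazonaws.com"))   # load_balancer
print(classify("sqs.us-east-1.amazonaws.com"))    # cloud_api
```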